TL;DR: I'll be in Bern on September 9th. If you'd like to drop by and discuss network design or automation challenges, read on…Read more ...
This guest post is by Drew Conry-Murray, Director of Content & Community at Interop and a good friend of the Packet Pushers. SPECIAL NOTE: Interop is offering the Packet Pushers community a 25% discount on Total Access and Conference Passes or a FREE Expo Pass for the New York show. Register today with the code PACKETP to receive the discount. The […]
Containers virtualize at the operating system level; hypervisors virtualize at the hardware level. Hypervisors abstract the operating system from the hardware, while containers abstract the application from the operating system. A hypervisor consumes storage space for each instance. Containers use a single storage space plus smaller deltas for each layer, and are thus much more efficient. Containers can boot and be […]
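To see those per-layer deltas for yourself, here is a quick illustration (assuming Docker is installed; the image name is just an example):

docker pull nginx
docker history nginx

Each line of the history output is one layer with its size; layers shared with other images are stored on disk only once.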
A chance dinner conversation at Wireless Field Day 7 with George Stefanick (@WirelesssGuru) and Stewart Goumans (@WirelessStew) made me think about the implications of IPv6 in healthcare. IPv6 adoption hasn’t been very widespread, thanks in part to the large number of embedded devices that have basic connectivity. Basic in this case means “connected with an IPv4 address”. But that address can lead to some complications if you aren’t careful.
In a hospital environment, the units that handle medicine dosing are connected to the network. This allows the staff to program them to dispense medications to patients properly. Given the IP address of the unit in a room, staff can ensure that a patient is getting just the right amount of painkillers and not an overdose. Ensuring a device gets the same IP each time is critical to making this process work. George said he has recommended that staff stop using DHCP to assign addresses automatically and instead move to static IP configuration, so there is never a situation where a patient inadvertently receives a fatal megadose of medication, such as when an adult med unit is accidentally used in a pediatric application.
This static policy does lead to network complications. Units removed from their proper location are rendered unusable because they now have the wrong IP. Worse yet, since those units no longer check in with the central system, they could conceivably be incorrectly configured. At best this will generate a support call to the IT staff. At worst…well, think lawsuit. Not to mention what happens if there is a major change to gateway information: that would necessitate massive manual reconfiguration and downtime until those units are fixed.
Cut Me Some SLAAC
This is where IPv6 comes into play, especially Stateless Address Autoconfiguration (SLAAC). With an automatically configured address that never changes, this equipment need never drop off the network. It will always be checked in. There will be little chance of the unit dispensing the wrong amount of medication, and the unit's history will always be available via the same IPv6 address.
There are challenges, to be sure. IPv6 support isn't cheap or easy. In the medical industry, innovation happens at a snail's pace. These devices are just now starting to get wireless connectivity for mobile use. Asking the manufacturers to add IPv6 to their networking stacks is going to take years of development at best.
Having the equipment attached all the time also brings up issues with moving a unit to the wrong area and potentially creating a fatal situation. Thankfully, router advertisements can help there. If the RA for a given subnet locks the unit into a given prefix, controls can be enacted on the central system to ensure that devices in that prefix range are never allowed to dispense medication above or below a certain amount. While this is more of a configuration on the medical unit side, IPv6 provides the predictability needed to ensure those devices can be found and cataloged. Since a SLAAC-addressed device using EUI-64 always derives the same interface identifier, you never have to guess which device got a specific address. No matter the prefix, the last 64 bits always tell you which device you are speaking to.
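As a concrete sketch (the prefix, interface, and MAC address below are illustrative, not from any real hospital deployment), the ward router only needs to advertise its per-ward prefix and the units build their own addresses via SLAAC. Cisco IOS-style configuration:

ipv6 unicast-routing
interface GigabitEthernet0/1
 description Pediatric ward - medication units
 ipv6 address 2001:db8:10:1::1/64
 ! advertise the ward prefix in router advertisements
 ipv6 nd prefix 2001:db8:10:1::/64

A unit with MAC address 00:1B:44:11:3A:B7 always derives the EUI-64 interface identifier 021b:44ff:fe11:3ab7 (flip the universal/local bit of the first octet, insert ff:fe in the middle), so it becomes 2001:db8:10:1:21b:44ff:fe11:3ab7 in this ward and keeps the same last 64 bits under any other prefix.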
Healthcare is a very static industry when it comes to innovation. Medical companies are trying to keep pace with technology advances while at the same time ensuring that devices are safe and do not threaten the patients they are supposed to protect. IPv6 can give us an extra measure of safety by ensuring devices receive the same address every time. It also gives the consistency needed to compile proper reporting about the operation of a device, and even the ability to find that device when it is moved to an improper location. Thanks to SLAAC and IPv6, one day these networking technologies might just save your life.
In Part 2 we did the initial ISATAP configuration for our Cisco router. Here we'll show the config we use on our Windows clients and servers.

netsh interface isatap set router 203.0.113.30
netsh interface isatap set state enabled

Normally I tell system admins to never hard-code IP addresses into their applications; always use DNS names! […]
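Following that advice, the ISATAP router can also be referenced by DNS name instead of address (the hostname below is hypothetical):

netsh interface isatap set router isatap.example.com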
A while ago I wrote about the idea of treating network infrastructure (and all other infrastructure) as code, and using the same processes application developers are using to write, test and deploy code to design and implement networks.
That approach clearly works well if you can virtualize (and clone ad infinitum) everything. We can virtualize appliances or even routers, but installed equipment and high-speed physical infrastructure remain somewhat resistant to that idea. We need a different paradigm, and the best analogy I could come up with is a database.Read more ...
A reader sent me this question:
My company will have 10GE dark fiber across our DCs with possibly OTV as the DCI. The VM team has also expressed interest in DC-to-DC vMotion (<4ms). Based on your blogs it looks like overall you don't recommend long-distance vMotion across DCI. Will the "Data Center trilogy" package be the right fit to help me better understand why?
Unfortunately, long-distance vMotion seems to be a persistent craze that peaks with a predictable period of approximately 12 months, and while it seems nothing can inoculate your peers against it, having technical arguments might help.Read more ...
Notes on the CheckPoint firewall clustering solution based on a review of the documentation in August 2014.
The post Tech Notes: CheckPoint Firewall Cluster XL in 2014 appeared first on EtherealMind.
My good friend Tiziano complained about the fact that BGP considers a next hop reachable if there's an entry for it in the IP routing table, even though the router cannot even ping that next hop.
That behavior is one of the fundamental aspects of IP networks: networks built with IP routing protocols rely on fate sharing between control and data planes instead of path liveliness checks.Read more ...
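A minimal IOS-style sketch of that fate sharing (all addresses and AS numbers are made up): a static default route makes any BGP next hop resolvable, ping or no ping, and a real liveliness check has to be bolted on separately, for example with BFD:

! Any next hop now resolves through the default route
ip route 0.0.0.0 0.0.0.0 192.0.2.254
router bgp 65000
 neighbor 198.51.100.1 remote-as 65001
 ! optional: tie the session to an actual forwarding-path check
 neighbor 198.51.100.1 fall-over bfd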
On Earth Day in 1990, New York City's Transportation Commissioner decided to close 42d Street, which as every New Yorker knows is always congested. "Many predicted it would be doomsday," said the Commissioner, Lucius J. Riccio. "You didn't need to be a rocket scientist or have a sophisticated computer queuing model to […]
Orhan has more than 10 years of experience in IT and has worked on many network design and deployment projects.
In addition, Orhan is a:
Blogger at Network Computing.
Blogger and podcaster at Packet Pushers.
Manager of Google CCDE Group.
On Twitter @OrhanErgunCCDE
After a week of testing, I decided to move the main ipSpace.net web site (www.ipspace.net) as well as some of the resource-serving hostnames to the CloudFlare CDN. Everything should work fine, but if you experience any problems with my web site, please let me know ASAP.
2014-08-27: Had to turn off CloudFlare (and thus IPv6). They don't seem to support HTTP range requests, which makes video startup time unacceptable. I will have to move all video URLs (where HTTP range requests are expected from streaming clients) to a different host name, which will take time.
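For reference, this is the kind of request a streaming client issues (the URL is hypothetical); a server or CDN that honors range requests answers 206 Partial Content, while one that ignores them returns 200 and the whole file:

curl -s -o /dev/null -w "%{http_code}\n" -H "Range: bytes=0-1023" http://example.com/video.mp4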
Collateral benefit: ipSpace.net is now fully accessible over IPv6 – register for the Enterprise IPv6 101 webinar if you think that doesn’t matter ;)
I first met Elisa Jasinska when she had one of the coolest job titles I ever saw: Senior Packet Herder. Her current job title is almost as cool: Senior Network Toolsmith @ Netflix – obviously an ideal guest for the Software Gone Wild podcast.
One of the confusing aspects of Internet operation is the difference between the types of providers and the types of peering. There are three primary types of peering, and three primary types of services that service providers actually provide. The figure below illustrates the three different kinds of peering. One provider can agree to provide transit […]
Building a private cloud infrastructure tends to be a cumbersome process: even if you do it right, you often have to deal with four to six different components: orchestration system, hypervisors, servers, storage arrays, networking infrastructure, and network services appliances.Read more ...
Last week in Chicago, at the annual SIGCOMM flagship research conference on networking, Arbor collaborators presented some exciting developments in the ongoing story of the IPv6 rollout. This joint work (full paper here) between Arbor Networks, the University of Michigan, the International Computer Science Institute, Verisign Labs, and the University of Illinois highlighted how both the pace and the nature of IPv6 adoption have shifted dramatically in just the last couple of years. This study is a thorough, well-researched, effective analysis and discussion of numerous published and previously unpublished measurements focused on the state of IPv6 deployment.
The study examined a decade of data, reporting twelve measures drawn from ten global-scale Internet datasets, including several years of Arbor data representing a third to a half of all interdomain traffic. This constitutes one of the longest and broadest published measurements of IPv6 adoption to date. Using this long and wide perspective, the University of Michigan, Arbor Networks, and their collaborators found that IPv6 adoption, relative to IPv4, varies by two orders of magnitude (100x!) depending on the measure one looks at; because of this, care must be taken when looking at individual measurements of IPv6. For example, examining only the fraction of IPv6 to IPv4 traffic, which is still just shy of 1%, is misleading, since virtually all other indicators show that IPv6 is much more ready for use and able to grow very quickly.
In the study, differences in IPv6 deployment across global regions were also apparent. This suggests that both the incentives and obstacles to adopt the new protocol vary in different parts of the world.
Most surprisingly, the team found that over the last three years the nature of IPv6 use, in terms of traffic, content, reliance on transition technology, and performance, has shifted dramatically from prior findings, showing a maturing of the protocol into production mode. For instance, Arbor data shows that the increase in IPv6 traffic relative to IPv4 over each of 2012 and 2013 has been phenomenal, growing more than 400% in each year — a more than quintupling. Arbor data also helped show that *how* people are using IPv6 has likewise evolved immensely, to the point where IPv6 is now largely used natively and mostly for content, neither of which was the case just three years ago.
Interestingly, this study offers a thought-provoking rationale for the high incidence of NNTP and rsync in the IPv6 application mix. Based on the data, the high volumes of NNTP and rsync are likely due in part to synchronization of NNTP and software distribution data between a relatively small number of IPv6-enabled servers residing within the research and education communities. The significant increase of HTTP and HTTPS traffic in the IPv6 application mix could correlate with a much broader increase in IPv6-connected end-user computers accessing IPv6-enabled web servers.
These changes in adoption rate and the nature of IPv6 use come on the heels of several important IPv4 exhaustion milestones (such as the IANA address depletion event), which began in 2011. Thus, the team believes that this new phase of IPv6 rollout might have been spurred, in part, by a growing shortage of IPv4 addressing.
The study’s conclusions regarding the prevalence of untunneled native IPv6 traffic in today’s Internet are significant in that they imply a level of infrastructure readiness for IPv6. Transition technologies played an important “early adopter” role in the evolution of IPv6 technology and it now appears that IPv6 deployment has entered a stage where Internet infrastructures can support native IPv6 traffic.
In closing, the team noted that, together, IPv6's very fast recent growth and the shift in how it is used signal a true quantum leap. Twenty years after it was standardized, it looks like IPv6 is finally becoming real.
For the full presentation shared at SIGCOMM, click here to download.
Many thanks to Jakub Czyz, Scott Iekel-Johnson, Bill Cerveny and Roland Dobbins for assistance with this post!
The Moscone Center in San Francisco is a popular place for technical events. Apple’s World Wide Developer Conference (WWDC) is an annual user of the space. Cisco Live and VMworld also come back every few years to keep the location lively. This year, both conferences utilized Moscone to showcase tech advances and foster community discussion. Having attended both this year in San Francisco, I think I can finally state the following with certainty.
It’s time for tech conferences to stop using the Moscone Center.
Let’s face it. If your conference has more than 10,000 attendees, you have outgrown Moscone. WWDC works in Moscone because they cap the number of attendees at 5,000. VMworld 2014 has 22,000 attendees. Cisco Live 2014 had well over 20,000 as well. Cramming four times the number of delegates into a cramped Moscone Center does not foster the kind of environment you want at your flagship conference.
The main keynote hall in Moscone North is too small to hold the large number of audience members. In an age where every keynote address is streamed live, that shouldn’t be a problem. Except that people still want to be involved and close to the event. At both Cisco Live and VMworld, the keynote room filled up quickly and staff were directing the overflow to community spaces that were already packed too full. Being stuffed into a crowded room with no seating or table space is frustrating. But those are just the challenges of Moscone. There are others as well.
I Left My Wallet In San Francisco
San Francisco isn't cheap. It is one of the most expensive places in the country to live. By holding your conference in downtown San Francisco, you are forcing your 20,000+ attendees into a crowded metropolitan area with expensive hotels. Every time I looked up a hotel room in the vicinity of VMworld or Cisco Live, I was unable to find anything for less than $300 per night. Contrast that with Interop or Cisco Live in Las Vegas, where sub-$100 rooms are available and $200 per night gets you into the hotel attached to the conference center.
Las Vegas is built for conferences. It has adequate inexpensive hotel options. It is designed to handle a large number of travelers arriving at once. While spread out geographically, it is easy to navigate. In fact, except for the lack of Uber, Las Vegas is easier to get around in than San Francisco. I never have a problem finding a restaurant in Vegas that can take a large party. Bringing a group of 5 or 6 to a restaurant in San Francisco all but guarantees you won't find a seat for hours.
The only real reason I can see for holding conferences at Moscone, aside from historical value, is the ease of getting materials and people into San Francisco. Cisco and VMware both are in Silicon Valley. Driving up to San Francisco is much easier than shipping the conference equipment to Las Vegas or Orlando. But ease-of-transport does not make it easy on your attendees. Add in the fact that the lower cost of setup is not reflected in additional services or reduced hotel rates and you can imagine that attendees have no real incentive to come to Moscone.
The Moscone Center is like the Cotton Bowl in Dallas. While both have a history of producing wonderful events, both have passed their prime. They are ill-suited for modern events. They are cramped and crowded. They are in unfavorable areas. For these reasons, it is quickly becoming more difficult to hold events there. But unlike the Cotton Bowl, which has almost 100 years of history, Moscone offers no real reason to stay. Apple will always be there; every new iPhone, Mac, and iPad will be launched there. But those 5,000 attendees are comfortable in one section of Moscone. Subjecting your VMworld and Cisco Live users to these kinds of conditions is unacceptable.
It's time for Cisco, VMware, and other large organizations to move away from Moscone. It's time to recognize that Moscone is not big enough for an event that tries to stuff in every user it can. Instead, conferences should be located where it makes sense. Las Vegas, San Diego, and Orlando are conference towns. Let's use them as they were meant to be used. Let's stop the madness of trying to shoehorn 20,000 important attendees into the sardine can of the Moscone Center.
VMware announced the vCloud Hybrid Service a while back, and it was mostly known as vCheese for short. This week it was rebranded as "vCloud Air Network", and that is too much of a mouthful to keep saying as well. Don't these marketing people live in the real world? Let me share my suggestion…
A few days ago I had an interesting interview with Christoph Jaggi discussing the challenges, changes in mindsets and processes, and other “minor details” one must undertake to gain something from the SDDC concepts. The German version of the interview is published on Inside-IT.ch; you’ll find the English version below.Read more ...
Nexus 1000V release 5.2(1)SV3(1.1) was published on August 22nd (I’m positive that has nothing to do with VMworld starting tomorrow) and I found this gem in the release notes:
Enabling BPDU guard causes the Cisco Nexus 1000V to detect these spurious BPDUs and shut down the virtual machine adapters (the originators of the BPDUs), thereby avoiding loops.