
July 01, 2015

Evil Routers

OpenBSD Violates NTP Pool Guidelines

Note: When I began writing this, I was going to go into a lot more detail, explaining ntpd (the reference implementation), OpenNTPD, the NTP Pool Project, etc. I may do that in a follow-up post but, for now, I’ll keep it short and to the point.

The NTP Pool Project has a set of “basic guidelines” whose intended audience is “anyone distributing an appliance, operating system or some other kind of software using NTP”. This almost certainly includes OpenBSD (an operating system) which ships with OpenNTPD (an NTP implementation).

The NTP Pool Project's “Basic guidelines” clearly state:

“Do not use the standard pool.ntp.org names as a default configuration in your system.”

A little further down, to reiterate the importance of this, they again state (emphasis in the original):

“You must absolutely not use the default pool.ntp.org zone names as the default configuration in your application or appliance.”

The next sentence informs the reader, “You can apply for a vendor zone here on the site.”

Just like the label on a hair dryer that warns one not to use it while in the shower, these guidelines exist for a reason (cf. “Flawed Routers Flood University of Wisconsin Internet Time Server”, for example).

OpenBSD, however, has chosen to flagrantly ignore them.

OpenNTPD is included as part of OpenBSD’s “base system”. Version 1.13 (dated 2015/05/18) of its default configuration file, /etc/ntpd.conf, includes the following:


# use a random selection of NTP Pool Time Servers
# see http://support.ntp.org/bin/view/Servers/NTPPoolServers
servers pool.ntp.org

N.B.: This isn’t a recent change, either — it’s been there since 2004.

I was going to write to one of the OpenBSD mailing lists about this, thinking that perhaps it was simply an oversight. Before I did, however, I searched the archives to see if it had been discussed before. I discovered that it had — almost five years ago.

In that thread, Theo de Raadt wrote:

We don’t intend to change anything.

Their decrees are meaningless. They don’t provide the ntp traffic,
they only provide DNS records. As written, those rules are designed
to let the people at ntp.org impose a punishing policy against those
they don’t like.

Those rules do not improve time distribution. It is just control
freak behaviour.

This blatant violation of the guidelines is made even worse by what I discovered while writing this:


$ dig +short {0,1,2,3}.openbsd.pool.ntp.org
173.44.32.10
208.75.88.4
66.228.35.252
98.191.213.12
108.61.73.243
96.44.142.5
70.35.113.43
199.7.177.206
129.6.15.28
69.50.219.51
66.175.209.17
65.182.224.39
216.229.4.66
129.250.35.251
192.155.90.13
132.163.4.101

An “openbsd” vendor zone is already available for their use (the sixteen addresses above are four A records from each of the four zone names, 0 through 3). Adherence to the guidelines would require, at minimum, a one-line change to /etc/ntpd.conf:


-servers pool.ntp.org
+servers 0.openbsd.pool.ntp.org
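
For anyone who wants to sanity-check that the vendor zone serves usable time before making that change, here is a minimal sketch using the third-party Python ntplib package. The zone name is real; the check itself is purely illustrative:

import ntplib  # third-party: pip install ntplib
from time import ctime

client = ntplib.NTPClient()
# Query one server drawn from the vendor zone the one-line fix points at.
response = client.request("0.openbsd.pool.ntp.org", version=3)
print("stratum:", response.stratum)
print("server time:", ctime(response.tx_time))
print("local clock offset (s):", response.offset)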

by Jeremy L. Gaddis at July 01, 2015 11:07 PM

Internetwork Expert Blog

Cisco Reverses CCIE Scheduling Policy Changes

As we reported last April, Cisco changed the CCIE Lab Exam retake policy to an exponential backoff, meaning that the more attempts you made at the lab, the longer you had to wait between attempts.

In a sudden change of heart, Cisco announced today that they are reversing their policy change until at least December 31, 2015. Per Cisco:

“For a limited time, we will waive the current lab retake policy so that all lab candidates will be able to retest for their lab exam with only a 30-day wait period. If you register for any CCIE lab exam between now and December 31, 2015, you will have the option of retaking the exam with only a 30-day wait regardless of the number of attempts you may have already made.”

Frequently Asked Questions about the policy changes:

Q: Does this mean that between now and December 31, I can take the lab every 30 days?
A: Yes.

Q: Is the original policy back in place after December 31?
A: What happens after December 31 is dependent on the results of our research from now until that date.

Q: What does this mean if my current wait period is 90 days and I’m in the middle of the waiting period? Can I sign up now or do I have to continue to wait?
A: Yes, you can sign up now. You do not have to wait. The policy that is active at the time you schedule your lab will determine the time you have to wait. If you are beyond the 30-day wait period, you can book the earliest available seat you find.

Q: What if I’m already scheduled for a lab that I had to schedule out 90 days because of the original policy?
A: You will have the option to reschedule your lab attempt to an earlier date through the system.

by Brian McGahan, CCIE #8593, CCDE #2013::13 at July 01, 2015 04:11 PM

CCIE SPv4 Advanced Technologies Class Continues Today

INE’s CCIE Service Provider v4 Advanced Technologies Class continues today at 08:00 PDT (15:00 UTC) with Inter-AS MPLS L3VPN. All Access Pass subscribers can attend at http://live.INE.com. Recordings of some of the previous class sessions are now available via the AAP library here.

Additionally, INE’s CCIE SPv4 Workbook is now available in beta format here.

Hope to see you in class!

by Brian McGahan, CCIE #8593, CCDE #2013::13 at July 01, 2015 02:34 PM


June 30, 2015

Internetwork Expert Blog

CCIE RSv5 Lab Cram Session & New CCIE RSv5 Mock Labs Now Available

INE’s CCIE RSv5 Lab Cram Session is now available for viewing in our All Access Pass Library. This course includes over 35 hours of new content for CCIE Routing & Switching Version 5, including technology review sessions as well as a step-by-step walkthrough of two new CCIE RSv5 Mock Lab Exams. These new Mock Labs are available here as part of INE’s CCIE RSv5 Workbook.

This class is designed as a last-minute review of technologies and strategy before taking the actual CCIE RSv5 Lab Exam. Each of the two Mock Labs covered in class is subdivided into the same three sections as the actual exam: Troubleshooting, Diagnostics, and Configuration.

Rack rentals are available for these mock labs here. Technical discussion of the labs is through our Online Community, IEOC.

Happy Labbing!

by Brian McGahan, CCIE #8593, CCDE #2013::13 at June 30, 2015 04:06 PM

The Networking Nerd

Cisco and OpenDNS – The Name Of The Game?


This morning, Cisco announced their intent to acquire OpenDNS, a security-as-a-service (SaaS) provider built around the idea of using the Domain Name System (DNS) as a method for preventing the spread of malware and other exploits. I’ve used the free OpenDNS offering in the past as a way to offer basic web filtering to schools without funds, and I’ve used OpenDNS at home for speedy name resolution when my local name servers have failed me miserably.

This acquisition is curious to me. It seems to be a line of business that is totally alien to Cisco at this time. A couple of interesting opportunities have arisen from the discussions around it, though.

Internet of Things With Names

The first and most obvious synergy between Cisco and OpenDNS is around the Internet of Things (IoT), or the Internet of Everything (IoE) as Cisco has branded their offering. IoT/IoE has gotten a huge amount of attention from Cisco in the past 18 months as more and more devices come online, from thermostats to appliances to light sockets. The number of formerly dumb devices that now have wireless radios and computers to send information is staggering.

All of those devices depend on certain services to work properly. One of those services is DNS. IoT/IoE devices aren’t going to use bare IP addresses to communicate with cloud servers, because IoT relies on public cloud offerings to host the services that devices and dashboards talk to. As I said last year, capacity and mobility can be ensured by using AWS, Google Cloud, or Azure to host the servers with which IoT/IoE devices communicate.

The easiest way to communicate with AWS instances is via DNS names. That lets a service stay mobile and fault tolerant, which is critical to ensuring the service never goes down. Losing your laptop or your phone for a few minutes is annoying but survivable. Losing a thermostat or a smoke detector is a safety hazard. Services that need to be resilient need to use DNS.

More than that, with control of OpenDNS, Cisco now has a walled DNS garden that they can populate with Cisco service entries. Rather than allowing IoT/IoE devices to inherit local DNS resolution from a home ISP, they can hard-code the DNS name servers in the device and ensure that the only resolution used will be controlled by Cisco. This means they can activate new offerings and services and ensure that they are reachable by the devices. It also allows them to police the entries in DNS and prevent people from creating “workarounds” to enable or disable features and functions. Walled-garden DNS is as important to IoT/IoE as the walled-garden app store is to mobile devices.
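
To make the walled-garden mechanics concrete, here is a minimal sketch (using the third-party dnspython package) of what a device with hard-coded resolvers effectively does: it ignores whatever the local ISP hands out and asks the pinned servers instead. The two resolver IPs are OpenDNS’s long-published public addresses; the queried name is just a placeholder.

import dns.resolver  # third-party: pip install dnspython

# Ignore the resolvers learned from DHCP/the ISP and pin OpenDNS's
# public resolvers, the way a hard-coded IoT device would.
resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]

for rr in resolver.resolve("example.com", "A"):
    print(rr.address)

Whoever controls those pinned servers controls which names the device can resolve, and to what.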

Predictive Protection

The other offering hinted at in the acquisition post from Cisco talks about the professional offerings from OpenDNS. The OpenDNS Umbrella security service helps enterprises protect themselves from malware and security breaches through control and visibility. There is also a significant amount of security intelligence available due to the amount of traffic OpenDNS processes every day. This gives them insight into the state of the Internet as well as sourcing infection vectors and identifying threats at their origin.

Cisco hopes to utilize this predictive intelligence in their security products to aid in fast identification and mitigation of threats. By combining OpenDNS with SourceFire and IronPort, the hope is that this giant software machine will be able to protect customers faster, before they get exposed, embarrassed, or even sued for negligence.

The part that worries me about that superior predictive intelligence is how it’s gathered. If the only source of that information comes from paying OpenDNS customers, then everything should be fine. But I can almost guarantee that users of the free OpenDNS service (like me) are also information sources. That makes the most business sense for them: free users provide the data for the paid service, paid users are happy with the level of intelligence they get, and those paying users fund the free users’ continued access at no cost. Win/win for everyone, right?

But what happens if Cisco decides to end the free offering from OpenDNS? Let’s think about that a little. If free users are locked out of OpenDNS, or required to pay even a small nominal fee, their source of information disappears from the database. Losing that information reduces the visibility OpenDNS has into the Internet and slows their ability to identify and vector threats quickly. Paying users then lose effectiveness of the product and start leaving in droves. That loss accelerates the decay of the intelligence. Any products relying on this intelligence also become less effective. A downward spiral of disaster.


Tom’s Take

The solution for Cisco is very easy. In order to keep the effectiveness of OpenDNS and their paid intelligence offerings, Cisco needs to keep the free offering and not lock users out of using their DNS name servers at no cost. Adding IoT/IoE into the equation helps somewhat, but Cisco has to have the information from the small enterprises and schools that use OpenDNS. It benefits everyone for Cisco to let OpenDNS operate just as it has for the past few years. Cisco gains significant intelligence for their security offerings. They also gain the OpenDNS customer base, to whom they can sell new security devices. And free users gain the staying power of a brand like Cisco.

Thanks to Greg Ferro (@EtherealMind), Brad Casemore (@BradCasemore) and many others for the discussion about this today.


by networkingnerd at June 30, 2015 01:46 PM

My Etherealmind

Musing: Virtual Appliances and Shorter Lifecycles

I’ve been writing and talking about the need for IT teams to reduce the lifecycle of infrastructure to 3 years. For this to happen, the following must hold: pay less for products so that money can be spent on projects to replace and upgrade; pay less so that ROI can be achieved in 3 years; design so […]

The post Musing: Virtual Appliances and Shorter Lifecycles appeared first on EtherealMind.

by Greg Ferro at June 30, 2015 09:39 AM

June 29, 2015

Networking Now (Juniper Blog)

Whack-a-Hacker

Being a security professional these days may seem to some like a never-ending game of Whack-a-Mole. Once one problem, vulnerability, or intrusion is taken care of, it seems inevitable that another pops up that needs whacking into submission.

by semo at June 29, 2015 06:00 PM


June 26, 2015

Network Design and Architecture

If the system lets you make the error, it is badly designed

Availability of a system is mainly measured with two parameters: mean time between failures (MTBF) and mean time to repair (MTTR). MTBF is calculated as the average time between failures of a system. MTTR is the average time required to repair a failed component (a link, node, or device, in networking terms). Operator mistakes are widely seen as… Read More »
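
The standard relationship between those two figures, which the excerpt stops short of, is Availability = MTBF / (MTBF + MTTR). A quick illustration with invented numbers:

# Availability = MTBF / (MTBF + MTTR); the figures below are made up.
mtbf_hours = 10_000   # mean time between failures
mttr_hours = 4        # mean time to repair
availability = mtbf_hours / (mtbf_hours + mttr_hours)
print(f"{availability:.4%}")  # 99.9600%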

The post If the system lets you make the error, it is badly designed appeared first on Network Design and Architecture.

by orhanergun at June 26, 2015 11:03 AM

My Etherealmind

Bi-Modal IT Bemusement – I Call It Project-Driven IT

I’ve been much amused by the Bi-Modal IT concept that Gartner coughed up a few months back. Bimodal IT refers to having two modes of IT, each designed to develop and deliver information- and technology-intensive services in its own way. Mode 1 is traditional, emphasizing scalability, efficiency, safety and accuracy. Mode 2 is nonsequential, emphasizing agility and speed. […]

The post Bi-Modal IT Bemusement – I Call It Project-Driven IT appeared first on EtherealMind.

by Greg Ferro at June 26, 2015 10:32 AM


June 24, 2015

Network Design and Architecture

Understanding the real problems for Network Design

Designers should be trained to understand the real problems. An excellent solution to the wrong problem is worse than no solution. As a designer, you shouldn’t start by trying to solve the problem given to you, and you shouldn’t try to find the best design for the given problem. You should try to understand the real issues… Read More »

The post Understanding the real problems for Network Design appeared first on Network Design and Architecture.

by orhanergun at June 24, 2015 01:05 PM

Network Design and Architecture

Do you really need Quality of Service?

Quality of service (QoS) is the overall performance of a telephony or computer network, particularly the performance seen by the users of the network. That is the Quality of Service definition from Wikipedia. Performance metrics can be bandwidth, delay, jitter, packet loss, and so on. Two Quality of Service approaches have been defined by… Read More »

The post Do you really need Quality of Service ? appeared first on Network Design and Architecture.

by orhanergun at June 24, 2015 11:52 AM

Potaroo blog

More Leaky Routes

Most of the time, mostly everywhere, most of the Internet appears to work just fine. Indeed, it seems to work well enough that when it goes wrong in a significant way, it becomes fodder for headlines in the industry press. But there are some valuable lessons to be learned from these route leaks about approaches to routing security.

June 24, 2015 01:00 AM


June 23, 2015

The Data Center Overlords

The Cloud Is Now A Thing

In the networking world, we’re starting to see the term “cloud” more and more. When I teach classes, if I so much as mention the word cloud, I start to see some eyes roll. That’s completely understandable, as the term cloud was such an overused buzzword, having only recently been supplanted by “software defined”.

Here’s real-life supervillain (dude owns a MiG-29 and an island with a volcano on it… seriously) Larry Ellison freaking out about the term cloud:

“It’s not water vapor! All it is, is a computer attached to a network!”

But here’s the thing: it’s actually a thing now. Rather than a catch-all buzzword, it’s being used more and more to define a particular type of operational model. And it’s defined by NIST, the National Institute of Standards and Technology, part of the US Department of Commerce. With the term cloud, we now get a higher degree of specificity.

The NIST definition of cloud is as follows:

  • On-demand self-service
  • Broad network access
  • Resource pooling (multi-tenant)
  • Rapid elasticity
  • Measured service

That first item on the list, on-demand self-service, is a huge change in how we will be doing networking. Right now, network configurations are mostly done by network administrators. If you have a network need and aren’t a network admin, you open up a ticket and wait.

In (private) cloud computing, which will include a large networking component, the network elements, end points, and devices will be configured by end users and developers, not the IT staff. The IT staff will maintain the overall cloud infrastructure, but will not make the day-to-day changes. The changes will happen far too frequently, and they will happen in the middle of the day. Change control will probably still apply to the underlying infrastructure, but the tenants will likely make many changes during the day. The fault domains will be a lot smaller, so a mistake impacts only a small segment, and automation will make it much more likely that a change (such as adding a new load-balancing VIP) is done correctly.

This is how things have been done in public clouds (Amazon, Rackspace, etc.) for a while now.

When people talk about the death of the CLI, this is what they’re referring to. The configuration changes we make won’t be on a Cisco or Juniper CLI, but through some sort of portal (which can be either GUI, CLI, or API calls) and will be largely automated. We’ve hit the twilight of the age of Conf T.
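
As a sketch of what that self-service looks like in practice, here is the kind of REST call a tenant might make to add a load-balancing VIP through such a portal. The endpoint, payload, and token are entirely hypothetical; no particular vendor’s API is implied.

import requests  # third-party: pip install requests

# Hypothetical tenant-facing API; illustrative only.
API = "https://cloud.example.com/api/v1"

resp = requests.post(
    f"{API}/tenants/dev-team/load-balancers/vips",
    json={"name": "web-prod-vip", "address": "10.1.1.10", "port": 443},
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # the newly created VIP object

The schema doesn’t matter; what matters is that the tenant makes the change immediately, with no ticket in the loop.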

With OpenStack, Docker, CoreOS, containers, DevOps, ACI, NSX, and all of the new operational models, technologies, and platforms, the next generation data center will be a self-service data center.


by tonybourke at June 23, 2015 11:38 PM

The Networking Nerd

The IPv6 Revolution Will Not Be Broadcast


There are days when IPv6 proponents have to feel like Chicken Little. Ever since the final allocation of the last /8s to the RIRs over four years ago, we’ve been saying that the switch to IPv6 needs to happen soon before we run out of IPv4 addresses to allocate to end users.

As of yesterday, ARIN (@TeamARIN) has 0.07 /8s left to allocate to end users. What does that mean? Realistically, according to this ARIN page, it means there are 3 /21s left in the pool, and around 450 /24s. The availability of even those addresses is in doubt, as there are quite a few requests in the pipeline. I’m sure ARIN is now worried that a request they can’t fulfill is already sitting in their queue.
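
For scale, the arithmetic behind those block sizes (a /n block holds 2^(32−n) addresses; the counts are the figures above):

def block_size(prefix_len: int) -> int:
    """Number of IPv4 addresses in a block with the given prefix length."""
    return 2 ** (32 - prefix_len)

print(block_size(21))                             # 2048 addresses per /21
print(block_size(24))                             # 256 addresses per /24
print(3 * block_size(21) + 450 * block_size(24))  # 121344 addresses in total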

The sky has indeed fallen for IPv4 addresses. I’m not going to sit here and wax alarmist. My stance on IPv6 and the need to transition is well known. What I find very interesting is that the transition is not only well underway, but it may have found the driver needed to see it through to the end.

Mobility For The Masses

I’ve said before that the driver for IPv6 adoption is going to be an IPv6-only service that forces providers to adopt the standard because of customer feedback. Greed is one of the two most powerful motivators. However, fear is an equally powerful motivator. And fear of having millions of mobile devices roaming around with no address support is an equally unwanted scenario.

Mobile providers are starting to move to IPv6-only deployments for mobile devices. T-Mobile does it. So does Verizon. If a provider doesn’t already offer IPv6 connectivity for mobile devices, you can be assured it’s on their roadmap for adoption soon. The message is clear: IPv6 is important in the fastest growing segment of device adoption.

Making mobile devices the sword for IPv6 adoption is very smart. When we talk about the barriers to entry for IPv6 in the enterprise we always talk about outdated clients. There are a ton of devices that can’t or won’t run IPv6 because of an improperly built networking stack or software that was written before the dawn of DOS. Accounting for those systems, which are usually in critical production roles, often takes more time than the rest of the deployment.

Mobile devices are different. The culture around mobility has created a device refresh cycle that is measured in months, not years. Users crave the ability to upgrade to the latest device as soon as it is available for sale. Where mobile service providers used to make users wait 24 months for a device refresh, we now see them offering 12 month refreshes for a significantly increased device cost. Those plans are booming by all indications. Users want the latest and greatest devices.

With the desire of users to upgrade every year, the age of the device is no longer a barrier to IPv6 adoption. Since the average age of devices in the wild is almost certainly less than three years, providers can be confident that the capability is there to support IPv6. That makes it much easier to enable support for it across the entire installed base of handsets.

The IPv6 Trojan Horse

Now that providers have a wide range of IPv6-enabled devices on their networks, the next phase of IPv6 adoption can sneak into existence. We have a lot of IPv6-capable devices in the world, but very little IPv6-driven content. Aside from some websites being reachable over IPv6, we don’t really have any services that depend on IPv6.

Thanks to mobile, we have a huge installed base of devices that we now know are IPv6 capable. Since the software for these devices is largely determined by the user base through third-party app development, this is the vector for widespread adoption of IPv6. Rather than trumpeting the numbers, mobile providers and developers can quietly enable IPv6 without anyone even realizing it.

Most app resources must live in the cloud by design. Lots of them live in places like AWS. Service providers enable translation gateways at their edge to translate IPv6 requests into IPv4 requests. What would happen if the providers started offering native IPv6 connectivity to AWS? How would app developers react if there were a faster, native connectivity option to their resources? Given the huge focus on speed for mobile applications, do you think they would continue using a method that forces them through slow translation devices? Or would they jump at the chance to speed up their apps?
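
Those edge gateways are typically NAT64 paired with DNS64: the resolver hands the device an IPv6 address with the IPv4 destination embedded in the low 32 bits. A minimal sketch of that embedding, using the RFC 6052 well-known prefix (64:ff9b::/96):

import ipaddress

# RFC 6052 well-known NAT64 prefix; the last 32 bits carry the IPv4 address.
WKP = ipaddress.IPv6Network("64:ff9b::/96")

def synthesize(ipv4: str) -> ipaddress.IPv6Address:
    """Embed an IPv4 address in the NAT64 well-known prefix, DNS64-style."""
    return ipaddress.IPv6Address(
        int(WKP.network_address) | int(ipaddress.IPv4Address(ipv4))
    )

print(synthesize("192.0.2.33"))  # 64:ff9b::c000:221

Native IPv6 from the app to its cloud endpoints removes that translation hop entirely.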

And that’s the trojan horse. The app itself spurs adoption of IPv6 without the user even knowing what’s happened. When’s the last time you needed to know your IP on a mobile device? Odds are very good it would take you a while to even find out where that information is stored. The app-driven focus of mobile devices has eliminated the need for visibility for things like IP addresses. As long as the app connects, who cares what addressing scheme it’s using? That makes shifting the underlying infrastructure from IPv4 to IPv6 fairly inconsequential.


Tom’s Take

IPv6 adoption is going to happen. We’ve reached the critical tipping point where the increased cost of acquiring IPv4 resources will outweigh the cost of creating IPv6 connectivity. Thanks to the focus on mobile technologies and third-party applications, the IPv6 revolution will happen quietly at night when IPv6 connectivity to cloud resources becomes a footnote in some minor point update release notes.

Once IPv6 connectivity is enabled and preferred in mobile applications, the adoption numbers will go up enough that CEOs focused on Gartner numbers and keeping up with the Joneses will finally get off their collective laurels and start pushing enterprise adoption. Only then will the analyst firms start broadcasting the revolution.


by networkingnerd at June 23, 2015 08:48 PM

My Etherealmind

Concerns about SD-WAN Standards and Interoperability

Ivan raises good points about SD-WAN and interoperability on his blog today. But I think the benefits of SD-WAN are too good to wait ten years for standards to catch up. Oh, and it’s up to you to demand standards from the vendors.

The post Concerns about SD-WAN Standards and Interoperability appeared first on EtherealMind.

by Greg Ferro at June 23, 2015 05:00 PM