November 27, 2015

XKCD Comics

November 26, 2015

Network Design and Architecture

Black Friday Products 50% discounts

Hello everyone. I hope you are well. After this discount news, I think you will feel even better :)

Check the Products page for all the discounted products. You have many payment options and as soon as the transaction finishes you will get the download link.

You can download all resources!

Black Friday discounts will end by Friday night.

If you are considering learning network design, or want to pursue the CCDE or CCDP certifications, I highly recommend checking the Products page regularly. If you subscribe to the email list, you will be notified first.

The post Black Friday Products 50% discounts appeared first on Network Design and Architecture.

by admin at November 26, 2015 10:34 AM

November 24, 2015

My Etherealmind

Briefing: Versa Networks – Managed Service SD-WAN

During the ONUG conference I attended a Tech Field Day event with Versa Networks for an introduction to their Service Provider SD-WAN product.

The post Briefing: Versa Networks – Managed Service SD-WAN appeared first on EtherealMind.

by Greg Ferro at November 24, 2015 09:43 PM

The Networking Nerd

A Voyage of Discover-E



I’m very happy to be attending the first edition of Hewlett-Packard Enterprise (HPE) Discover in London next week. I say the first edition because this is the first major event being held since the reaving of HP Inc from Hewlett-Packard Enterprise. I’m hopeful for some great things to come from this.

It’s The Network (This Time)

One of the most exciting things for me is seeing how HPE is working on their networking department. With the recent news about OpenSwitch, HPE is trying to shift the way of thinking about a switch operating system in a big way. To quote my friend Chris Young:

Vendors today spend a lot of effort re-writing 80% of their code and focus on innovating on the 20% that makes them different. Imagine how much further we’d be if that 80% required no effort at all?

OpenSwitch has some great ideas, like reusing the database from Open vSwitch (OVSDB) as a central system state database. I would love to see more companies use this model going forward. It makes a lot of sense and can provide significant benefits. Time will tell if other vendors recognize this and start using portions of OpenSwitch in their projects. But for now it’s interesting to see what is possible when someone takes a leap of open-sourced faith.

I’m also excited to hear from Aruba, a Hewlett-Packard Enterprise company, and see what new additions they’ve made to their portfolio. The interplay between Aruba and the new HPE Networking will be interesting to follow. I have seen more engagement and discussion coming from HPE Networking now that Aruba has begun integrating themselves into the organization. It’s exciting to have conversations with people involved in the vendor space about what they’re working on. I hope this trend continues with HPE in all areas and not just networking.

Expected To Do Your Duty

HPE is sailing into some very interesting waters. Splitting off the consumer side of the brand does allow the smaller organization to focus on the important things that enterprises need. This isn’t a divestiture. It’s cell mitosis. The behemoth that was HP needed to divide to survive.

I said a couple of weeks ago:

To which it was quickly pointed out that HPE is doing just that. I agree that their effort is impressive. But this is the first time that HP has tried to cut itself to pieces. IBM has done it over and over again. I would amend my original statement to say that no company will be IBM again, including IBM. What you and I think of today as IBM isn’t what Tom Watson built. It’s the remnants of IBM Global Services with some cloud practice acquisitions. The server and PC businesses that made IBM a household name are gone now.

The lesson for HPE, as they try to find their identity in the new post-cleaving world, is to remember what people liked about HP in the enterprise space and focus on keeping that goodwill going. Create a nucleus that allows the brand to continue to build and innovate in new and exciting ways without letting people forget what made you great in the first place.

Tom’s Take

I’m excited to see what HPE has in store for this Discover. There are no doubt going to be lots of product launches and other kinds of things to pique my interest about the direction the company is headed. I’m impressed so far with the changes and the focus back to what matters. I hope the momentum continues to grow into 2016 and the folks behind the wheel of the HPE ship know how to steer into the clear water of success. Here’s hoping for clear skies and calm seas ahead for the good ship Hewlett-Packard Enterprise!



by networkingnerd at November 24, 2015 05:21 PM

Renesys Blog

Explosions Leave Crimea in the Dark



Just after midnight local time on 22 November, saboteurs, presumably allied with Ukrainian nationalists, set off explosives, knocking out power lines to the Crimean peninsula. At 21:29 UTC on 21 November (00:29 local time on 22 November), we observed numerous Internet outages affecting providers in Crimea and causing significant degradation in Internet connectivity in the disputed region.

With Crimean Tatar activists and Ukrainian nationalists currently blocking repair crews from restoring power, Crimea may be looking at as much as a month without electricity as the Ukrainian winter sets in. Perhaps more importantly, the incident could serve as a flash point spurring greater conflict between Ukraine and Russia.


The impacts can be seen in the MRTG traffic volume plot from the Crimea Internet Exchange — the drop-offs are noted with red arrows and followed by intermittent periods of partial connectivity.

Dyn’s latency measurements into Miranda-Media, the Crimean local agent of Russian state operator Rostelecom, show that some parts of the network remain reachable despite the power loss. However, while backup generators may be keeping the networking infrastructure online, that won’t do much good for the people of Crimea if they have no power in their residences and places of work. The following graphic on the left depicts traceroutes entering Crimea via either Rostelecom in Krasnodar, Russia or the Crimean Datagroup fiber network — a network that Rostelecom purchased last year following the annexation of Crimea. The graphic on the right depicts traces into various Crimean Internet providers.


The degree of service degradation varied by provider. Crimea’s Minister of Internal Policy, Information and Communications, Dmitry Polonsky, said that Krymtelekom was the only ISP still operational because it did not rely on power from Ukrainian territory. However, in the following graphic on the left, we can see a significant reduction in the rate of completing traceroutes into Krymtelekom, suggesting considerable initial impact from the loss of power. In the graphic on the right, KerchNET, located on the eastern coast of Crimea, appears severely degraded due to the power issues.


Dependence on Mainland Ukraine

Recall that, following Russia’s annexation of Crimea from Ukraine in March, Prime Minister Dmitry Medvedev ordered the immediate construction of a new submarine cable across the Kerch Strait, one that would connect mainland Russia to the peninsula. We spotted and reported on the activation of the Kerch Strait cable in July of last year.

As illustrated in the maps below, the Crimean peninsula depends critically on the Ukrainian mainland for infrastructure services: power, water, gas and Internet — that was until the Kerch Strait Cable was activated, giving Crimea a new path to reach the global Internet.


Russia has been working on an alternative route for Crimean electricity through Kerch, much as the Kerch Strait cable provides a redundant path for Internet service.  But that power cable is not planned to be operational until 22 December, almost a month away.  The image below shows a darkened Crimea as viewed from space.  This may be the picture of Crimea for days to come.


The post Explosions Leave Crimea in the Dark appeared first on Dyn Research.

by Doug Madory at November 24, 2015 02:16 PM

Network Design and Architecture

Routing over DMVPN

DMVPN Routing Considerations

The routing protocol you run over DMVPN is probably the most important decision you will make in the VPN design.

Which routing protocol is suitable for your environment: EIGRP over DMVPN, OSPF over DMVPN, or BGP over DMVPN?

Let me share some brief information in this post about running routing protocols over the overlay tunnels.

For large-scale DMVPN deployments, the best routing protocol choice is BGP or EIGRP.

What counts as large depends on your links, their stability, the redundancy design, how many routes the spokes carry, which DMVPN phase is in use, and so on.

If you have 20,000 routes in total behind all your spokes, only BGP can support that number of routes, unless you are doing an SLB design for your hub.

As I stated earlier, IS-IS cannot be used with DMVPN since it doesn’t run on top of IP. So forget about it!

OSPF can be used but has serious design limitations with DMVPN.

Since OSPF is a link-state protocol, its operation doesn’t match DMVPN’s NBMA style.

The OSPF point-to-multipoint network type is not supported over Phase 2, because with the P2MP network type the hub changes the next hop of the spoke prefixes to itself.

But in Phase 2, as I stated earlier in the DMVPN article, the hub must preserve the next hop of the spokes so that spoke-to-spoke direct tunnels can be built.

If you use OSPF over Phase 2, the only remaining options are the broadcast or non-broadcast network types.

Since you need to specify each unicast neighbor manually for OSPF Non-Broadcast, you lose the ease of configuration benefit of DMVPN.

Phase 3 removes the point-to-multipoint network type limitation of OSPF, but the problem remains that OSPF requires all the nodes in an area to keep the same link-state database.

If you design Multi-area OSPF, where would you put the ABR to limit the topology information?

If you put a non-backbone area on the LAN segments of the spokes and a backbone area on the tunnels, then Area 0 would still have 2,000 spokes.

So a failure on one spoke – a link failure, for example – would cause all the spokes and the hub to run a full SPF.

With EIGRP and BGP this wouldn’t be a problem, since EIGRP and BGP allow summarization at each node in the network.

OSPF allows inter-area summarization only on the ABR and external prefix summarization on the ASBR.
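To put a rough number on that difference, here is a minimal Python sketch (my own toy addressing plan, purely for illustration) showing how a couple of thousand hypothetical spoke LAN prefixes collapse into a handful of summaries when a node is allowed to summarize, which is exactly what a hub running EIGRP or BGP can do:

    import ipaddress

    # Hypothetical plan: each of 2,000 spokes advertises one /24 out of 10.0.0.0/13.
    spoke_prefixes = [
        ipaddress.ip_network(f"10.{i // 256}.{i % 256}.0/24") for i in range(2000)
    ]

    # What a summarizing hub could advertise upstream instead of 2,000 routes.
    summaries = list(ipaddress.collapse_addresses(spoke_prefixes))
    print(len(spoke_prefixes), "spoke routes ->", len(summaries), "summary routes")
    # prints: 2000 spoke routes -> 6 summary routes

A flap behind one spoke then stays hidden inside the summary instead of triggering work on every other router.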

Even RIP can scale much better than OSPF in DMVPN networks. Long live RIP! :)

NHRP is used in all Phases.

In Phase 1, spokes don’t use NHRP for next hop resolution but use NHRP for underlay to overlay address mapping registration.

Routing protocol neighborship is created only between Hub and Spokes. Not between Spokes!

On-demand tunnels are created to pass data plane traffic, not the control plane!

If you have more than one hub, routing protocol neighborship is created between the hubs as well.

Spoke to spoke dynamic on demand tunnels are removed when traffic ceases.

Spoke-to-hub DMVPN tunnels are always up.
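To make the registration and resolution behaviour concrete, here is a minimal Python sketch of the NHRP idea (a toy model with made-up addresses, not real router behaviour): every spoke registers its overlay-to-underlay mapping with the hub, and in Phase 2/3 a spoke asks the hub to resolve another spoke's underlay (NBMA) address before building a direct tunnel.

    class NhrpHub:
        def __init__(self):
            self.cache = {}                      # tunnel (overlay) IP -> NBMA (underlay) IP

        def register(self, tunnel_ip, nbma_ip):  # done by every spoke, in all phases
            self.cache[tunnel_ip] = nbma_ip

        def resolve(self, tunnel_ip):            # used for spoke-to-spoke tunnels (Phase 2/3)
            return self.cache.get(tunnel_ip)

    class Spoke:
        def __init__(self, name, tunnel_ip, nbma_ip, hub):
            self.name, self.hub = name, hub
            hub.register(tunnel_ip, nbma_ip)     # registration when the tunnel comes up

        def send_to(self, peer_tunnel_ip):
            peer_nbma = self.hub.resolve(peer_tunnel_ip)
            if peer_nbma:
                print(f"{self.name}: direct tunnel to {peer_tunnel_ip} via underlay {peer_nbma}")
            else:
                print(f"{self.name}: no mapping, traffic keeps going through the hub")

    hub = NhrpHub()
    spoke1 = Spoke("Spoke1", "10.0.0.11", "198.51.100.11", hub)
    spoke1.send_to("10.0.0.12")                  # Spoke2 not registered yet -> stay on the hub
    spoke2 = Spoke("Spoke2", "10.0.0.12", "203.0.113.12", hub)
    spoke1.send_to("10.0.0.12")                  # now resolves -> direct spoke-to-spoke tunnel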

DMVPN is very common in Enterprise networks. It is used as either a primary or a backup path.

Even if you enable QoS over the overlay tunnels, if you run DMVPN over the Internet, don’t forget that the underlay transport (the Internet) is still best effort!

If multi-tenancy is necessary, VRF-lite can be used with DMVPN.

With VRF-lite, MPLS is not needed to create an individual VPN. But VRF-lite has scalability problems.

For large-scale multi-tenant deployments, 2547oDMVPN is an architecture in which MPLS Layer 3 VPN runs over the DMVPN network.

What about you? 

Are you using DMVPN in your network?

Which Phase is enabled?

Which routing protocol do you use on your DMVPN tunnels?

Are you using encryption?

Is it your primary or backup path?

Let’s discuss your design in the comment box below so everyone can benefit from your knowledge.

The post Routing over DMVPN appeared first on Network Design and Architecture.

by admin at November 24, 2015 09:44 AM

Segment Routing Traffic Engineering

First, you need to recall how MPLS traffic engineering operates.

MPLS traffic engineering requires the four steps shown below for its operation.

  1. Link information such as bandwidth, IGP metric, TE metric, and SRLG is flooded throughout the IGP domain by the link state protocols.
  2. The path is calculated either with CSPF in a distributed manner or with offline tools in a centralized fashion (a minimal sketch of this step follows the list).
  3. If a suitable path is found, it is signalled via RSVP-TE, and RSVP assigns the labels for the tunnel.
  4. The traffic is placed into the tunnel.
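As a rough illustration of step 2, the sketch below is a minimal constrained SPF in Python: it prunes the links that cannot satisfy a requested bandwidth and then runs a plain shortest-path computation over what is left. The topology and numbers are my own toy values, not taken from the diagram.

    import heapq

    # (node_a, node_b): (igp_cost, available_bandwidth_mbps) -- toy values
    links = {
        ("R1", "R2"): (10, 10000),
        ("R2", "R3"): (10, 1000), ("R3", "R5"): (10, 1000),                              # "top" path
        ("R2", "R6"): (15, 10000), ("R6", "R7"): (15, 10000), ("R7", "R5"): (15, 10000), # "bottom" path
    }

    def cspf(src, dst, required_bw):
        adj = {}
        for (a, b), (cost, bw) in links.items():
            if bw >= required_bw:                 # constraint check: prune thin links
                adj.setdefault(a, []).append((b, cost))
                adj.setdefault(b, []).append((a, cost))
        heap, seen = [(0, src, [src])], set()     # plain Dijkstra on what remains
        while heap:
            dist, node, path = heapq.heappop(heap)
            if node == dst:
                return dist, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, cost in adj.get(node, []):
                if nxt not in seen:
                    heapq.heappush(heap, (dist + cost, nxt, path + [nxt]))
        return None                               # no path satisfies the constraint

    print(cspf("R1", "R5", required_bw=500))      # fits the IGP shortest (top) path
    print(cspf("R1", "R5", required_bw=5000))     # top path pruned -> longer path is chosen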

IP MPLS Traffic Engineering

In the diagram shown above, when traffic from R1 reaches R2, the IGP chooses the top path as the shortest path. This is because the cost from R2 to R5 through R3 is smaller than the cost from R2 to R5 through R6.

As you must have observed, the R2-R6-R7-R4 path is not used during this operation.

With MPLS traffic engineering, both the top and the bottom path can be used.

The top path is a high-latency, high-throughput path; as a result, it can be used for data traffic.

On the other hand, the bottom path is a low-latency, low-throughput and expensive path; thus, it can be used for latency-sensitive traffic such as voice and video.

To complete this operation, we need to create two MPLS traffic engineering tunnels: one tunnel for data and the other for voice traffic. After doing that, we can use the CBTS (class-based tunnel selection) option of MPLS TE to place voice traffic into the voice LSP (TE tunnel). Next, we can identify data traffic and place it into the data LSP (TE tunnel).

How can we achieve the Traffic Engineering operation with Segment Routing?

segment routing traffic engineering


I have explained Node/Prefix SID in one of the previous sections.

Now you know that a Node/Prefix SID is assigned to the loopback address of every segment routing enabled device, and that this SID is unique in the routing domain.

Also, there is another SID type flooded in the IGP packets.

Adjacency Segment ID

While the Adjacency SID is unique to the local router, it is not globally unique like the Node/Prefix SID.

Routers automatically allocate an Adjacency Segment ID to their interfaces when segment routing is enabled on the device.

In the topology shown above, R2 allocates an Adjacency SID to its interface towards R6.

Label 22001 is the Adjacency SID of R2 towards R6, and it is used to steer traffic away from the shortest path (perhaps you do not want to use only the shortest path).

Label 16005 is the Node/Prefix SID of R5.

If the packet is sent from R1 to R5 with two SIDs, 22001 and 16005, R1 will send the packet to R2; R2 will pop 22001 (its local Adjacency SID) and send the remaining packet towards R6 with 16005, which is the Node/Prefix SID of R5.

R6 will send the packet to R7 because it is the shortest path to R5.

The Node/Prefix SID is used for shortest-path routing, and it has ECMP capability.

The Adjacency SID, on the other hand, is used for explicit path routing.

NOTE: While Adjacency SID is used for Explicit Path Routing, Node/Prefix SID follows the shortest path.

I will provide another example so that you can understand how to use Node and Adjacency SIDs to provide an explicit path for the traffic flows.

node and adjacency segment id

Our aim is to send traffic between router A and router J; however, we do not want to use the E-G link.

In this operation, we will use the A-C-E-F-H-J path.

To achieve our aim, we need to reach E. After that, we will divert the traffic to the E-F link. Next, F will transfer the traffic to J, which is the final destination.

Router A should put three labels/Segment IDs on the packet.

The first SID, 16001, will carry the packet to router E.

The second SID is 16002, which is the Adjacency SID for the E-F interface. This SID is locally significant to router E; transit router C does not act on it.

The third SID is 16003, which is the Node/Prefix SID of Router J.

Router C receives the packet with the three SIDs, pops 16001, and sends the remaining two labels to router E.

Router E receives the packet with 16002 on top, which is its Adjacency SID towards router F. Thus, router E pops it and sends the remaining packet to router F.

Router F receives the packet with SID 16003, which is the Node/Prefix SID of router J.

So router F follows the shortest path, sending the packet to router H and swapping 16003 with 16003, i.e. leaving the label unchanged.

If router J advertises the implicit null label, router H pops 16003 and performs PHP (penultimate hop popping), sending the plain IP packet to router J.
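Here is a minimal Python sketch of that per-hop processing, using the hypothetical label values from this example (a toy model, not how a real forwarding plane is implemented): an Adjacency SID is popped and forces the packet out a specific link, while a Node SID simply follows the shortest path and is popped at the penultimate hop.

    # Per-router decision for the top label:
    #   ("adj", neighbor) -> pop the label and send out that specific adjacency
    #   ("node", target)  -> follow the shortest path towards target, PHP at the last-but-one hop
    sid_tables = {
        "A": {16001: ("node", "E"), 16003: ("node", "J")},
        "C": {16001: ("node", "E"), 16003: ("node", "J")},
        "E": {16002: ("adj", "F"), 16003: ("node", "J")},
        "F": {16003: ("node", "J")},
        "H": {16003: ("node", "J")},
    }
    next_hop = {("A", "E"): "C", ("C", "E"): "E", ("F", "J"): "H", ("H", "J"): "J"}

    def forward(router, stack):
        hops = [router]
        while stack and router != "J":
            kind, arg = sid_tables[router][stack[0]]
            if kind == "adj":
                stack.pop(0)                     # Adjacency SID: pop, use that link
                router = arg
            else:                                # Node SID: shortest path towards arg
                nh = next_hop[(router, arg)]
                if nh == arg:
                    stack.pop(0)                 # penultimate hop popping (PHP)
                router = nh
            hops.append(router)
        return hops, stack

    print(forward("A", [16001, 16002, 16003]))   # (['A', 'C', 'E', 'F', 'H', 'J'], [])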

If we wanted to carry out this operation using MPLS-TE, we would create an explicit path by providing an ERO (Explicit Route Object).

Also read: Segment Routing Fundamentals



The post Segment Routing Traffic Engineering appeared first on Network Design and Architecture.

by admin at November 24, 2015 09:18 AM

My Etherealmind

Pearson Gets Owned, Cisco Certifications Database Taken

In the "least surprising security breach" category, Pearson VUE got hacked and your personal details have been taken.

The post Pearson Gets Owned, Cisco Certifications Database Taken appeared first on EtherealMind.

by Greg Ferro at November 24, 2015 08:48 AM

XKCD Comics



November 24, 2015 12:00 AM

November 23, 2015

The Data Center Overlords

Peak Fibre Channel

There have been several articles talking about the death of Fibre Channel. This isn’t one of them. However, it is an article about “peak Fibre Channel”. I think, as a technology, Fibre Channel is in the process of (if it hasn’t already) peaking.

There’s a lot of technology in IT that doesn’t simply die. Instead, it grows, peaks, then slowly (or perhaps very slowly) fades. Consider Unix/RISC. The Unix/RISC market right now is a caretaker platform. Very few new projects are built on Unix/RISC. Typically a new Unix server is purchased to replace an existing but no-longer-supported Unix server to run an older application that we can’t or won’t move onto a more modern platform. The Unix market has been shrinking for over a decade (2004 was probably the year of Peak Unix), yet the market is still a multi-billion dollar revenue market. It’s just a (slowly) shrinking one.

I think that is what is happening to Fibre Channel, and it may have already started. It will become (or already is) a caretaker platform. It will run the workloads of yesterday (or rather the workloads that were designed yesterday), while the workloads of today and tomorrow have a vastly different set of requirements, and where Fibre Channel doesn’t make as much sense.

Why Fibre Channel Doesn’t Make Sense in the Cloud World

There are a few trends in storage that are working against Fibre Channel:

  • Public cloud growth outpaces private cloud
  • Private cloud storage endpoints are more ephemeral and storage connectivity is more dynamic
  • Block storage is taking a back seat to object (and file) storage
  • RAIN versus RAID
  • IP storage is as performant as Fibre Channel, and more flexible

Cloudy With A Chance of Obsolescence

The transition to cloud-style operations isn’t great for Fibre Channel. First, we have the public cloud providers: Amazon AWS, Microsoft Azure, Rackspace, Google, etc. They tend not to use much Fibre Channel (if any at all) and rely instead on IP-based storage or other solutions. And whatever Fibre Channel they might consume, it still means far fewer ports purchased (HBAs, switches) as workloads migrate to the public cloud instead of private data centers.

The Ephemeral Data Center

In enterprise datacenters, most operations are what I would call traditional virtualization. And that is dominated by VMware’s vSphere. However, vSphere isn’t a private cloud. According to NIST, to be a private cloud you need to be self service, multi-tenant, programmable, dynamic, and show usage. That ain’t vSphere.

For VMware’s vSphere, I believe Fibre Channel is the hands-down best storage platform. vSphere likes very static block storage, and Fibre Channel is great at providing that. Everything is configured by IT staff; a few things are automated, though Fibre Channel configurations are still done mostly by hand.

Probably the biggest difference between traditional virtualization (i.e. VMware vSphere) and private cloud is the self-service aspect. Developers, DevOpsers, and other consumers of IT resources spin up and spin down their own resources. This leads to a very, very dynamic environment.


Endpoints are far more ephemeral, as demonstrated here by Mr Mittens.

Where we used to deal with virtual machines as everlasting constructs (pets), we’re moving to a more ephemeral model (cattle). In Netflix’s infrastructure, the average lifespan of a virtual machine is 36 hours. And compared to virtual machines, containers (such as Docker containers) tend to live for even shorter periods of time. All of this means a very dynamic environment, and that requires self-service portals and automation.

And one thing we’re not used to in the Fibre Channel world is a dynamic environment.

A SAN administrator at the thought of automated zoning and zonesets

Virtual machines will need to attach to block storage on the fly, or they’ll rely on other types of storage, such as container images, retrieved from an object store, and run on a local file system. For these reasons, Fibre Channel is not usually a consideration for Docker, OpenStack (though there is work on Fibre Channel integration), and very dynamic, ephemeral workloads.


Block storage isn’t growing, at least not at the pace that object storage is. Object storage is becoming the de facto way to store the deluge of unstructured data. Object storage consumption is growing at 25% per year according to IDC, while traditional RAID revenues seem to be contracting.

Making it RAIN


In order to handle the immense scale necessary, storage is moving from RAID to RAIN. RAID is of course Redundant Array of Inexpensive Disks, and RAIN is Redundant Array of Inexpensive Nodes. RAID-based storage typically relies on controllers and shelves. This is a scale-up style approach. RAIN is a scale-out approach.

For these huge-scale storage requirements, RAIN platforms such as Hadoop’s HDFS, Ceph, Swift, ScaleIO, and others handle the exponential increase in storage requirements better than traditional scale-up storage arrays. And primarily these technologies use IP connectivity/Ethernet, not Fibre Channel, for node-to-node and node-to-client communication. Fibre Channel is great for many-to-one communication (many initiators to a few storage arrays) but is not great at many-to-many meshing.

Ethernet and Fibre Channel

It’s been widely regarded in many circles that Fibre Channel is a higher performance protocol than, say, iSCSI. That was probably true in the days of 1 Gigabit Ethernet; however, these days there’s not much of a difference between IP storage and Fibre Channel in terms of latency and IOPS. Provided you don’t saturate the link (neither eliminates congestion issues when you oversaturate a link), they’re about the same, as shown in several tests such as this one from NetApp and VMware.

Fibre Channel is currently at a maximum of 16 Gigabit per second. Ethernet is at 10, 40, and 100, though most server connections are currently at 10 Gigabit, with some storage arrays being 40 Gigabit. In 2016 Fibre Channel is coming out with 32 Gigabit Fibre Channel HBAs and switches, and Ethernet is coming out with 25 Gigabit Ethernet interfaces and switches. They both provide nearly identical throughput.

Wait, what?

But isn’t 32 Gigabit Fibre Channel faster than 25 Gigabit Ethernet? Yes, but barely.

  • 25 Gigabit Ethernet raw throughput: 3125 MB/s
  • 32 Gigabit Fibre Channel raw throughput: 3200 MB/s

Do what now?

32 Gigabit Fibre Channel isn’t really 32 Gigabit Fibre Channel. It actually runs at about 28 Gigabits per second. This is a holdover from the 8b/10b encoding in 1/2/4/8 Gigabit FC, where every gigabit of speed brought 100 MB/s of throughput (instead of 125 MB/s as in 1 Gigabit Ethernet). When FC switched to 64b/66b encoding for 16 Gigabit FC, they kept the 100 MB/s per gigabit, and as such lowered the actual line rate: 16 Gigabit Fibre Channel is really 14 Gigabit Fibre Channel, and 32 Gigabit Fibre Channel is really 28 Gigabit Fibre Channel. This concept is outlined in a screencast I did a while back.

As a result, 32 Gigabit Fibre Channel is only about 2% faster than 25 Gigabit Ethernet. 128 Gigabit Fibre Channel (12800 MB/s) is only 2% faster than 100 Gigabit Ethernet (12500 MB/s).
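To make the arithmetic explicit, here is a small Python sketch using the per-gigabit figures described above (100 MB/s per labelled gigabit for Fibre Channel, 125 MB/s for Ethernet); the numbers are the same ones quoted in this post, the script just lines them up.

    FC_MB_PER_GBIT = 100          # FC kept 100 MB/s per labelled gigabit (8b/10b legacy)
    ETH_MB_PER_GBIT = 125         # Ethernet delivers the full 125 MB/s per gigabit

    links = {
        "16G Fibre Channel": 16 * FC_MB_PER_GBIT,     # 1600 MB/s
        "32G Fibre Channel": 32 * FC_MB_PER_GBIT,     # 3200 MB/s
        "128G Fibre Channel": 128 * FC_MB_PER_GBIT,   # 12800 MB/s
        "25G Ethernet": 25 * ETH_MB_PER_GBIT,         # 3125 MB/s
        "100G Ethernet": 100 * ETH_MB_PER_GBIT,       # 12500 MB/s
    }
    for name, rate in links.items():
        print(f"{name:20s} {rate:6d} MB/s")

    print("32GFC vs 25GbE:   {:.1%} faster".format(3200 / 3125 - 1))     # ~2.4%
    print("128GFC vs 100GbE: {:.1%} faster".format(12800 / 12500 - 1))   # ~2.4%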

Ethernet/IP Is More Flexible

In the world of bare metal server to storage array, and virtualization hosts to storage array, Fibre Channel had a lot of advantages over Ethernet/IP. These advantages included a fairly easy to learn distributed access control system, a purpose-built network designed exclusively to carry storage traffic, and a separately operated fabric.  But those advantages are turning into disadvantages in a more dynamic and scaled-out environment.

In terms of scaling, Fibre Channel has limits on how big a fabric can get. Typically it’s around 50 switches and a couple thousand endpoints. The theoretical maximums are higher (based on the 24-bit FC_ID address space), but both Brocade and Cisco have practical limits that are much lower. For the current (or past) generations of workloads, this wasn’t a big deal; endpoints typically numbered in the dozens, or possibly hundreds for the large-scale deployments. In a large OpenStack deployment, however, it’s not unusual to have tens of thousands of virtual machines, and if those virtual machines need access to block storage, Fibre Channel probably isn’t the best choice. It’s going to be iSCSI or NFS. Plus, you can run it all on a good Ethernet fabric, so why spend money on extra Fibre Channel switches when you can run it all on IP? And IP/Ethernet fabrics scale far beyond Fibre Channel fabrics.

Another issue is that Fibre Channel doesn’t play well with others. There are only two vendors that make Fibre Channel switches today, Cisco and Brocade (if you have a Fibre Channel switch that says another vendor made it, such as IBM, it’s actually a re-badged Brocade). There are ways around it in some cases (NPIV), though you still can’t mesh two vendors’ fabrics reliably.


Pictured: Fibre Channel Interoperability Mode

And personally, one of my biggest pet peeves regarding Fibre Channel is the lack of ability to create a LAG to a host. There’s no way to bond several links together to a host. It’s all individual links, which requires special configuration to make a storage array with many interfaces utilize them all (essentially you zone certain hosts to certain array ports).

None of these are issues with Ethernet. Ethernet vendors (for the most part) play well with others. You can build an Ethernet Layer 2 or Layer 3 fabric with multiple vendors, there are plenty of vendors that make a variety of Ethernet switches, and you can easily create a LAG/MCLAG to a host.


My name is MCLAG and my flows be distributed by a deterministic hash of a header value or combination of header values.
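The joke above is how it actually works: a LAG or MCLAG picks the member link for each flow by hashing header fields, so a given flow stays on one link while different flows spread across all of them. A toy Python sketch (not any switch's real hash algorithm):

    import hashlib

    def lag_member(src_ip, dst_ip, src_port, dst_port, proto, num_links):
        # Deterministic hash of the 5-tuple, reduced to a member-link index.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        return int.from_bytes(hashlib.md5(key).digest()[:4], "big") % num_links

    # The same 5-tuple always maps to the same link; a second flow may land elsewhere.
    print(lag_member("10.0.0.1", "10.0.0.2", 33001, 3260, "tcp", num_links=4))
    print(lag_member("10.0.0.1", "10.0.0.2", 33002, 3260, "tcp", num_links=4))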

What About FCoE?

FCoE will share the fate of Fibre Channel. It has the same scaling, multi-node communication, multi-vendor interoperability, and dynamism problems as native Fibre Channel. Multi-hop FCoE never really caught on, as it didn’t end up being less expensive than Fibre Channel, and it tended to complicate operations, not simplify them. Single-hop/End-host FCoE, like the type used in Cisco’s popular UCS server system, will continue to be used in environments where blades need Fibre Channel connectivity. But again, I think that need has peaked, or will peak shortly.

Fibre Channel isn’t going anywhere anytime soon, just like Unix servers can still be found in many datacenters. But I think we’ve just about hit the peak. The workload requirements have shifted. It’s my belief that for the current/older generation of workloads (bare metal, traditional/pet virtualization), Fibre Channel is the best platform. But as we transition to the next generation of platforms and applications, the needs have changed and they don’t align very well with Fibre Channel’s strengths.

It’s an IP world now. We’re just forwarding packets in it.



by tonybourke at November 23, 2015 09:18 PM

My Etherealmind
Potaroo blog
XKCD Comics

November 22, 2015

Potaroo blog

IPv6 Performance

Every so often I hear the claim that some service or other does not support IPv6 not because of some technical issue, or some cost or business issue, but simply because the service operator is of the view that IPv6 offers an inferior level service as compared to IPv4, and by offering the service over IPv6 they would be exposing their clients to an inferior level of performance of the service. But is this really the case? Is IPv6 an inferior cousin of IPv4 in terms of service performance? In this article I'll report on the results of a large scale measurement of IPv4 and IPv6 performance, looking at the relativities of IPv6 and IPv4 performance.

November 22, 2015 06:30 PM

November 21, 2015

Network Design and Architecture

November CCDE Achievers

I am very proud to announce that Daniel Lardeux, Johnny Britt and Mohammad Haddad passed the CCDE Practical exam yesterday and joined the CCDE club; the CCDE is one of the most respected IT certifications.

Their CCDE numbers will arrive in a couple of days.

See the existing Global List of the CCDEs, their companies and numbers here. If you are not in the list, have changed your company or want to be on the list, contact me.

Daniel and Mohammad joined my July class and Johnny used the CCDE Practical preparation bundle.

I would like to stress that four people who attended my class or used my preparation resources attempted the November 2015 CCDE Practical exam, and three of them passed! A 75% success rate is no small thing for this certification.

Their common feedback is that they not only learned the CCDE-related topics, but learned real-life network design as well.

Below are Daniel’s thoughts about my class:

I attended the CCDE Class in April of 2015, and it was exactly what I needed.

Orhan took the time to break down the different technologies. Very useful, even for everyday work, and really helpful.

Time spent at the CCDE Class was also very incisive, showing me how to attack the exam.

Thank you Orhan, with whom I have been in contact throughout this quest. He always made himself available to answer any questions I had.

He has been instrumental in my learning and in helping me prepare for the CCDE, which is one of the most rigorous network design exams in the industry.

Thanks again Orhan.

Daniel Lardeux

Senior Network Consultant at Post Telecom PSF



If you would like to gain the CCDE certification and also learn computer network design from the best, my recommendation is to get my CCDE preparation bundle, which consists of 60+ hours of videos and a CCDE practical workbook, and to attend the next CCDE class, which will be held in January.



Please note that there are many critical extra study resources that I recommend you study in my CCDE study book; that’s why it takes time to finish them before joining the class.

My videos are recorded from the class sessions, so you won’t see my face and you won’t see the students’ questions. That’s why joining the online class is important, in my opinion. Also, a video that will help you focus more will be available in a month (it will show my pretty face!).

The purpose of the recorded class videos in the CCDE Practical bundle is to give you an idea about real-life design and the CCDE exam.

In addition, in those videos you will find our talk with Russ White, who regularly joins my class as a guest to help the students.

You can register for the next CCDE class now. Early registration will give you a discount. I always limit the seats so that I can interact with my students better; that’s why I had to reject so many people in the past.

And I am serious: I am not saying this for marketing. That’s someone else’s job! We are designers only, and if you want to be one too, send an email to

The post November CCDE Achievers appeared first on Network Design and Architecture.

by admin at November 21, 2015 04:44 PM

CCIE data center v2.0

If you’re studying for the CCIE Data Center v1.0 exam, it’ll be available until July 2016, after which time the recently announced CCIE DC v2.0 exam will take its place.

CCIE DC v2.0 will no longer include

  • Data Center application high availability and load balancing: ACE and WAAS.
  • Fibre Channel over IP

The following topics have been added to the CCIE DC v2.0 exam:

  • Implement and Troubleshoot Data Center Automation
  • Implement and Troubleshoot Data Center Orchestration Tools
  • Integrate Cisco Cloud Offerings into existing Data Center Infrastructure

New technologies now covered by the CCIE Data Center exam include ACI, LISP, EVPN, and VXLAN. Hardware also takes a more prominent role and includes the Cisco Nexus 9300, Nexus 5600, Nexus 2300 Fabric Extender, UCS 4300 M-Series Servers and the APIC cluster.

See the table below for a comparison between CCIE Data Center v1.0 and v2.0:

ccie data center new version


A new Evolving Technologies domain will be added to the CCIE data center v2.0 exam. The section focuses on three subdomains: the Internet of Things, Network Programmability and the Cloud.

The format of the CCIE data center v2.0 lab exam is significantly different from the format of previous versions.

As part of a 60-minute diagnostic module (not included in the v1.0 exam), you will be provided with network topology diagrams, email threads, console outputs and logs, from which you have to find the root cause of a given issue without device access.

This is very similar to the CCDE Practical exam format, in which you’re also provided with network topology diagrams, email threads and different types of business and technical information. In that case, however, rather than troubleshooting, your task is to find the optimal design.

In order to pass the exam you must pass both the Diagnostic and the Troubleshooting & Configuration modules. In addition, the sum of the scores of both modules must be higher than the minimum required combined score.

CCIE Data Center Written and Lab Exam Content Updates:

ccie data center updates

You still have plenty of time to prepare for the new version. And, until July 22, 2016, you also have the option of taking the older version of the CCIE data center exam. CCIE Data center v2.0 will be available from July 25, 2016.

These dates also apply to the CCIE data center written exam.

If you’re not sure whether you should start studying CCIE Data center or the CCDE exam and you’d like to join our discussion, please feel free to add your thoughts and questions to our comments section.

The post CCIE data center v2.0 appeared first on Network Design and Architecture.

by admin at November 21, 2015 03:46 PM

November 20, 2015

XKCD Comics

November 18, 2015

Internetwork Expert Blog

CCIE Data Center v2.0 Blueprint Announced

Cisco has just announced CCIE Data Center Written and Lab Exam Content Updates. Important dates for the changes are:

  • Last day to test for the v1.0 written – July 22, 2016
  • First day to test for the v2.0 written – July 25, 2016
  • Last day to test for the v1.0 lab – July 22, 2016
  • First day to test for the v2.0 lab – July 25, 2016

Key hardware changes in the v2.0 blueprint are:

  • APIC Cluster
  • Nexus 9300
  • Nexus 7000 w/ F3 Module
  • Nexus 5600
  • Nexus 2300 Fabric Extender
  • UCS 4300 M-Series Servers

Key technical topic changes in the v2.0 blueprint are:

  • EVPN
  • LISP
  • Policy Driven Fabric (ACI)

More details to come!

by Brian McGahan, CCIE #8593, CCDE #2013::13 at November 18, 2015 10:49 PM


Carrier Grade NAT and the DoS Consequences

Republished from Corero DDoS Blog:

The Internet has a very long history of utilizing mechanisms that may breathe new life into older technologies, stretching them out so that newer technologies may be delayed or obviated altogether. IPv4 addressing, and the well-known depletion associated with it, is one such area that has seen a plethora of mechanisms employed in order to give it more shelf life.

In the early 90s, the IETF gave us Classless Inter-Domain Routing (CIDR), which dramatically slowed the growth of global Internet routing tables and delayed the inevitable IPv4 address depletion. Later came DHCP, another protocol which assisted via the use of short term allocation of addresses which would be given back to the provider's pool after use. In 1996, the IETF was back at it again, creating RFC 1918 private addressing, so that networks could utilize private addresses that didn't come from the global pool. Utilizing private address space gave network operators a much larger pool to use internally than would otherwise have been available if utilizing globally assigned address space -- but if they wanted to connect to the global Internet, they needed something to translate those addresses. This is what necessitated the development of Network Address Translation (NAT).

NAT worked very well for many, many years, and slowed the address depletion a great deal. But in order to perform that translation, you still needed to acquire at least one globally addressable IP. As such, this only served to slow down depletion, not prevent it - carriers were still required to provide that globally addressable IP from their own address space. With the explosive growth of the Internet of Things, carriers likewise began to run out of address space to allocate.

NAT came to the rescue again. Carriers took notice of the success of NAT in enterprise environments and wanted to do the same within their own networks; after all, if it worked for customers, it should likewise work for the carriers. This prompted the IETF to develop Carrier Grade NAT (CGN), also known as Large Scale NAT (LSN). CGN aims to provide a similar solution for carriers by obviating the need for allocating publicly available address space to their customers. By deploying CGN, carriers can oversubscribe their pool of global IPv4 addresses while still providing seamless connectivity, i.e. no truck-roll.

So while the world is spared from address depletion yet again, the use of CGN technologies opens a new can of worms for carriers. No longer does one globally routable IP represent a single enterprise or customer - due to the huge oversubscription which is afforded through CGN, an IP can service potentially thousands of customers.

This brings us to the cross-roads of the Denial of Service (DoS) problem. In the past, when a single global IP represented only one customer network, there was typically no collateral damage to other customer networks. If the DoS was large enough to impact the carrier's network or if there was collateral damage, they would simply blackhole that customer IP to prevent it from transiting their network. However, with CGN deployments, and potentially thousands of customers being represented by a single IP, blackhole routing is no longer an option.

CGN deployments are vulnerable to DoS in a few different ways. The main issue with CGN is that it must maintain a stateful record of the translations between external addresses and ports and internal addresses and ports. A device which has to maintain these stateful tables is vulnerable to any type of DoS activity that may exhaust the stateful resources. As such, a CGN device may be impacted in both the inbound and the outbound direction. An outbound attack is usually the result of malware on a customer's machine sending a large amount of traffic towards the Internet and consuming the state tables in the CGN. Inbound attacks usually target a particular customer and take the form of a DoS attack, or a Distributed Denial of Service (DDoS) attack. Regardless of the direction of the attack, a large amount of resources is consumed in the CGN state table, which reduces overall port availability. Left unregulated, these attacks can easily cause impact not only to the intended victim, but potentially to the thousands of other customers being serviced by that CGN.
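Here is a minimal Python sketch of that choke point (a toy model of the shared pool, not any vendor's CGN implementation): one public IP has a finite set of ports, every new session consumes one, and a single misbehaving inside host can starve everyone else unless a per-subscriber cap is enforced.

    TOTAL_PORTS = 64000                     # assume ~64k usable ports on one public IP

    class CgnPool:
        def __init__(self, per_subscriber_limit=None):
            self.free_ports = TOTAL_PORTS
            self.limit = per_subscriber_limit
            self.held = {}                  # subscriber -> ports currently allocated

        def new_session(self, subscriber):
            if self.limit is not None and self.held.get(subscriber, 0) >= self.limit:
                return False                # cap reached: only this subscriber is throttled
            if self.free_ports == 0:
                return False                # pool exhausted: every subscriber is impacted
            self.held[subscriber] = self.held.get(subscriber, 0) + 1
            self.free_ports -= 1
            return True

    def simulate(limit):
        pool = CgnPool(per_subscriber_limit=limit)
        for _ in range(100_000):            # one infected host opens sessions as fast as it can
            pool.new_session("infected-host")
        ok = pool.new_session("normal-subscriber")
        print(f"limit={limit}: free ports={pool.free_ports}, normal subscriber connects={ok}")

    simulate(limit=None)                    # no cap: pool exhausted, collateral damage
    simulate(limit=1000)                    # per-subscriber cap: plenty of ports remain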

With the inability to simply blackhole a given IP using edge Access Control Lists (ACLs), carriers must look at other options for protecting their customer base. While some CGN implementations have the ability to limit the amount of ports that are allocated to a single customer, these only work in discrete cases and can be difficult to manage. They also do not protect customers if the CGN device is itself the target of the attack.

The solution to this problem is the use of a purpose-built DDoS mitigation device, or what is more commonly referred to as a "scrubbing" device in IT circles. Dedicated DDoS mitigation devices attempt to enforce that everyone plays nicely, by limiting the maximum number of sessions to or from a given customer. This is done by thorough analysis of the traffic in flight and rate-limiting or filtering traffic through sophisticated mitigation mechanisms to ensure fairness of the public IP and port availability across all customers. Through the use of dedicated DDoS mitigation devices, CGN devices and their associated customers are protected from service disruptions, while still ensuring legitimate traffic is allowed unencumbered. Lastly, another important aspect of DDoS mitigation devices is that they tend to be "bumps in a wire"; that is to say, they don't have an IP address assigned to them and as such cannot be the target of an attack.

by Stefan Fouant at November 18, 2015 12:05 PM

XKCD Comics

November 17, 2015

The Networking Nerd

A Stack Full Of It


During the recent Open Networking User Group (ONUG) Meeting, there was a lot of discussion around the idea of a Full Stack Engineer. The idea of full stack professionals has been around for a few years now. Seeing this label applied to networking and network professionals seems only natural. But it’s a step in the wrong direction.

Short Stack

Full stack means having knowledge of the many different pieces of a given area. Full stack programmers know all about development, project management, databases, and other aspects of their environment. Likewise, full stack engineers are expected to know about the network, the servers attached to it, and the applications running on top of those servers.

Full stack is a great way to illustrate how specialized things are becoming in the industry. For years we’ve talked about how hard networking can be and how we need to make certain aspects of it easier for beginners to understand. QoS, routing protocols, and even configuration management are critical items that need to be decoded for anyone in the networking team to have a chance of success. But networking isn’t the only area where that complexity resides.

Server teams have their own jargon. Their language doesn’t include routing or ASICs. They tend to talk about resource pools and patches and provisioning. They might talk about VLANs or latency, but only insofar as it applies to getting communications going to their servers. Likewise, the applications teams don’t talk about any of the above. They are concerned with databases and application behaviors. The only time the hardware below them becomes a concern is when something isn’t working properly. Then it becomes a race to figure out which team is responsible for the problem.

The concept of being a full stack anything is great in theory. You want someone who can understand how things work together and identify areas that need to be improved. The term “big picture” definitely comes to mind. Think of a general practitioner doctor. This person has enough basic medical knowledge to be able to fix a great many issues and help you understand how your body works. There are quite a few general doctors that do well in the medical field. But we all know that they aren’t the only kinds of doctors around.

Silver Dollar Stacks

Generalists are great people. They’ve spent a great deal of time learning many things to know a little bit about everything. I like to say that these people have mud puddle knowledge about a topic. It covers a broad area, but is only a few inches deep. It can form quickly and evaporate just as quickly. Contrast this with a lake or an ocean, which covers a much deeper area but takes years or decades to create.

Let’s go back to our doctor example. General practitioners are great for a large percentage of simple problems. But when they are faced with a very specific issue they often call out to a specialist doctor. Specialists have made their career out of learning all about a particular part of the body. Podiatrists, cardiologists, and brain surgeons are all specialists. They are the kinds of doctors you want to talk to when you have a problem with that part of your body. They will never see the high traffic of a general doctor, but they more than make up for it in their own area of expertise.

Networking has a lot of people that cover the basics. There are also a lot of people that cover the more specific things, like MPLS or routing. Those specialists are very good at what they do because they have spent the time to hone those skills. They may not be able to create VLANs or provision ports as fast as a generalist, but imagine the amount of time saved when turning up a new MPLS VPN or troubleshooting a routing loop. That time translates into real savings or reduced downtime.

Tom’s Take

The people who claim that networking needs to have full stack knowledge are the kinds of folks further up the stack who get irritated when they have to explain what they want. Server admins don’t like having to know networking jargon to ask for VLANs. Application developers want you to know what they mean when they say everything is slow. Full stack is just code for “learn about my job so I don’t have to learn about yours”.

It’s important to know about how other roles in the stack work in order to understand how changes can impact the entire organization. But that knowledge needs to be shared across everyone up and down the stack. People need to have basic knowledge to understand what they are asking and how you can help.

The next time someone tells you that you need to be a full stack person, ask them to come do your job for a day while you learn about theirs. Or offer to do their job for one week to learn about their part of the stack. If they don’t recoil in horror at the thought of you doing it, chances are they really want you to have a greater understanding of things. More likely they just want you to know how hard they work and why you’re so difficult to understand. Stop telling us that we need full stack knowledge and start making the stacks easier to understand.


by networkingnerd at November 17, 2015 06:39 PM

My Etherealmind

Nerdgasm: Karl Brumund – Building a Small DC… For the rest of us – RIPE71

I've just finished watching a talk at the RIPE71 conference by Karl Brumund from Dyn about the real-world experience of building a small-scale datacenter with automation, etc., and it had a lot of great lessons. Really, just great.

The post Nerdgasm: Karl Brumund – Building a Small DC… For the rest of us – RIPE71 appeared first on EtherealMind.

by Greg Ferro at November 17, 2015 10:45 AM