June 21, 2018

Networking Now (Juniper Blog)

Windows Exploitation and Anti-Exploitation Evolution


Co-author:  Manoj Ahuje





Windows has been a target of hackers for a long time. The reason is simple: it is the OS with the largest deployment footprint around the world. Hackers discover vulnerabilities both in the Windows OS itself and in the many applications that run on it. Exploits, and the techniques that mitigate them, have evolved over the years. Over time, exploit targets have shifted from server applications to desktop applications, and Adobe Flash has been a favorite target for the past seven to eight years. In this article we will talk about exploits and their mitigation techniques on Windows.


Exploits compromise a system by taking advantage of a vulnerability in a program. Before moving ahead, let us first discuss how hackers find vulnerabilities in an application.


Common vulnerability discovery techniques:

Fuzzing: This is a common way to hunt for vulnerabilities. Hackers feed the target software a large range of malformed inputs and watch for unexpected behavior such as crashes. For browsers, large numbers of mutated HTML files are created as input. A vulnerability that is unknown to the vendor, and therefore unpatched, is called a zero-day vulnerability.


Patch diffing: This is a technique in which an updated piece of software is compared with the old version. Windows ships many of its updates as new DLLs, so hackers can compare the old and new DLL to find out what code was patched. They can then analyze the change to determine whether the old code was vulnerable, and write an exploit for it on the assumption that not everybody applies a new patch immediately. These kinds of exploits are called one-day exploits.



There are certain organizations that discover zero-day vulnerabilities and sell the resulting exploits commercially. Hacking Team was one of these; its exploit arsenal was leaked in 2015. EternalBlue is another infamous exploit, one that contributed to the spread of the WannaCry and Petya malware.



Types of Vulnerabilities and Windows countermeasures

Buffer overflow vulnerabilities: These occur when a program doesn’t check the boundaries of user-supplied data. Stack overflow and heap overflow are the most common types of buffer overflow. Microsoft has patched almost all known exploitable buffer overflows in Windows, and it is rare to see a new one discovered these days.

Beyond patching individual vulnerabilities in software, Windows also developed generic techniques to mitigate buffer overflows.


The stack is a very important data structure in a running program. It stores local variables and the function return address, and local variables often hold user-supplied data. As an example, an HTTP server can store a user's HTTP request in a local variable. If the server does not check the size of the supplied request, the copy runs past the variable's buffer and corrupts neighboring stack data, including the return address. By overwriting the return address, the attacker redirects control flow into shellcode embedded in the user-supplied data. Shellcode can open a backdoor, launch a program or download further malware. In short, stack overflow exploitation works by letting a user-supplied local variable overwrite the return address on the stack.
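To make this concrete, here is a tiny Python simulation of the idea. The frame layout, addresses and names are purely illustrative, not a real Windows stack:

```python
BUF_SIZE = 8

def make_frame(return_addr):
    """Toy stack frame: an 8-slot local buffer followed by the saved return address."""
    return [0] * BUF_SIZE + [return_addr]

def unchecked_copy(frame, data):
    """Copy user-supplied data into the buffer with no bounds check (the bug)."""
    for i, value in enumerate(data):
        frame[i] = value        # writes past index 7 silently corrupt the frame

frame = make_frame(return_addr=0x401000)   # legitimate return address
payload = [0x41] * BUF_SIZE + [0xDEAD]     # filler to reach slot 8, then a fake address
unchecked_copy(frame, payload)

assert frame[BUF_SIZE] == 0xDEAD           # the "function" would now return to 0xDEAD
```

Eight bytes of filler reach the end of the buffer; the ninth value lands on the saved return address.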


Windows came up with stack canaries to detect overwriting of the return address; the feature shipped with Windows XP SP2. A secret value is placed on the stack before the return address, and if an overflow changes it, an exception is raised instead of returning. Unfortunately, hackers bypassed this too, using a technique called SEH (Structured Exception Handler) overwrite. The records describing registered exception handlers were themselves stored on the stack, so an attacker who overwrote a handler address and then triggered an exception could take control before the canary was ever checked.
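The canary idea can be sketched in a few lines of Python. This is a toy model only; the real implementation (the MSVC /GS buffer security check) lives in compiler-generated prologue and epilogue code:

```python
import secrets

BUF_SIZE = 8
CANARY = secrets.randbits(32)   # picked once at startup, unknown to the attacker

def make_frame(return_addr):
    # layout: [buffer slots][canary][saved return address]
    return [0] * BUF_SIZE + [CANARY, return_addr]

def unchecked_copy(frame, data):
    for i, value in enumerate(data):
        frame[i] = value

def epilogue(frame):
    """Check the canary before trusting the return address."""
    if frame[BUF_SIZE] != CANARY:
        raise RuntimeError("stack smashing detected")
    return frame[BUF_SIZE + 1]

frame = make_frame(0x401000)
unchecked_copy(frame, [0x41] * (BUF_SIZE + 2))   # overflow clobbers canary and return address

try:
    epilogue(frame)
    detected = False
except RuntimeError:
    detected = True
assert detected     # the overwrite cannot reach the return address without tripping the canary
```

The attacker cannot overwrite the return address without passing through, and thereby corrupting, the canary.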


Heap overflows work in a similar manner: user-supplied data corrupts important data structures on the heap to take control of the program. Heap allocations are tracked with linked lists, and an overflow can overwrite the pointers that connect the list's nodes with user-controlled values, turning routine list maintenance into an attacker-controlled memory write.
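A toy model of the classic "unlink" write primitive shows why corrupted list pointers are so powerful. Here addresses are just dictionary keys and the field layout is invented for illustration; this resembles no real heap allocator:

```python
FD, BK = 0, 1                 # offsets of the forward/backward pointers in a chunk header
mem = {}                      # sparse toy memory: address -> value

def write_chunk(addr, fd, bk):
    mem[addr + FD] = fd
    mem[addr + BK] = bk

def unlink(addr):
    """Allocator removes a chunk from its doubly linked free list."""
    fd, bk = mem[addr + FD], mem[addr + BK]
    mem[bk + FD] = fd         # *(bk + FD) = fd  <- becomes an arbitrary write
    mem[fd + BK] = bk

write_chunk(0x1000, fd=0x2000, bk=0x3000)  # an honest free chunk
# An overflow from a neighboring chunk replaces fd/bk with attacker values:
mem[0x1000 + FD] = 0xCAFE                  # value to write
mem[0x1000 + BK] = 0x5000                  # (roughly) where to write it
unlink(0x1000)

assert mem[0x5000 + FD] == 0xCAFE          # attacker-chosen value at attacker-chosen address
```

Once the allocator performs the unlink on the corrupted chunk, the attacker has written a value of their choice to an address of their choice.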


Windows introduced a technique called DEP (Data Execution Prevention) to block this kind of exploitation. DEP marks the stack and heap as data: their contents can be read and written, but the CPU refuses to execute instructions from them.


DEP settings on Windows






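A toy permission model captures what DEP enforces. The page names and the fault type below are illustrative, not Windows API concepts:

```python
# page name -> permission set; stack and heap get no "x" bit under DEP
PAGE_PERMS = {"code": {"r", "x"}, "stack": {"r", "w"}, "heap": {"r", "w"}}

def execute_from(page):
    """Model the CPU's no-execute check: refuse to run instructions from data pages."""
    if "x" not in PAGE_PERMS[page]:
        raise MemoryError(f"DEP violation: cannot execute from the {page}")
    return "executed"

assert execute_from("code") == "executed"   # normal program code still runs

try:
    execute_from("stack")                   # injected shellcode would land here...
    faulted = False
except MemoryError:
    faulted = True
assert faulted                              # ...and faults instead of running
```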
In order to bypass DEP, attackers build their shellcode out of code that is already loaded and allowed to execute: the instructions inside DLLs. The payload becomes a sequence of addresses of short instruction snippets inside DLLs rather than raw instructions. This technique is called ROP (return-oriented programming), and the reused snippets are called ROP gadgets. Chained together, gadgets scattered across multiple DLLs carry out the same work a contiguous chunk of shellcode on the stack or heap would have done in the absence of DEP. Building a ROP chain was possible because DLLs loaded at fixed, predictable addresses.

To mitigate ROP, Windows introduced another technique, ASLR (Address Space Layout Randomization), which randomizes the base addresses of DLLs relative to the main executable each time the program starts. The technique is quite effective, but it has flaws: attackers can build chains from DLLs that were not compiled for ASLR, or use an information disclosure (information leak) vulnerability to learn the address of a loaded DLL and plug that address into the exploit.
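The gadget-chaining idea can be simulated with a toy "DLL" containing two gadgets. The addresses, register and gadget semantics are invented for illustration; real gadgets are byte sequences ending in a RET instruction:

```python
state = {"eax": 0}

def gadget_pop_eax(stack):      # models: pop eax ; ret
    state["eax"] = stack.pop()

def gadget_inc_eax(stack):      # models: add eax, 1 ; ret
    state["eax"] += 1

# "Loaded DLL": gadget code living at fixed, known addresses.
dll = {0x77001000: gadget_pop_eax, 0x77001005: gadget_inc_eax}

def execute(stack):
    """Each gadget ends in RET, which fetches the next gadget address off the stack."""
    while stack:
        dll[stack.pop()](stack)

# Attacker-controlled stack after the overflow: only addresses and data, no new code.
rop_chain = [0x77001005, 0x77001005, 41, 0x77001000]   # popped from the end
execute(rop_chain)

assert state["eax"] == 43   # pop 41 into eax, then increment twice
```

Note the payload contains no instructions at all, only addresses and data, which is exactly why DEP cannot stop it.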


Nowadays, stack and heap overflows are rarely seen in popular applications; most companies seem to have done a good job of educating engineers about these basic vulnerabilities. Use-after-free has been the most popular vulnerability of the past five to six years, found in all major browsers as well as Adobe Reader and Adobe Flash. Use-after-free is a memory corruption vulnerability triggered when a program accesses an object that has already been freed: the object is gone, but a pointer to its old memory location survives. Such a pointer is called a dangling pointer; dereferencing it triggers the vulnerability, and attackers can abuse it to execute shellcode.

Microsoft introduced a technique called the isolated heap in mid-2014 to make use-after-free harder to exploit. The isolated heap allocates critical objects on a separate heap, keeping freed blocks away from user-controlled data. It helped, but it was applied only to selected objects, not all of them, so some objects remained at risk. To raise the bar further, Microsoft added Protected Free, also called Deferred Free: instead of freeing an object immediately, the allocator frees it some time later, so the attacker cannot predict when they can take control of the freed memory.
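The allocator behavior that makes a dangling pointer exploitable can be shown with a minimal pool-allocator simulation. Slot indices stand in for pointers and no real memory corruption occurs; the LIFO free list mimics the slot reuse many real allocators exhibit:

```python
class Pool:
    def __init__(self, slots=4):
        self.mem = [None] * slots
        self.free_list = list(range(slots))

    def alloc(self, value):
        slot = self.free_list.pop()     # LIFO reuse: the last freed slot comes back first
        self.mem[slot] = value
        return slot                     # our toy "pointer" is the slot index

    def free(self, slot):
        self.free_list.append(slot)     # contents are NOT cleared on free

pool = Pool()
victim = pool.alloc({"vtable": "legit_handler"})
pool.free(victim)                       # object freed, but `victim` is still held: dangling
attacker = pool.alloc({"vtable": "shellcode"})   # same-size allocation reuses the slot

assert attacker == victim                          # the slot was recycled...
assert pool.mem[victim]["vtable"] == "shellcode"   # ...so the stale reference now
                                                   # sees attacker-controlled data
```

When the program later "calls through" the stale reference, it dispatches on attacker-chosen data. The isolated heap breaks the `attacker == victim` step; deferred free makes its timing unpredictable.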


Another very popular technique used to exploit browsers is the heap spray. JavaScript in a browser stores its variables on the heap, and a heap spray fills the heap with many copies of the shellcode. The advantage of this technique is that the attacker does not need to predict the address of the shellcode accurately: jumping to an address like 0xaaaaaaaa or 0xbbbbbbbb, which most likely points into the heap, will probably land somewhere in the sprayed data. The Chromium sandbox was one of Google Chrome's innovations to contain browser exploitation of all kinds, and Internet Explorer later came up with a similar solution.
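A small simulation shows why a blind guess works once the heap has been sprayed. Region size, addresses and byte values are illustrative; each sprayed chunk is modeled as a long NOP sled ending in a single "shellcode" byte, the usual shape of sprayed payloads:

```python
REGION_BASE = 0x0A000000
REGION_SIZE = 0x00200000                  # 2 MB toy heap region
CHUNK = 0x10000                           # 64 KB per sprayed allocation
NOP, SHELLCODE = 0x90, 0xCC

heap = bytearray(REGION_SIZE)
for base in range(0, REGION_SIZE, CHUNK):            # spray chunk after chunk
    heap[base:base + CHUNK - 1] = bytes([NOP]) * (CHUNK - 1)
    heap[base + CHUNK - 1] = SHELLCODE

guess = 0x0A0B0B0B                        # attacker's blind guess into the region
off = guess - REGION_BASE
assert heap[off] == NOP                   # almost any guess lands inside a sled...
while heap[off] == NOP:                   # ...and execution slides down the sled
    off += 1
assert heap[off] == SHELLCODE             # ...into the payload
```

Because the sled dwarfs the payload, nearly every address in the sprayed region eventually leads to the shellcode.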


But DEP and ASLR were still an obstacle, and a novel technique called JIT spray (Just In Time) emerged, used mostly to exploit Flash-related vulnerabilities. A JIT engine is a native code generator used by all modern browsers to speed up execution by parsing, optimizing and compiling bytecode into native code for the machine to run. The code emitted by the JIT engine is marked executable in memory by default. By choosing the right allocation size per page, this code can be sprayed across the heap; after that, it's a matter of jumping to a known heap address to get code execution, bypassing DEP and ASLR altogether. The bytecode can be generated in real time by the JIT or pre-generated and sent over the wire as a Flash or Java file. The technique was first used with Flash ActionScript, where long XOR sequences were used to smuggle shellcode bytes into the JIT output before jumping to a known heap address for reliable code execution. Researchers later applied the same JIT spray technique to native Java, and more recently to asm.js.
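The XOR-immediate trick can be modeled by building the emitted byte sequence and decoding it from a misaligned entry point. The constants follow the classic published example (`xor eax, 0x3C909090` encodes as `35 90 90 90 3C` on x86); the toy decoder knows only the two opcodes the hidden stream uses:

```python
XOR_EAX = 0x35                            # x86 opcode: xor eax, imm32
IMM = (0x90, 0x90, 0x90, 0x3C)            # little-endian bytes of immediate 0x3C909090

jit_output = bytes([XOR_EAX, *IMM] * 4)   # what the JIT emits for a chain of XORs

# Minimal decoder: opcode -> (mnemonic, instruction length)
TABLE = {0x90: ("nop", 1), 0x3C: ("cmp al, imm8", 2)}

def hidden_stream(code, entry):
    """Decode from a misaligned entry point, as a JIT-spray jump would land."""
    ops, pc = [], entry
    while pc < len(code) and code[pc] in TABLE:
        name, size = TABLE[code[pc]]
        ops.append(name)
        pc += size                        # "cmp al, imm8" swallows the next 0x35 byte
    return ops

ops = hidden_stream(jit_output, entry=1)  # jump one byte past the "real" start
assert ops[:3] == ["nop", "nop", "nop"]   # the attacker's immediate runs as a NOP sled
assert "cmp al, imm8" in ops              # and harmlessly consumes each real opcode
```

Seen one byte off, the attacker-chosen immediates become the instruction stream, turning JIT-emitted, executable pages into sprayed shellcode.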


To keep up with the continuously evolving exploit landscape, Microsoft released EMET (Enhanced Mitigation Experience Toolkit) in 2009. It added a layer of defense against code reuse attacks like ROP and provided better protection against heap and JIT spray, though it had to be installed manually by an administrator. Because EMET was not designed as an integral part of the OS, exploit writers were able to bypass or disable every one of its versions and achieve code execution. EMET nevertheless drove many security innovations across the Windows 7, 8, 8.1 and 10 product lines. As the underlying OS changed, Microsoft decided to build this protection into the OS itself as Windows Defender Exploit Guard, which ships with Windows 10 and will carry forward into future Windows versions with many improvements, and to end support for EMET: the latest version will be EOL'd on July 31, 2018.



Exploitation and anti-exploitation is a perpetual cat and mouse game: security professionals keep coming up with ideas to combat cybercriminals, and at the same time the criminals figure out ways to defeat them. Juniper's advanced threat detection products detect exploits to keep customers safe.



by amohanta at June 21, 2018 04:06 AM

June 20, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Fake News in IT

Stumbled upon “Is Tech News Fake” article by Tom Nolle. Here’s the gist of his pretty verbose text:

When readers pay for news, they get news useful to readers.  When vendors pay, not only do the vendors get news they like, the rest of us get that same story.  It doesn’t mean that the story being told is a lie, but that it reflects the view of an interested party other than the reader.

High-quality content is not cheap, so always ask yourself: who’s paying for the content… and if it’s not you, you may be the product.

Full disclosure: ipSpace.net is funded exclusively with subscriptions and online courses. Some of our guest speakers work for networking vendors, but we always point that out, and never get paid for that.

by Ivan Pepelnjak (noreply@blogger.com) at June 20, 2018 09:20 AM

XKCD Comics

June 19, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Presentation: Three Paths of Enterprise IT

During last week’s SIGS Technology Conference I had a keynote presentation about the three paths of enterprise IT.

Unfortunately, the event wasn’t recorded, but you can view the presentation here. Contact me if you have any questions, or Irena if you'd like to have a similar keynote for your event.

by Ivan Pepelnjak (noreply@blogger.com) at June 19, 2018 07:53 AM

June 18, 2018

My Etherealmind

Off to the Kubernetes – Networking in a Post VM world

Few people are using containers, so why are all the vendors into it?

by Greg Ferro at June 18, 2018 05:03 PM

The Networking Nerd

Conference Impostor Syndrome

In IT we’ve all heard of Impostor Syndrome by now. The feeling that you’re just a lucky person with no real skills, skating by on the seat of your pants, is a very real thing. I’ve felt it, and many of my friends and fellow members of the community have felt it too. It’s easy to deal with when you have time to think or work on your own. However, when you take your show on the road it can creep up before you know it.

Conferences are a great place to meet people and learn about new ideas. They’re also a place where your ideas will be challenged and put on display. It’s not too difficult to imagine meeting a person for the first time at a place like Cisco Live or VMworld and feeling a little awestruck. After all, this could be a person whose work you’ve read for a long time. It could be a person you look up to or someone you would like to have mentor you.

For those in the position of being thrust into the limelight, it can be extremely difficult to push aside those feelings of Impostor Syndrome or even just a general level of anxiety. When people are coming up to you and thanking you for the content you create or even taking it to further extremes, like bowing for example, it can feel like you’re famous and admired for nothing at all.

What the members of the community have to realize is that these feelings are totally natural. You’re well within your rights to want to shy away from attention or be modest. This is doubly true for those of us that are introverts, which seems to happen in higher numbers in IT.

How can you fight these feelings?

Realize You Are Enough. I know it sounds silly to say it but you have to realize that you are enough. You are the person that does what they can every day to make the world a better place in every way you can. It might be something simple like tweeting about a problem you fixed. It may be as impressive as publishing your own network automation book. But you still have to stop and realize you are enough to accomplish your goals.

For those out there that want to tell their heroes and mentors in the community how awesome they are, remember that you’re also forcing them to look at themselves in a critical light sometimes. Reassurances like “I love the way you write” or “I admire how smoothly you keep the podcast going” are huge compliments that people appreciate, because they speak to skills that are honed and practiced.

Be The Best You That You Can Be. This one sounds harder than it might actually be. Now that you’ve admitted that you’re enough, you need to keep being the best person that you can be. Maybe that’s continuing to write great content. Maybe it’s something as simple as taking an hour out of your day to learn a new topic or interact with some new people on social media. It’s important that you take your skill set and use it to make things better overall for everyone.

For those out there that are amazed at the amount of content that someone can produce or the high technical quality of what they’re working on, remember that we’re all the same. We all have the same 24 hours in the day to do what we do. So the application of the time spent studying or learning about something is what separates leaders from the pack.

Build Up Others Slowly. This one is maybe the hardest of all. When you’re talking to people and building them up, you need to be sure to take your time in bringing them along. You can’t just swamp them with knowledge and minute details about their life that you have gleaned from reading blogs or LinkedIn. You need, instead, to bring people along slowly and build them up into the greatest person that you know.

This works in reverse as well. Don’t walk up to someone and start listing off their accomplishments like a resume. Instead, give them some time to discuss things with you. Let the person you’re talking to dictate a portion of the conversation. Even though you may feel the need to overwhelm them with information to justify the discussion, you should let them come to their own place when they are ready. That prevents the feeling of being overwhelmed and makes the conversation much, much easier.

Tom’s Take

It’s very easy to get lost in the world of feeling inadequate about what others think of you. It goes from adulation and excitement to an overwhelming sense of dread that you’re going to let people down. You have to fix that by realizing that you’re enough and doing the best you can with what you have. If you can say that emphatically about yourself then you are well on the way to ensuring that Conference Impostor Syndrome is something you won’t have to worry about.

by networkingnerd at June 18, 2018 11:53 AM

ipSpace.net Blog (Ivan Pepelnjak)

Vertical Integration Musings

One of my readers asked me a question that came up in his business strategy class:

Why did routers and switches end up being vertically integrated (the same person makes the hardware and the software)? Why didn't they go down the same horizontal path as compute (with Intel making chips, OEMs making systems and Microsoft providing the OS)? Why did this resemble the pre-Intel model of IBM, DEC, Sun…?

Simple answer: because nobody was interested in disaggregating them.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at June 18, 2018 06:25 AM


June 16, 2018


Ubuntu image for EVE-NG – Python for network engineers

Lately I’ve started working more and more with EVE-NG to test various network scenarios, automation and in general to try and learn something every day. If you’re familiar with EVE-NG, you know where to find various Linux images which you can download and install. Very helpful indeed; however, all of them come without any … Continue reading Ubuntu image for EVE-NG – Python for network engineers

[[ This is a content summary only. Visit my website for full links, other content, and more! ]]

by Calin at June 16, 2018 12:20 AM

June 15, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Discovering Issues with HTTP/2

A while ago I found an interesting analysis of HTTP/2 behavior under adverse network conditions. Not surprisingly:

When there is packet loss on the network, congestion controls at the TCP layer will throttle the HTTP/2 streams that are multiplexed within fewer TCP connections. Additionally, because of TCP retry logic, packet loss affecting a single TCP connection will simultaneously impact several HTTP/2 streams while retries occur. In other words, head-of-line blocking has effectively moved from layer 7 of the network stack down to layer 4.

What exactly did anyone expect? We discovered the same problems running TCP/IP over SSH a long while ago, but too many people insist on ignoring history and learning only from their own experience.

by Ivan Pepelnjak (noreply@blogger.com) at June 15, 2018 06:50 AM


June 14, 2018

My Etherealmind

Video: Stop Fussing About Network Cabling – Two Beer Networking


by Greg Ferro at June 14, 2018 02:03 PM


Automation for Reliability

Statistics tells us that the more often you do something, the higher the chance of a negative event occurring while you do it.

Applying this revelation to flying: if you fly regularly, the chances of a delayed flight, or of being involved in an incident or accident, increase. A somewhat macabre reference, perhaps.

Let’s take something real which happened to me this week (11th June 2018). Whilst working out of one of Juniper’s regional offices, I returned to the hotel room to carry on working whilst putting my feet up. Something felt strange in the room but I couldn’t put my finger on the weirdness. After a couple of hours, I realised that all of my belongings were gone from the room. Everything! Thanks to a mix-up with the housekeeping system, the maids had thrown my collection of travel items into some bags ready for disposal. Thankfully, the hotel operates a procedure: for a guest’s items to be thrown in the garbage, a manager is required to sign off on the request. A process saved my belongings, and I’m thankful that the managers knew this process and also knew where my stuff was likely to be. Before my items were returned, I had already been equipped with emergency toiletries and was settling down for the evening when a knock came at the door. This is a great frame for an automation story.

In the scope of automation, assume a workflow is composed of solid components (i.e. they do one job well, consume structured inputs and return meaningful error codes and structured outputs). These solid components will communicate with what we can assume is an abstraction layer or a network device. If we change the device code, we can assume that at some point a failure will occur in one or more of these components thanks to interface mismatches and feature creep. Workflows that have been built with “design thinking” methodologies will test for failure scenarios. These failsafe tests are designed to prevent catastrophic failure derived from badly behaved workflows or faulty underlying components. Using the flight example made earlier, it’s true that only so many catastrophic scenarios can be mitigated against. A pilot observing flashing warning lights would not attempt to take off; at the other end of the spectrum, a meteor striking your aircraft like a well-honed pint-swilling dart player hitting a bullseye is something you cannot mitigate against. Using a term that you will get used to very quickly this year, “aim to increase reliability” in both the outcome of your workflow and the processing of it. It’s an NRE (Network Reliability Engineering) tenet. Even if the workflow burns and crashes in the depths of computing hell, it must be part of your design process to alert a human of non-recoverable failure. ChatOps is a great candidate for this.

With my hotel experience, the maids were acting autonomously without much thought. They checked their work sheet and it said what it said. No logic that says “Hey, this person’s stuff is here” seemed to be apparent. You could argue here that housekeeping is a well-oiled autonomous process and the hotel’s overarching processes wrapped reliability gains closely to the simple automations of said housekeeping. The outcome was to prevent guest’s items from being accidentally trashed and it worked. They engineered for reliability.

Successful automation is 90/10 in the favour of composition vs tools. Design in the name of reliability eats tooling for breakfast.

The post Automation for Reliability appeared first on ipengineer.net.

by David Gee at June 14, 2018 11:47 AM

June 13, 2018

Dyn Research (Was Renesys Blog)

Introducing the Internet Intelligence Map

Today, we are proud to announce a new website we’re calling the Internet Intelligence Map. This free site will help to democratize Internet analysis by exposing some of our internal capabilities to the general public in a single tool.

For over a decade, the members of Oracle’s Internet Intelligence team (first born as Renesys, more recently as Dyn Research, and now reborn with David Belson, former author of Akamai’s State of the Internet report) have helped to break some of the biggest stories about the Internet.  From the Internet shutdowns of the Arab Spring to the impacts of the latest submarine cable cut, our continuing mission is to help inform the public by reporting on the technical underpinnings of the Internet and its intersection with, and impact on, geopolitics and e-Commerce.

And since major Internet outages (whether intentional or accidental) will be with us for the foreseeable future, we believe offering a self-serve capability for some of the insights we produce is a great way to move towards a healthier and more accountable Internet.

The website has two sections: Country Statistics and Traffic Shifts.  The Country Statistics section reports any potential Internet disruptions seen during the previous 48 hours. Disruption severity is based on three primary measures of Internet connectivity in that country:  BGP routes, traceroutes to responding hosts and DNS queries hitting our servers from that country.

The screenshot below illustrates how recent national Internet blackouts in Syria are depicted in the Internet Intelligence Map.  Notably, while both BGP routes and traceroutes completing into Syria drop to zero during these blackouts, the number of DNS queries surges.  This suggests the outage may be asymmetrical — packets can egress the country but cannot enter.  We believe the spike in queries are due to additional DNS retries as queries go unanswered.  Visualizations such as these will now be widely available to the public.

We can try to further understand the recent blackouts by analyzing Traffic Shifts, pictured below. Visualizations in the Traffic Shifts section may be a little less familiar to some viewers, so additional explanation is provided.  As part of our Internet measurement infrastructure, we run hundreds of millions of traceroutes daily to all parts of the Internet from hundreds of measurement servers distributed around the world.  In the bottom panel, we attempt to model how traffic reaches a target autonomous system (AS) by plotting the number of traceroutes that traverse a penultimate or ‘upstream’ AS as a function of time.  Additionally, in the upper panel, we report the geometric mean of all observed latencies for traceroutes that traversed the target AS.
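For reference, the geometric mean plotted in the latency panel is the exponential of the mean logarithm of the samples, which damps the influence of a few extreme round-trip times better than an arithmetic mean would. A minimal sketch (function name and units are ours):

```python
import math

def geometric_mean(latencies_ms):
    # exp(mean(log x)) is equivalent to the n-th root of the product of the samples
    return math.exp(sum(math.log(x) for x in latencies_ms) / len(latencies_ms))

assert math.isclose(geometric_mean([10.0, 10.0, 10.0]), 10.0)

# A single 1000 ms outlier drags the arithmetic mean far more than the geometric one:
samples = [10.0, 10.0, 10.0, 1000.0]
assert geometric_mean(samples) < sum(samples) / len(samples)
```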

Below, we see which upstream ASes traceroutes traversed to get to Syrian Telecom. The gaps in the colored stacked plot correspond to the outages, and the colors represent the various transit providers for Syrian Telecom that we observe. PCCW (AS3491) appears to be the most commonly traversed AS, and Tata (AS6453) perhaps the least, by traceroute volume.

An astute user of the website will notice that CYTA, one of Syrian Telecom’s transit providers also experienced traffic shifts that align with the blackouts (pictured below).  When Syrian Telecom is down, CYTA loses its transit from Cogent.  This is due to the fact that CYTA’s Cogent transit only handles traffic to Syria, something plainly evident in BGP routing.

We call these Traffic Shifts and color them blue on the map because they aren’t necessarily outages or connectivity impairments.  They are simply changes — good, bad or neutral — in how traffic is being routed through the Internet.  On any given day, there are hundreds of such shifts as ISPs change transit providers or re-engineer their networks.  The tool enumerates the top one hundred shifts in the previous 48-hour period and allows our users to explore a macro-level connectivity picture for any given AS.

Take it for a test drive and let us know what you think!

by Doug Madory at June 13, 2018 04:24 PM

ipSpace.net Blog (Ivan Pepelnjak)

What Is Intent-Based Networking?

This blog post was initially sent to the subscribers of my SDN and Network Automation mailing list. Subscribe here.

Whenever someone mentions intent-based networking I try to figure out what exactly they’re talking about. Not surprisingly, I get a different answer every single time. Confused by all that, I tried to find a good definition, but all I could find was vendor marketing along the lines of “Intent-based networking captures and translates business intent so that it can be applied across the network,” or industry press articles regurgitating vendor white papers.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at June 13, 2018 06:52 AM


June 12, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Start with Business Requirements, not Technology

This is the feedback I got from someone who used ExpertExpress to discuss the evolution of their data center:

The session has greatly simplified what had appeared to be a complex and difficult undertaking for us. Great to get fresh ideas on how we could best approach our requirements and with the existing equipment we have. Very much looking forward to putting into practice what we discussed.

And here’s what Nicola Modena (the expert working with the customer) replied:

As I told you, the problem is usually to map the architectures and solutions that are found in books, whitepapers, and validated designs into customer’s own reality, then to divide the architecture into independent functional layers, and most importantly to always start from requirements and not technology.

A really good summary of what ipSpace.net is all about ;) Thank you, Nicola!

by Ivan Pepelnjak (noreply@blogger.com) at June 12, 2018 06:44 AM

June 11, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Avoid Summarization in Leaf-and-Spine Fabrics

I got this design improvement suggestion after publishing When Is BGP No Better than OSPF blog post:

Putting all the leafs in the same ASN and filtering routes sent down to the leafs (sending just a default) are potential enhancements that make BGP a nice option.

Tony Przygienda quickly wrote a one-line rebuttal: “unless links break ;-)

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at June 11, 2018 06:39 AM


June 08, 2018

My Etherealmind

Video: The Network is Nominal – Two Beer Networking

Working or Not Working. There is no such thing as a good network, just a working network.

by Greg Ferro at June 08, 2018 02:03 PM

ipSpace.net Blog (Ivan Pepelnjak)

Snabb Switch Update on Software Gone Wild

In 2014, we did a series of podcasts on Snabb Switch (Snabb Switch and OpenStack, Deep Dive), a software-only switch delivering 10-20 Gbps of forwarded bandwidth per x86 core. In the meantime, the Snabb community slowly expanded, optimized the switching code, built a number of solutions on top of the packet forwarding core, and even forked a just-in-time Lua compiler to get better performance.

To find out the details, listen to Episode 91 of Software Gone Wild in which Luke Gorrie explained how far the Snabb project has progressed in the last four years.

by Ivan Pepelnjak (noreply@blogger.com) at June 08, 2018 06:26 AM

The Networking Nerd

A Wireless Brick In The Wall

I had a very interesting conversation today with some friends about predictive wireless surveys. The question was really more of a confirmation: Do you need to draw your walls in the survey plan when deciding where to put your access points? Now, before you all run screaming to the comments to remind me that “YES YOU DO!!!”, there were some other interesting things that were offered that I wanted to expound upon here.

Don’t Trust, Verify

One of the most important parts of the wall question is material. Rather than just assuming that every wall in the building is made from gypsum or wood, you need to actually go to the site, or have someone go, and find out what the walls are made of. Don’t guess about the construction material.

Why? Because not everyone uses the same framing for buildings. Wood beams may be popular in one type of building, but steel reinforcement is used in other kinds. And you don’t want to base your predictive survey on one only to find out it’s the other.

Likewise, you need to make sure that the wall itself is actually made of what you think it is. Find out what kind of sheetrock they used. Make sure it’s not actually something like stucco plastered over chicken wire. Chicken wire as the structure of a plaster wall is a guaranteed Faraday Cage.

Another fun thing to run across is old buildings. One site survey I did for a wireless bid involved making sure that a couple of buildings on the outer campus were covered as well. When I asked about the buildings and when they were made, I found out they had been built in the 1950s and were constructed like bomb shelters. Thick concrete walls everywhere. Reinforcement all throughout. Once I learned this, the number of APs went up and the client had to get an explanation of why all the previous efforts to cover the buildings with antennas hadn’t worked out so well.

X-Ray Vision

Speaking of which, you also need to make sure to verify the structures underneath the walls. Not just the reinforcement. But the services behind the walls. For example, water pipes go everywhere in a building. They tend to be concentrated in certain areas but they can run the entire length of a floor or across many floors in a high rise.

Why are water pipes bad for wireless? Well, it turns out that water absorbs radiation very efficiently at 2.4GHz – the same band used by 802.11b/g/n. It’s the same reason microwave ovens heat food at that frequency. Which means water pipes love to absorb wireless signals. So you need to know where they are in the building.

Architectural diagrams are a great way to find out these little details. Don’t just assume that walking through a building and staring at a wall is going to give you every bit of info you need. You need to research plans, blueprints, and diagrams about things. You need to understand how these things are laid out in order to know where to locate access points and how to correct predictive surveys when they do something unexpected.

Lastly, don’t forget to take into account the movement and placement of things. We often wish we could get involved in a predictive survey at the beginning of the project. A greenfield building is a great time to figure out the best place to put APs so we don’t have to go crawling over bookcases. However, you shouldn’t discount the chaos that can occur when an office is furnished or when people start moving things around. Things like plants don’t matter as much as when someone moves the kitchen microwave across the room or decides to install a new microphone system in the conference room without telling anyone.

Tom’s Take

Wireless engineers usually find out when they take the job that it involves being part radio engineer, part networking engineer, part artist, and part construction general contractor. You need to know a little bit about how buildings are made in order to make the invisible network operate optimally. Sure, traditional networking guys have it easy. They can just avoid running cables by fluorescent lights or interference sources and be good. But wireless engineers need to know if the very material of the wall is going to cause problems for them.

by networkingnerd at June 08, 2018 04:48 AM

XKCD Comics

June 07, 2018


Juniper vQFX10K on ESXi 6.5

A quick and dirty post on running the Juniper vQFX on VMWare ESXi.

You might be wondering why ESXi, seeing as we’re all cloudy types. ESXi is purely a case of laziness. Each server in my control has ESXi 6.5 installed. This becomes tin management at the most basic level.

Part of my home network has a DMZ which has several public IP addresses, and I expose systems and VNFs externally over the internet. More recently, thanks to the IP fabric craze, part of what I’m exploring is easy integration and feature enhancement on Juniper vQFX instances. Two choices exist:

  • Install vQFX on servers with KVM
  • Install on ESXi

I went for the easy ground (because why make it harder than it has to be?). Turns out, it wasn’t as straightforward as it should be, although not difficult. Just a niggle.

Installation Process

Grab yourself the RE and PFE images from the Juniper download site:
https://www.juniper.net/support/downloads/?p=vqfxeval
I grabbed the 18.1 RE and the 17.4 PFE image.

Next, extract the contents of the two downloaded files. You can use the trusty tar tool to extract the files required. Below are the two files I downloaded from the Juniper site and renamed for clarity. Run

tar -xzf <file>

in each directory. Your outcome will look like below:

├── PFE
│   ├── Vagrantfile
│   ├── box.ovf
│   ├── metadata.json
│   ├── packer-virtualbox-ovf-1520879272-disk001.vmdk
│   └── vqfx-pfe-virtualbox.box
└── RE
    ├── Vagrantfile
    ├── box.ovf
    ├── metadata.json
    ├── packer-virtualbox-ovf-1524541301-disk001.vmdk
    └── vqfx-re-virtualbox.box

One required step here is to run each vmdk file through the vmkfstools tool on an ESXi machine. Here’s the command I used for that. This tool converts the disk back to being thin provisioned.

vmkfstools -i <input_filename> <output_filename> -d thin

If you do not do this step, the VMs will still boot, however I also saw some disk write errors. I can’t comment any further beyond seeing some error messages, due to investigation time constraints.

Next, to try and figure out what each VM needs and what operating system it runs on, take a sneak peek inside each of the box.ovf files. By looking inside the RE OVF file we know it’s a FreeBSD operating system with an IDE disk controller. Ok. We can also see the requirements for vCPU and RAM.

Here’s the resulting data needed to build those VMs.

    RE:
      CPU:        1
      RAM:        1024 MB
      OS:         FreeBSD
      DiskCtl:    IDE, Controller0 Master
      NICs:       Management, Internal
      Type:       VMware 5.5 (type 11)

    PFE:
      CPU:        1
      RAM:        2048 MB
      OS:         Ubuntu x64
      DiskCtl:    IDE, Controller0 Master
      NICs:       Management, Internal, Revenue(n)...
      Type:       VMware 5.5 (type 11)

Now you can use the WebUI or vmtools CLI to build the virtual machines. I had issues running these VMs as virtual hardware version 6.5, but I can confirm they work just fine as 5.5!


One final set of warnings. When launching the VMs, sometimes the NIC order changes. Ensure the ordering of NICs:

  1. Management NIC
  2. Internal NIC (on own vSwitch with 9000 byte MTU)
  3. For the PFE, port three onwards are revenue ports

Also, the 18.1 PFE image isn’t available yet, so I used the 17.4 PFE. It is normal to have version mismatches between the RE and PFE virtual machines, so don’t worry!

Finally, this is a community supported project, so please do not moan at Juniper or your account manager!


The post Juniper vQFX10K on ESXi 6.5 appeared first on ipengineer.net.

by David Gee at June 07, 2018 03:50 PM

June 06, 2018

Dyn Research (Was Renesys Blog)

IPv6 Adoption Still Lags In Federal Agencies

On September 28, 2010, Vivek Kundra, Federal CIO at the time, issued a “Transition to IPv6” memorandum noting that “The Federal government is committed to the operational deployment and use of Internet Protocol version 6 (IPv6).” The memo described specific steps for agencies to take to “expedite the operational deployment and use of IPv6”, and laid out target deadlines for key milestones. Of specific note, it noted that agencies shall “Upgrade public/external facing servers and services (e.g. web, email, DNS, ISP services, etc) to operationally use native IPv6 by the end of FY 2012.”

For this sixth “launchiversary” of the World IPv6 Launch event, we used historical Internet Intelligence data collected from Oracle Dyn’s Internet Guide recursive DNS service to examine IPv6 adoption trends across federal agencies both ahead of the end of FY 2012 (September 2012) deadline, as well as after it.


The data set used for this analysis is similar to the one used for the recent “Tracking CDN Usage Through Historical DNS Data” blog post, but in this case, it only includes .gov hostnames. While the memorandum calls out the use of IPv6 for ‘web, email, DNS, ISP services, etc.’, in order to simplify the analysis, this post only focuses on hostnames of the form www.[agency].gov, essentially limiting it to public Web properties.  Furthermore, the GSA’s master list of .gov domains was used to identify federal agencies for the analysis. Although they may have been present in the initial data set, .gov hostnames associated with cities, counties, interstate agencies, native sovereign nations, and state/local governments were not included in the analysis.

The analysis was done on historical recursive DNS data from September 2009 through October 2017, encompassing federal fiscal years 2010-2017. The graphs below are aggregated by month, and reflect the first time that a given hostname was associated with a AAAA DNS resource record within our data set – note that this may differ from the date that the hostname was first available over IPv6. In addition, the data set used for this analysis is not necessarily exhaustive across .gov domains, as it reflects only those hostname requests made to the Internet Guide service.
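The "first time a given hostname was associated with a AAAA record" aggregation described above can be sketched roughly like this (the hostnames and dates below are made up, purely for illustration):

```python
from collections import defaultdict

# Made-up (hostname, observation date) pairs standing in for the
# historical recursive DNS data set.
observations = [
    ("www.example.gov", "2012-09-15"),
    ("www.example.gov", "2012-06-06"),
    ("www.other.gov", "2012-09-03"),
]

# Keep only the earliest AAAA observation per hostname...
first_seen = {}
for host, date in sorted(observations, key=lambda o: o[1]):
    first_seen.setdefault(host, date)

# ...then aggregate those first-seen dates by month, as in the graphs.
by_month = defaultdict(int)
for date in first_seen.values():
    by_month[date[:7]] += 1

print(dict(by_month))  # {'2012-06': 1, '2012-09': 1}
```

Note how the later September observation for www.example.gov is discarded: only the first sighting counts toward a month's total, which is why a first-seen date may trail the date the site actually went live on IPv6.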

Summary Of Findings

In short, Internet Intelligence data showed that IPv6 adoption across federal government www sites was less than aggressive across the survey period, with many agencies failing to deploy public Web sites on IPv6 by the end of FY 2017.  Ahead of the deadline, IPv6 adoption was generally slow through 2009-2011, although activity did begin to increase in December 2011, continuing through the September 2012 deadline. Adoption continued at a solid rate throughout FY 2013, but remained generally low through the end of the survey period, with some periods of increased activity in 2017. Among the sites identified, most remain available in a dual-stack (IPv4 & IPv6) setup, but some have fallen back to IPv4 only, and others are no longer available. Akamai and Amazon Web Services are the CDN and cloud platform providers of choice for sites delivered from third-party service providers.

Detailed Analysis

The Executive Branch has the largest number of agencies listed in the GSA master list referenced above. As shown in the figure below, there were a significant number for which we did not find www sites on IPv6 during the survey period. Five agencies deployed sites on IPv6 only ahead of the deadline, and 20 deployed sites only after the deadline, while 28 agencies showed activity both before and after the deadline.  Of the eleven listed agencies in the Legislative branch, four deployed www sites on IPv6 only after the deadline, while no IPv6 sites were found for the remaining seven. The two agencies in the Judicial branch were split, with one integrating IPv6 after the deadline, and no IPv6 Web sites found for the other.

IPv6 Adoption for Federal Agencies Before & After Deadline

Examining that data in more detail shows some interesting activity and trends. In the figure below, the first big spike of activity is seen in June 2011, with AAAA record first seen dates for .gov www sites clustered around World IPv6 Day, which took place on June 8. (Click the graph to view a larger version of the figure.) The Departments of Commerce, Energy, and Health & Human Services launched the largest numbers of Web sites on IPv6 during that month. However, activity all but disappeared until December, when the Department of Veterans Affairs began a multi-month effort to make several hundred topical and city-specific Web sites available via IPv6.  Following the VA’s lead, a number of other agencies deployed Web sites on IPv6 through the first half of calendar year 2012, with a peak of activity around the initial World IPv6 Launch event in June. However, it is clear that a number of agencies scrambled to meet the end of FY 2012 deadline, with 115 Web sites from over 20 agencies first appearing on IPv6 in September.

IPv6 adoption by month ahead of the FY 2012 deadline

IPv6 adoption tailed off in the months following the September 2012 deadline, as illustrated in the figure below. (Click the graph to view a larger version of the figure.) Starting in June 2013, the Department of Commerce began turning up dozens of topical NOAA sites on IPv6, with the initiative lasting about a year. Beyond that, AAAA records were first seen for only 20-30 new federal Web sites per month through early 2017. Interestingly, the yearly World IPv6 Launch anniversaries during that period seemed to have little impact – no meaningful increases were observed around those dates. However, a significant spike was seen in June 2017, with 120 sites from 18 agencies first observed on IPv6. The Departments of Commerce, Energy, Health & Human Services, and the Interior were the most active agencies that month.

IPv6 adoption after the FY 2012 deadline

Current Status

The figures above illustrate the deployment of federal agency Web sites on IPv6 over an eight-year period that ended in October 2017. We also examined the current state of the 2,255 sites identified over that timeframe – that is, how many remain available over IPv6? As shown in the figure below, the news here is relatively good, with over 1,600 available as dual-stacked sites, reachable on IPv6 and IPv4. Interestingly, three sites (www.ipv6.noaa.gov, www.maryland.research.va.gov, and www.e-qip.gov) are available only over IPv6, with DNS lookups returning only AAAA records. Unfortunately, over 200 of the identified sites have fallen back to being available only over IPv4, while over 360 of them are no longer reachable, responding to DNS lookups with an NXDOMAIN.

Current disposition of identified IPv6 Web sites
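The dual-stack / IPv4-only / IPv6-only / unreachable buckets used in the figure boil down to a simple classification of each hostname's current DNS answers. A minimal sketch (the record values are illustrative documentation addresses):

```python
def classify(a_records, aaaa_records, nxdomain=False):
    """Bucket a site the way the figure does, based on its DNS answers."""
    if nxdomain:
        return "no longer reachable"
    if a_records and aaaa_records:
        return "dual-stack"
    if aaaa_records:
        return "IPv6-only"
    if a_records:
        return "IPv4-only"
    return "no longer reachable"

print(classify(["192.0.2.10"], ["2001:db8::10"]))  # dual-stack
print(classify([], ["2001:db8::10"]))              # IPv6-only
print(classify(["192.0.2.10"], []))                # IPv4-only
print(classify([], [], nxdomain=True))             # no longer reachable
```

Run over the 2,255 identified sites, this kind of bucketing yields the distribution described above: over 1,600 dual-stack, three IPv6-only oddities, 200+ fallen back to IPv4, and 360+ answering NXDOMAIN.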

Many federal agencies work with cloud and CDN providers as part of IT modernization efforts, or to improve the performance, reliability, and security of their Web presence. Some of the identified sites included CNAMEs within their DNS records. For those sites, we analyzed the CNAMEs to identify the use of popular cloud and CDN providers, with the results shown in the figure below. For those sites accelerated through a CDN, over 300 of them make use of Akamai’s IPv6-enabled services, while a smaller number are delivered over IPv6 via Amazon’s Cloudfront service, Cloudflare, and Limelight. Of those sites served directly from an IPv6-enabled cloud platform, the largest number came from Amazon Web Services, while the remainder came from Google Hosted Sites, IBM Cloud, and a small number of other providers.

CDN & Cloud usage for selected identified IPv6 Web sites


A recent FedTech article noted that “Agency adoption of IPv6 moves at a glacial pace” but also that “Most have started to ensure their public websites are accessible via IPv6 using dual-stack environments”. Our analysis of eight years of historical recursive DNS data supports these assertions – while much progress has been made, there is still a long way to go.

During the six years since the initial World IPv6 Launch event, many cloud and CDN providers have moved to ease the transition to IPv6, making it easy for customers to support it, either enabling it by default when a new site is configured on their platform, or via a simple configuration option. While federal agencies have been directed to support IPv6 throughout their technology stack, it is arguably easier than ever to do so for public-facing Web sites and applications.

by David Belson at June 06, 2018 12:19 PM

ipSpace.net Blog (Ivan Pepelnjak)

Automation Win: Document Cisco ACI Configuration

This blog post was initially sent to the subscribers of my SDN and Network Automation mailing list. Subscribe here.

A while ago I complained how the GUI- or API-based orchestration (or intent-based) systems make it hard to figure out what exactly has been configured because they can’t give you a single text configuration file that you could track with version-control software.

Dirk Feldhaus found the situation so ridiculous that he decided to create an Ansible playbook that collects and dumps tenant parameters configured on a Cisco ACI tenant as a homework assignment in the Building Network Automation Solutions online course. As he explained the problem:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at June 06, 2018 06:44 AM

XKCD Comics

June 05, 2018

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Integrating 3rd Party Firewalls with Amazon Web Services (AWS) VPC Networking

After figuring out how packet forwarding really works within AWS VPC (here’s an overview; the slide deck is already available to ipSpace.net subscribers) the next obvious question should be: “and how do I integrate a network services device like a next-generation firewall I have to use because $securityPolicy into that environment?”

Please don’t get me started on whether that makes sense, that’s a different discussion.

Christer Swartz, an old-time CCIE and occasional guest on Software Gone Wild podcast will show you how to do it with a Palo Alto firewall during my Amazon Web Services Networking Deep Dive workshop on June 13th in Zurich, Switzerland (register here).

by Ivan Pepelnjak (noreply@blogger.com) at June 05, 2018 07:39 AM

Aaron's Worthless Words

An Update for my Adoring Fans

I feel like a teenage girl with a fashion blog who hasn’t posted in 6 months and comes back with “I know I haven’t posted in a while…”  Sigh.  It’s been right at a year since I actually published a post, so I figured I would give everyone an update.

I’ve had some personal things going on lately, and those have taken all of my energy.  We’ve made it through those rough times, so my energy is coming back.  I’m feeling better every day, and I hope I can get back to producing some content.  And, let me tell you…I’ve got some stuff to talk about.

*insert star wipe here*

We got a new director-level dude at the office, and he’s really mixing things up for us.  His philosophy includes changing the way we do everything that we do.  Like literally everything.  He ran a report for me on my ticket queue and showed me that 60% of my ticket count was on stupid stuff that’s below my pay grade.  His advice: Make somebody else do it.  So I did.  I taught myself some more Python (not hard since I’ve done coding stuff throughout my career and college), learned some Ansible, wrote a couple modules, put some playbooks in Rundeck, and taught our support center how to use them.  Now I don’t have to worry about that noise any more.  Inspired by my resounding success, I looked at other tasks I was doing.  Firewall rule updates, simple routing changes, packet capture setups…now all done through some sort of automation tool.  The whole team is involved, too.  On a regular basis, my teammates and I get calls from other groups asking for help automating their own tasks.  Proud moments for us.  More on that later.

We’re not only changing the way we do things day-to-day, but we’re also thinking about longer-term architectural changes.  We’re bringing in whitebox switches where we can to save capital while still providing functionality and stability. In our first journey down the whitebox road, we went with Cumulus Networks running on top of EdgeCore switches to form a pretty little EVPN fabric in one of our smaller data centers.  I’ll write more on this later, but it was a great experience doing something other than the normal IOS and NX-OS configurations.  If we put our new-found enthusiasm for automation together with this Debian-based OS on the switches, we get a nifty little system that’s functional, scalable, stable, flexible, manageable…I’m running out of Ls here.  I’m really super-excited to be part of this.

One thing that hasn’t changed, though, is attending Cisco Live.  I will be there next week with bells on.  This year, I’m going on the Imagine pass, so let’s see what happens.  As usual, my schedule will be pretty booked.  The Saturday Adventure is at Kennedy Space Center.  I’ll be at Tech Field Day Extra on Monday and Wednesday.  World of Solutions is on the schedule somewhere.  There are parties.  There are tweetups.  There are get-togethers.  A full week of building the real social network (you know…like eyeball contact) and catching up with both people and technology.  All make for a great experience that I wouldn’t miss for anything.  I’m sure I’ll see a lot of you next week in Orlando.

Send any new blog tags questions to me.

by Aaron Conaway at June 05, 2018 02:30 AM

June 04, 2018

My Etherealmind

Response: Microsoft buys GitHub. Its Official.

Money can buy you cool friends and build your platform.

by Greg Ferro at June 04, 2018 03:29 PM

ipSpace.net Blog (Ivan Pepelnjak)

Is EBGP Really Better than OSPF in Leaf-and-Spine Fabrics?

Using EBGP instead of an IGP (OSPF or IS-IS) in leaf-and-spine data center fabrics is becoming a best practice (read: thing to do when you have no clue what you’re doing).

The usual argument defending this design choice is “BGP scales better than OSPF or IS-IS”. That’s usually true (see also: Internet), and so far, EBGP is the only reasonable choice in very large leaf-and-spine fabrics… but does it really scale better than a link-state IGP in smaller fabrics?

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at June 04, 2018 06:35 AM

XKCD Comics

June 01, 2018

The Networking Nerd

Avoiding A MacGyvered Network

Ivan Pepelnjak has an interesting post up today about MacGyver-ing in the network. He and Simon Milhomme are right that most small-to-medium sized networks are pretty much non-reference architectures and really, really difficult to manage and maintain properly on the best of days. On the worst of days, they’re a nightmare that make you want to run screaming into the night. But why?

One Size Never Fits All

Part of the issue is that reference architectures and cookie-cutter designs aren’t made for SMEs. Sure, the large enterprise and cloud providers have their own special snowflakes. But so too do small IT shops that have been handed a pile of parts and told to make it work.

People like Greg Ferro and Peyton Maynard-Koran believe this is due to vendors and VARs pushing hardware and sales cycles like crazy. I have attributed it to the lack of real training and knowledge about networking. But, it also has a lot to do with the way that people see IT as a cost center. We don’t provide value like marketing. We don’t collect checks like accounting. At best, we’re no different than the utility companies. We’re here because we have to be.

Likewise, when IT is forced into making decisions based on some kind of rebate or sale we tend to get stuck with the blame when things don’t work correctly. People that wouldn’t bat an eye at buying a new Rolex or BMW get downright frugal when they’re deciding which switches to buy to run their operation for the next 10 years. So, they compromise. Surely you all can make this switch work? It only has half the throughput as the one you originally specced but it was on sale!

So, stuck with bad decisions and no real documentation or reference points, we IT wizards do what we can with what we have. Just like Angus MacGyver. Sure, those networks are a pain to manage. They’re a huge issue when it’s time to upgrade to the next discount switch. And given the number of fires that we have to fight on a daily basis, we never get a chance to go back and fix what we nailed up in the first place. Nothing is more permanent than a temporary fix. And no install lasts longer than one that was prefaced with “let me just get this working for the next week”.

Bespoke Off The Rack

So, how do we fix the MacGyver-ing? It’s not going to be easy. It’s going to be painful and require us to step out of our comfort zones.

  1. Shut Up And Do As I Say. This one is hard. Instead of having management breathing down your neck to get something installed or working on their schedule, you need to push back. You need to tell people that rushing is only going to complicate things. You need to fire back when someone hands you equipment that doesn’t meet your spec. You need to paraphrase the famous Shigeru Miyamoto – “A delayed network is eventually good, but a rushed installation is bad forever.”
  2. Document Like It Will Be Read Aloud In Court. Documentation isn’t just a suggestion. It’s a necessity. We’ve heard about the “hit by a bus” test. It’s more than just that, though. You need to not only be able to replace yourself but also to be able to have references for why you made the decisions you did. MacGyver-ing happens when we can’t remember why we made the decisions we did. It happens when we need to come up with a last minute solution to an impossible problem created by other people (NSFW language). Better to have that solution documented in full so you know what’s going on three years from now.
  3. Plan Like It’s Happening Tomorrow. Want to be ready for a refresh? Plan it today. Come back to your plan. Have a replacement strategy for every piece of equipment in your organization. Refresh the plan every few months. When someone comes to you with a new idea or a new thing, write up a plan. Yes, it’s going to be a lot of spinning your wheels. but it’s also going to be a better solution when someone comes to you and says that it’s time to make a project happen. Then, all you need to do is pull out your plan and make it happen. Imagine it like the opposite of a DR plan – Instead of the worst case scenario this is your chance to make it the best case.

Tom’s Take

There’s a reason the ringtone on my phone has been the theme from MacGyver for the last 15 years. I’m good at building something out of nothing. Making things work when they shouldn’t. And, as bad as it sounds, sometimes that’s what’s needed. Especially in the VAR world. But it shouldn’t be standard operating procedure. Instead, make your plan, execute it when the time comes, and make sure no one pushes you into a situation where MacGyver-ing is the last option you have.

by networkingnerd at June 01, 2018 07:29 PM

ipSpace.net Blog (Ivan Pepelnjak)
XKCD Comics

May 31, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Amazon Web Services Networking Overview

Traditional networking engineers, or virtualization engineers familiar with vSphere or VMware NSX, often feel like Alice in Wonderland when entering the world of Amazon Web Services. Everything looks and sounds familiar, and yet it all feels a bit different.

I decided to create a half-day workshop (first delivery: June 13th in Zurich, Switzerland) to make it easier to grasp the fundamentals of AWS networking, and will publish high-level summaries as a series of blog posts. Let’s start with an overview of what’s different:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at May 31, 2018 05:08 AM

May 30, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Typical EVPN BGP Routing Designs

As discussed in a previous blog post, IETF designed EVPN to be next-generation BGP-based VPN technology providing scalable layer-2 and layer-3 VPN functionality. EVPN was initially designed to be used with MPLS data plane and was later extended to use numerous data plane encapsulations, VXLAN being the most common one.

Design Requirements

Like any other BGP-based solution, EVPN uses BGP to transport endpoint reachability information (customer MAC and IP addresses and prefixes, flooding trees, and multi-attached segments), and relies on an underlying routing protocol to provide BGP next-hop reachability information.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at May 30, 2018 07:06 AM

Network Automation with Nornir (formerly Brigade) on Software Gone Wild

David Barroso was sick-and-tired of using ZX Spectrum of Network Automation and decided to create an alternative with similar functionality but a proper programming language instead of YAML dictionaries masquerading as one. The result: Nornir, an interesting network automation tool formerly known as Brigade we discussed in Episode 90 of Software Gone Wild.

2018-05-30: Brigade was renamed to Nornir. As David wrote me: “just right after the podcast was published we got contacted by Microsoft engineers as they already had a quite popular Kubernetes automation tool called brigade as well so we decided to change the name of the project.”


by Ivan Pepelnjak (noreply@blogger.com) at May 30, 2018 06:15 AM

XKCD Comics

May 29, 2018

My Etherealmind

Shipping Faulty, Expensive and Incomplete products as Digital Transformation

Silicon Valley has rules about shipping faulty, incomplete products for the first version. If you are not embarrassed by the first version of your product, you’ve launched too late. – Reid Hoffman Many companies, including Enterprise IT vendors, take this as a way to charge high prices while they finish the product. Executives readily convince […]

by Greg Ferro at May 29, 2018 06:15 PM

ipSpace.net Blog (Ivan Pepelnjak)

Upcoming Webinars: June 2018 and Beyond

Wow. Where did the spring 2018 go? It’s almost June… and time for a refreshed list of upcoming webinars:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at May 29, 2018 07:36 AM

May 28, 2018

My Etherealmind
ipSpace.net Blog (Ivan Pepelnjak)

Happy Eyeballs v2 (and how I Was Wrong Again)

In Moving Complexity to Application Layer I discussed the idea of trying to use all addresses returned in a DNS response when trying to establish a connection with a server, concluding with “I don’t think anyone big enough to influence browser vendors is interested in reinventing this particular wheel.”

I’m really glad to report I was wrong ;) This is what RFC 8305 (Happy Eyeballs v2) says:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at May 28, 2018 06:34 AM

XKCD Comics

May 26, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Fun: Playing Battleships over BGP

BGP is the kitchen-sink of networking protocols, right? Whatever control-plane information you need to transport around, you can do it with BGP… including the battleship coordinates carried in BGP communities.

On the more serious front, it's nice to see at least some ISPs still care enough about the stability of the global Internet to use BGP route flap dampening.

by Ivan Pepelnjak (noreply@blogger.com) at May 26, 2018 01:00 PM

May 25, 2018

ipSpace.net Blog (Ivan Pepelnjak)

Video: SPB Fabric Use Cases

As part of his “how does Avaya implement data center fabrics” presentation, Roger Lapuh talked about use cases for SPB in data center fabrics.

I have no idea what Extreme decided to do with the numerous data center fabric solutions they bought in the last few years, so the video might have just a historic value at this point… but it’s still nice to see what you can do with smart engineering.

by Ivan Pepelnjak (noreply@blogger.com) at May 25, 2018 06:52 AM