December 09, 2016

Blog (Ivan Pepelnjak)

Snabb Switch with vMX Control Plane on Software Gone Wild

In Software Gone Wild Episode 52, Katerina Barone-Adesi explained how Igalia implemented 4-over-6 tunnel termination (lwAFTR) with Snabb Switch. Their solution focused on a very fast data plane and had no real control plane.

No problem – there are plenty of stable control planes on the market, all we need is some glue.

Read more ...

by Ivan Pepelnjak at December 09, 2016 06:52 AM

XKCD Comics

December 08, 2016

Potaroo blog

Scoring the DNS Root Server System, Pt2 - A Sixth Star?

In November I wrote about some simple tests that I had undertaken on the DNS Root nameservers. The tests looked at the way the various servers responded when they presented a UDP DNS response that was larger than 1,280 octets. I awarded each of the name servers up to five stars depending on how that managed to serve such large responses in IPv4 and IPv6. I'd like to return to this topic by looking at one further aspect of DNS server behaviour, namely the way in which servers handle large UDP responses over IPv6.

December 08, 2016 12:00 AM

December 07, 2016


Juniper Ambassador

I am delighted to announce that earlier this week I was accepted into Juniper’s Ambassador program. To say that I am completely honored is an understatement. Working with Juniper’s products and technologies has been a labor of love for me dating back almost 18 years, since my first introduction to Junos back in early 1999 — … Continue reading Juniper Ambassador

by Stefan Fouant at December 07, 2016 02:41 PM

Networker's Online

L3 fabric DC -The underlay Network -Part1

In the previous posts we discussed the classic DC designs and the M-LAG solution. In this post we will cover the basic L3 fabric DC. You might never have heard of it, or you might think it’s a solution only for massive-scale DCs. Yes, the most massive DCs in the world run L3 fabrics, but nowadays more and more customers are moving …

The post L3 fabric DC -The underlay Network -Part1 appeared first on

by Ayman AboRabh at December 07, 2016 08:54 AM

Blog (Ivan Pepelnjak)

Q&A: Building a Layer-2 Data Center Fabric in 2016

One of my readers designing a new data center fabric that has to provide L2 transport across the data center sent me this observation:

While we don’t have plans to seek an open solution in our DC we are considering ACI or VXLAN with EVPN. Our systems integrator partner expressed a view that VXLAN is still very new. Would you share that view?

Assuming he wants to stay with Cisco, what are the other options?

Read more ...

by Ivan Pepelnjak at December 07, 2016 08:07 AM


December 06, 2016

Blog (Ivan Pepelnjak)

Response: On the Death of OpenFlow

On November 7th SDx Central published an article saying “OpenFlow is virtually dead.” There’s a first time for everything, and it’s real fun to read a marketing blurb on a site sponsored by SDN vendors claiming the shiny SDN parade unicorn is dead.

On a more serious note, Tom Hollingsworth wrote a blog post in which he effectively said “OpenFlow is just a tool. Can we please find the right problem for it?”

Read more ...

by Ivan Pepelnjak at December 06, 2016 08:01 AM

The Networking Nerd

HPE Networking: Past, Present, and Future


I had the chance to attend HPE Discover last week by invitation from their influencer team. I wanted to see how HPE Networking had been getting along since the acquisition of Aruba Networks last year. There have been some moves and changes, including a new partnership with Arista Networks announced in September. What follows is my analysis of HPE’s Networking portfolio after HPE Discover London and where they are headed in the future.

Campus and Data Center Divisions

Recently, HPE reorganized their networking division along two different lines. The first is the Aruba brand that contains all the wireless assets along with the campus networking portfolio. This is where the campus belongs. The edge of the network is an ever-changing area where connectivity is king. Reallocating the campus assets to the capable Aruba team means that they will do the most good there.

The rest of the data center networking assets were loaded into the Data Center Infrastructure Group (DCIG). This group is headed up by Dominick Wilde and contains things like FlexFabric and Altoline. The partnership with Arista rounds out the rest of the switch portfolio. This helps HPE position their offerings across a wide range of potential clients, from existing data center infrastructure to newer cloud-ready shops focusing on DevOps and rapid application development.

After hearing Dom Wilde speak to us about the networking portfolio goals, I think I can see where HPE is headed going forward.

The Past: HPE FlexFabric

As Dom Wilde said during our session, “I have a market for FlexFabric and can sell it for the next ten years.” FlexFabric represents traditional data center networking. There is a huge market for existing infrastructure among customers that have made significant investments in HPE in the past. Dom is absolutely right when he says the market for FlexFabric isn’t going to shrink in the foreseeable future. Even though the migration to the cloud is underway, there are a significant number of existing applications that will never be cloud ready.

FlexFabric represents the market segment that will persist on existing solutions until a rewrite of critical applications can be undertaken to get them moved to the cloud. Think of FlexFabric as the vaunted buggy whip manufacturer. They may be the last one left, but for the people that need their products they are the only option in town. DCIG may have eyes on the future, but that plan will be financed by FlexFabric.

The Present: HPE Altoline

Altoline is where HPE has been pouring its research for the past year. Altoline is a product line that benefits from the latest in software-defined and webscale technologies. It utilizes OpenSwitch as the operating system. HPE initially developed OpenSwitch as an open, vendor-neutral platform before turning it over to the Linux Foundation this summer to run with development from a variety of different partners.

Dom brought up a couple of great use cases for Altoline during our discussion that struck me as brilliant. One of them was using it as an out-of-band monitoring solution. These switches don’t need to be big or redundant. They need to have ports and a management interface. They don’t need complexity. They need simplicity. That’s where Altoline comes into play. It’s never going to be as complex as FlexFabric or as programmable as Arista. But it doesn’t have to be. In a workshop full of table saws and drill presses, Altoline is a basic screwdriver. It’s a tool you can count on to get the easy jobs done in a pinch.

The Future: Arista

The Arista partnership, according to Dom Wilde, is all about getting ready for the cloud. For those customers that are looking at moving workloads to the cloud or creating a hybrid environment, Arista is the perfect choice. All of Arista’s recent solution sets have been focused on providing high-speed, programmable networking that can integrate a number of development tools. EOS is the most extensible operating system on the market and is a favorite for developers. Positioning Arista at the top of the food chain is a great play for customers that don’t have a huge investment in cloud-ready networking right now.

The question that I keep coming back to is…when does this Arista partnership become an acquisition? There is significant integration between the two companies. Arista has essentially displaced the top of the line for HPE. How long will it take for Arista to make the partnership more permanent? I can easily foresee HPE making a play for the potential revenues produced by Arista and the help they provide moving things to the cloud.

Tom’s Take

I was the only networking person at HPE Discover this year because the HPE networking story has been simplified quite a bit. On the one hand, you have the campus tied up with Aruba. They have their own story to tell in a different area early next year. On the other hand, you have the simplification of the portfolio with DCIG and the inclusion of the Arista partnership. I think that Altoline is going to find a niche for specific use cases but will never really take off as a separate platform. FlexFabric is in maintenance mode as far as development is concerned. It may get faster, but it isn’t likely to get smarter. Not that it really needs to. FlexFabric will support legacy architecture. The real path forward is Arista and all the flexibility it represents. The question is whether HPE will try to make Arista a business unit before Arista takes off and becomes too expensive to buy.


I was an invited guest of HPE for HPE Discover London. They paid for my travel and lodging costs as well as covering event transportation and meals. They did not ask for nor were they promised any kind of consideration in the coverage provided here. The opinions and analysis contained in this article represent my thoughts alone.

by networkingnerd at December 06, 2016 05:44 AM

December 05, 2016

Ethan Banks on Technology

Get Out While You Still Can

For years, this blog has mostly been about enterprise IT with a focus on networking. I’ll spare you the entire history because no one cares. But in short, if you dig through the archives, you’ll find content going all the way back to the beginning of 2007 when I was writing for my CCIE study blog.

Ten years, hundreds of articles, and millions of words later, I am a full-time writer and podcaster covering enterprise technology for engineers from behind a microphone and keyboard. But I don’t do that here anymore. I do that at

Before Packet Pushers became the thing that put food in my mouth, I’d split my enterprise tech writing between this blog and that, but splitting the content just doesn’t make sense now. Thus, I’ve been putting all my enterprise tech writing under the Packet Pushers flag. Packet Pushers Interactive is my company that I co-founded, and I’m proud of it. There is no reason to straddle the fence.

So, what of this blog? It will be where I write about…

  • General technology. For example, I’m into the Garmin & Apple ecosystems. I read a lot about alt-energy. I cover many other nerdy topics with my friend Eric Sutphen on the weekly Citizens of Tech podcast (not a Packet Pushers show, just a side project). I like cars, particularly Subarus. I’m into science. Body hacking through fitness and nutrition is interesting to me, too. Data, data, data. If there’s actual data behind it, I might write about it.
  • Fiction. I have a lot of nerd-oriented fiction ideas, and this blog is a good place to try them out. You know, fake stories. Like what you get on most cable news channels, only I won’t pretend the fictional stories are real.
  • The business of new media. I have opinions based on experience on how to make new media work. I believe I can address both content creators and marketers delivering messages to wise consumers who reject spammy content. (You won’t believe what happened next!)
  • Other stuff. I’m not limiting myself.

This blog change has been coming for a while. Depending on how you consume, you might have noticed a new theme a few months ago. I’ve stripped it right down to the bare essentials.

  • No ads.
  • No comments.
  • No multi-column format with circular Web 2.0 icons and waterfalls of articles & graphics that dim the power when the page finally loads.
  • No menu bars showing you a bunch of options you don’t care about.

Just the text, plus a single icon in the upper left containing the one menu on the whole site. If you want to search or navigate to older content, click the icon.

The whole idea of the new theme is to get in, load the article quickly, read, and get out. Or read the entire article via e-mail. Or RSS. Your choice. No more feeds with only excerpts to drive page view statistics or banner ad impressions.

Get out while you still can.

You’re on notice. Now is your chance to get out while you still can. You can unsubscribe from the e-mail delivery service. You can disconnect the RSS feed. It’s okay. I won’t be upset. We can still be friends. I’ll see you over at

But if you choose to stay, I’ll do my best to keep it interesting.

by Ethan Banks at December 05, 2016 08:20 PM

My Etherealmind

AWS Shield – Managed DDoS Protection

AWS now offers a DDoS protection service. Some non-specific thinking out loud on what this means.

The post AWS Shield – Managed DDoS Protection appeared first on EtherealMind.

by Greg Ferro at December 05, 2016 05:06 PM

Response: Snap 1.0 is Here

Intel Snap reaches 1.0 milestone. This is a valuable troubleshooting tool for NFV.

The post Response: Snap 1.0 is Here appeared first on EtherealMind.

by Greg Ferro at December 05, 2016 12:32 PM
Jason Edelman's Blog

Automating Cisco Nexus Switches with Ansible

For the past several years, the open source [network] community has been rallying around Ansible as a platform for network automation. Just over a year ago, Ansible recognized the importance of embracing the network community and since then, has made significant additions to offer network automation out of the box. In this post, we’ll look at two distinct models you can use when automating network devices with Ansible, specifically focusing on Cisco Nexus switches. I’ll refer to these models as CLI-Driven and Abstraction-Driven Automation.

Note: We’ll see in later posts how we can use these models and a third model to accomplish intent-driven automation as well.

For this post, we’ve chosen to highlight Nexus because, as of Ansible 2.2, there are more Nexus Ansible modules than for any other network operating system, making it extremely easy to highlight these two models.

CLI-Driven Automation

The first way to manage network devices with Ansible is to use the Ansible modules that support a diverse set of operating systems, including NX-OS, EOS, Junos, IOS, IOS-XR, and many more. These modules can be considered the lowest common denominator: they work the same way across operating systems, requiring you to define the commands that you want to send to network devices.

We’ll look at an example of this model managing VLANs on Nexus switches.

The first thing we are going to do is define an appropriate data model for VLANs. The more complex the feature (and if you want to consider multiple vendors), the more complex and advanced a data model can be. In this case, we’ll simply create a list of key-value pairs and represent each VLAN by having a VLAN ID and VLAN NAME.

The data model we are going to use for VLANs is the following:


vlans:
  - id: 10
    name: web_servers
  - id: 20
    name: app_servers
  - id: 30
    name: db_servers

Once we have our data (variables), we’ll use a Jinja2 template that’ll create the required configurations that we’ll ultimately deploy to each device.

Below is the Jinja2 template we’ll use to create our configurations. In our template, we require that a VLAN name be present for each VLAN.

{% for vlan in vlans %}
vlan {{ vlan.id }}
  name {{ vlan.name }}
{% endfor %}
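As a quick sanity check, you can render the same data model through the same template offline with Python’s jinja2 package (the same templating engine Ansible uses under the hood). This snippet is just an illustration, not part of the playbook:

```python
from jinja2 import Template

# The VLAN data model from the variables file
vlans = [
    {"id": 10, "name": "web_servers"},
    {"id": 20, "name": "app_servers"},
    {"id": 30, "name": "db_servers"},
]

# The same template as vlans.j2
template = Template(
    "{% for vlan in vlans %}"
    "vlan {{ vlan.id }}\n"
    "  name {{ vlan.name }}\n"
    "{% endfor %}"
)

print(template.render(vlans=vlans))
```

Running it prints the exact commands that will be written into the generated configuration file (vlan 10, name web_servers, and so on), so you can eyeball the output before anything touches a switch.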

Once the configuration commands are built from the template, we need to deploy (ensure the required commands exist) them to the device. This is where we can use the nxos_config Ansible module.

The *_config modules exist for a large number of network operating systems, which is why we’re calling them the lowest common denominator.

We can create the required configurations and do the deployment with two Ansible tasks:

    - name: BUILD CONFIGS
      template:
        src: vlans.j2
        dest: configs/vlans.cfg

    - name: DEPLOY CONFIGS
      nxos_config:
        src: configs/vlans.cfg
        provider: "{{ nxos_provider }}"

You could use the commands parameter in the nxos_config task rather than using a separate template task; however, it’s a little cleaner using templates. Yes, this is subjective.

You could also eliminate the task that uses the template module altogether and just use src: vlans.j2 directly, but adding the secondary step offers the flexibility of validating the build step (command generation) before a deployment.

In this example, we are going to deploy the same VLANs to all devices.

If we run a playbook that has these tasks, it’ll ensure all VLANs in the variables file get deployed on the Nexus switches.

But, what if you need to subsequently remove VLANs?

In this current model, it’s up to you to either create a second template that uses no vlan <id>, or to build a more complex template that, based on some other variable, renders the commands to either configure or un-configure VLANs.
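One way to sketch such a state-aware template (a hypothetical variant of my own, not from the playbook above) is to branch on a state variable so a single template can render either the configure or the un-configure commands:

```python
from jinja2 import Template

# Hypothetical state-aware variant of the VLAN template: when state is
# "absent", render "no vlan <id>" instead of the configuration commands.
vlan_template = Template(
    "{% for vlan in vlans %}"
    "{% if state == 'absent' %}"
    "no vlan {{ vlan.id }}\n"
    "{% else %}"
    "vlan {{ vlan.id }}\n"
    "  name {{ vlan.name }}\n"
    "{% endif %}"
    "{% endfor %}"
)

vlans = [{"id": 10, "name": "web_servers"}]
print(vlan_template.render(vlans=vlans, state="absent"))   # no vlan 10
print(vlan_template.render(vlans=vlans, state="present"))
```

The cost of this approach is obvious: every feature you manage this way needs its own configure/un-configure logic, which is exactly the tedium the abstraction-driven model below removes.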

Abstraction-Driven Automation

Using the nxos_config module is simply one way you can manage NX-OS devices with Ansible. While it offers flexibility, you still need to develop templates or commands for everything you want to configure or un-configure on a given device. For some tasks, this is quite easy; for others, it could get tedious.

Another approach is to use Ansible modules that offer abstractions and eliminate the need to use commands or templates, while also making it quite easy to ensure a given configuration does NOT exist.

We aren’t talking about high level abstractions here such as services or tenants, but still network-centric objects and abstractions such as VLANs without the need for you to define the commands that are needed for configuring a given resource or object.

You can in fact stitch together multiple network-centric objects to create a higher level abstraction such as tenants quite easily with Ansible.

Let’s take a look at the same VLANs example from above, but instead of using nxos_config, we are going to use nxos_vlan.

This module only manages VLANs globally on Nexus switches.

Using this model, the single task we need to ensure the VLANs exist on each switch is the following:

    - nxos_vlan:
        vlan_id: "{{ item.id }}"
        name: "{{ item.name }}"
        state: present
        provider: "{{ nxos_provider }}"
      with_items: "{{ vlans }}"

And if we need to remove the same VLANs:

    - nxos_vlan:
        vlan_id: "{{ item.id }}"
        name: "{{ item.name }}"
        state: absent
        provider: "{{ nxos_provider }}"
      with_items: "{{ vlans }}"

The only change required to ensure the VLAN does not exist is to change the state parameter to absent.
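Conceptually, what a declarative module like nxos_vlan does is diff the desired state against the actual state on the box and generate only the missing (or surplus) commands. Here’s a rough Python sketch of that idempotent logic (my own illustration, not the module’s actual implementation):

```python
def converge_vlans(actual_ids, desired, state="present"):
    """Return only the commands needed to move the switch's actual VLAN
    set toward the desired state; a no-op when already converged."""
    commands = []
    for vlan in desired:
        exists = vlan["id"] in actual_ids
        if state == "present" and not exists:
            commands += [f"vlan {vlan['id']}", f"  name {vlan['name']}"]
        elif state == "absent" and exists:
            commands.append(f"no vlan {vlan['id']}")
    return commands

desired = [{"id": 10, "name": "web_servers"}, {"id": 20, "name": "app_servers"}]

converge_vlans({1, 10}, desired, "present")  # only VLAN 20 is missing
converge_vlans({1, 10}, desired, "absent")   # only VLAN 10 needs removal
```

Once the switch reflects the desired state, the function returns an empty list. That is the idempotency the state parameter gives you for free, with no template maintenance on your side.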

If you take a look at the Ansible docs for network modules you can see how many Nexus modules exist now in Ansible core. For those new to Ansible, being in Ansible Core means you get them when you install Ansible.

You’ll find the full list of Nexus modules at the Ansible docs link above.

Beyond Configuration Management

Notice that not only are there a significant number of modules for configuration management, but there are also quite a few for common operational tasks such as copying files to devices, rebooting devices, testing reachability (using ping), upgrading devices, and rolling back configurations (using the NX-OS checkpoint feature).

Let’s look at one of these. We’ll walk through how you can use multiple tasks in a given playbook to upgrade the operating system on Nexus switches.

Upgrading NX-OS Devices

While the nxos_install_os module performs the actual upgrade, we’ll walk through the complete process, starting with checking the current version of software on the Nexus switches and finishing with a task that asserts the upgrade completed successfully.

The first three tasks we are going to have in our playbook are the following:

  1. Get current facts of each device so we can print the current version of NX-OS to the terminal during playbook execution
  2. Print (debug) the current version of NX-OS to the terminal
  3. Ensure the SCP server is enabled on each device so we can copy files

Here is what these tasks look like:

- nxos_facts:
    provider: "{{ nxos_provider }}"

# "os" is assumed to be derived from the gathered facts (e.g. ansible_net_version)
- debug: var=os

- nxos_feature:
    feature: scp-server
    state: enabled
    provider: "{{ nxos_provider }}"

Note: nxos_provider is a variable that contains common parameters for the nxos_* modules. In this particular case, it’s a dictionary that has the following keys: username, password, transport, and host.

After we know that SCP is enabled, we can confidently copy the required file(s) to each device. Our example requires that the OS image file exist on the Ansible control host.

- nxos_file_copy:
    local_file: "{{ image_path }}/{{ nxos_version }}"
    provider: "{{ nxos_provider }}"

In addition to nxos_provider, we used two other variables to define the local_file parameter. They are defined as the following:

nxos_version: nxos.7.0.3.I2.2d.bin
image_path: ../os-images/cisco/nxos
nxos_version_str: 7.0(3)I2(2d)

But, we also have a third variable that is the string representation of the new version (nxos_version_str), which we’ll use in an upcoming task.
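If maintaining both representations of the version by hand feels error-prone, the display-format string can be derived from the image file name with a small helper (hypothetical convenience code of my own, not part of the original playbook):

```python
import re

def version_display(image_file):
    """Derive the "show version" style string from an NX-OS image name,
    e.g. nxos.7.0.3.I2.2d.bin -> 7.0(3)I2(2d)."""
    core = image_file.removeprefix("nxos.").removesuffix(".bin")
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)\.([A-Z]+\d+)\.(\w+)", core)
    if not m:
        raise ValueError(f"unrecognized image name: {image_file}")
    major, minor, maint, train, rebuild = m.groups()
    return f"{major}.{minor}({maint}){train}({rebuild})"

version_display("nxos.7.0.3.I2.2d.bin")  # '7.0(3)I2(2d)'
```

One variable in your inventory instead of two means one less thing to forget to update when the next image comes along.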

Based on your deployment, think about using the delegate_to directive when using nxos_file_copy so you can copy files from another host in the data center, or some location that is closer to your devices than the Ansible control host.

Once we copy the image file to each switch, we’re ready to perform the final step: the upgrade. It sounds like a single task, but in reality, it’s a bit more than that. We’ll summarize these steps as the following:

  1. Start the Upgrade (which initiates the reboot)
  2. Ensure the device starts rebooting
  3. Ensure the device comes back online
  4. Gather facts again to collect new OS version
  5. Print (debug) the OS to the terminal
  6. Assert and Verify the expected version is running on the device

If we translate these steps into Ansible tasks, we end up with the following:

- block:
    - nxos_install_os:
        system_image_file: "{{ nxos_version }}"
        provider: "{{ nxos_provider }}"

  rescue:
    - wait_for:
        port: 22
        state: stopped
        timeout: 300
        delay: 60
        host: "{{ inventory_hostname }}"

    - wait_for:
        port: 22
        state: started
        timeout: 300
        delay: 60
        host: "{{ inventory_hostname }}"

  always:
    - nxos_facts:
        provider: "{{ nxos_provider }}"

    - debug: var=os

    - assert:
        that:
          - "'{{ os }}' == '{{ nxos_version_str }}'"

As you read through the preceding tasks, take note of the feature being used within the playbook called Ansible blocks. This is a critical feature to be aware of to account for errors when running a playbook and conditionally execute a group of tasks when an error occurs. Because Ansible doesn’t yet allow for a device to lose connectivity within a task, we need to assume a failure is going to occur. Basically, whenever you use nxos_install_os, the task will fail when the switch reboots (for now). Within a block, when a failure condition occurs, the tasks within the rescue block start executing.

The rescue tasks shown above ensure the device starts rebooting within 5 minutes and ensure the device comes back online within 5 minutes. Note that these tests were executed against Nexus 9396s and 9372s - if you are using 7K or 9K chassis devices, you may want to increase the timeout values.
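Those reboot checks boil down to polling a TCP port until it goes down and then comes back. A minimal Python sketch of the idea (loosely mirroring Ansible’s wait_for module, not its actual implementation):

```python
import socket
import time

def wait_for_port(host, port, state="started", timeout=300, delay=0, interval=5):
    """Poll until the TCP port is reachable ("started") or unreachable
    ("stopped"), loosely mirroring Ansible's wait_for module."""
    time.sleep(delay)
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # A successful connect means the port is up
            with socket.create_connection((host, port), timeout=3):
                reachable = True
        except OSError:
            reachable = False
        if reachable == (state == "started"):
            return True
        time.sleep(interval)
    raise TimeoutError(f"{host}:{port} never reached state {state!r}")
```

With a 60-second delay and a 300-second timeout, the checks give the switch a minute’s head start and then up to five minutes to finish each phase of the reboot; bump those values for larger chassis.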

Finally, the last three tasks always get executed and verify the upgrade was successful.

In upcoming posts, we’ll take a look at a few other use cases and show some live demos. Over time, we’ll get more complete examples on GitHub too.




December 05, 2016 12:00 AM

December 02, 2016

Networking Now (Juniper Blog)

BlackNurse in review: Is your NGFW vulnerable?

On November 10th, 2016, Danish firm TDC published a report about the effects of a particular ICMP Type+Code combination that triggers resource-exhaustion issues within many leading firewall platforms. The TDC SOC has branded this low-volume attack BlackNurse; details can be seen here and here


“This is the best article and test we have to date on the BlackNurse attack. The article provides some answers which are not covered anywhere else. The structure and documentation of the test is remarkable. It would be nice to see the test performed on other firewalls – good job Craig.”

Lenny Hansson and Kenneth Bjerregaard Jørgensen, BlackNurse Discoverers

by cdods at December 02, 2016 02:30 PM

December 01, 2016


How to Spot a Fake Facebook Account

Ever get a friend request from someone you don’t know and have never met before? More often than not, these accounts are created by criminals looking to harvest your personal information, or scam you in some other fashion. It typically starts when you receive a friend request from someone you don’t know. And you have no mutual friends in … Continue reading How to Spot a Fake Facebook Account

by Stefan Fouant at December 01, 2016 02:37 PM

Blog (Ivan Pepelnjak)

Would You Use Avaya's SPBM Solution?

Got this comment to one of my L2-over-VXLAN blog posts:

I found the Avaya SPBM solution "right on the money" to build that L2+ fabric. Would you deploy Avaya SPBM?

Interestingly, I got that same question during one of the ExpertExpress engagements and here’s what I told that customer:

Read more ...

by Ivan Pepelnjak at December 01, 2016 08:33 AM

November 30, 2016

My Etherealmind

Video:Jenn Schiffer, Engineer/Artist – XOXO Festival

This video is fantastic. Funny, smart satire with real commentary. Worth watching.

The post Video:Jenn Schiffer, Engineer/Artist – XOXO Festival appeared first on EtherealMind.

by Greg Ferro at November 30, 2016 10:30 PM

Networking Now (Juniper Blog)

Journey to Securing Public and Hybrid Cloud Deployments



Everyone agrees: IT infrastructure has, for the last several years, been migrating inexorably toward the cloud. When did that long journey start? How far have we come? And how much further do we have to go? Let’s take a look back at the history of the cloud.

by praviraj at November 30, 2016 04:47 PM

Blog (Ivan Pepelnjak)

Finding Excuses to Avoid Network Automation

My Network Automation in Enterprise Environments blog post generated the expected responses, including:

Some of the environments I am looking at have around 2000-3000 devices and 6-7 vendors for various functions and 15-20 different device platform from those vendors. I am trying to understand what all environments can Ansible scale up to and what would be an ideal environment enterprises should be looking at more enterprise grade automation/orchestration platforms while keeping in mind that platform allows extensibility.

Luckily I didn’t have to write a response – one of the readers did an excellent job:

Read more ...

by Ivan Pepelnjak at November 30, 2016 07:44 AM


November 29, 2016

SNOsoft Research Team

Hacking casinos with zeroday exploits for fun and profit

Most popular email programs like Microsoft Outlook, Apple Mail, Thunderbird, etc. have a convenient feature that enables them to remember the email addresses of people that have been emailed.  Without this feature people would need to recall email addresses from memory or copy and paste from an address book. This same feature enables hackers to secretly breach networks using a technique that we created back in 2006 and named Email Seeding.

This article explains how we used Email Seeding to breach the network of a well-known and otherwise well protected casino.  As is always the case, this article has been augmented to protect the identity of our customer.

Let’s begin…

Our initial objective was to gather intelligence about the casino’s employees.  To accomplish this, we developed a proprietary LinkedIn tool that uses the name or domain of a company and extracts employee information.  The information is compiled into a dossier of sorts that contains the name, title, employment history and contact information for each targeted individual.  Email address structure is automatically determined by our tool.

It is to our advantage if our customers use Google Apps, as was the case with the casino. This is because Google suffers from a username enumeration vulnerability that allows hackers to extract valid email addresses. For example, if we test an address that does not exist, we get an error; if we test one that does exist, we don’t. Our LinkedIn tool has native functionality that leverages this vulnerability, which allows us to compile a targeted list of email addresses for Spear Phishing and/or Social Engineering.

We used this tool to compile a target list for the casino. Then we assembled an offensive micro-infrastructure to support a chameleon domain and its associated services. The first step in this process is to register a chameleon domain, which is a domain designed to impersonate a legitimate domain (with SSL certificates and all). Historically this was accomplished using a now-obsolete IDN homoglyph attack. Today we rely on psychological trickery and exploit the tendency of the human brain to autocorrect incorrectly spelled names while perceiving them as correct.

For example, let’s pretend that our casino’s name is Acme Corporation and that their domain is acmecorporation.com (a made-up domain for illustration). A good chameleon domain would be acmecorportaion.com or acmecorporatoin.com, which are both different than acmecorporation.com (read them carefully). This technique works well for longer and more obscure domains but is less ideal for very short domains. We have tactics for domains like that but won’t discuss them here.
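To make the “read them carefully” point concrete: a single adjacent-letter swap is usually enough to slip past the brain’s autocorrect. A few lines of Python enumerate every such variant of a label (purely illustrative, using the fictitious Acme name):

```python
def adjacent_swaps(label):
    """All distinct strings formed by swapping one adjacent pair of characters."""
    variants = set()
    for i in range(len(label) - 1):
        if label[i] != label[i + 1]:
            variants.add(label[:i] + label[i + 1] + label[i] + label[i + 2:])
    return sorted(variants)

adjacent_swaps("acme")  # ['acem', 'amce', 'came']
```

The longer the label, the more variants exist and the harder any single one is to spot, which is why the technique favors long domains. Defenders can run the same enumeration against their own domains to pre-register or monitor the lookalikes.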

There are a multitude of advantages to using a chameleon domain over traditional email spoofing techniques.  One example is that chameleon domains are highly interactive.  Not only can we send emails from chameleon domains but we can also receive emails.   This high-interaction capability helps to facilitate high-threat Social Engineering attacks.  Additionally, because chameleon domains are actually real domains they can be configured with SPF records, DKIM, etc.  In fact, in many cases we will even purchase SSL certificates for our chameleon domains.  All of these things help to create a credible infrastructure.  Finally, we always configure our chameleon domains with a catchall email address.  This ensures that any emails sent to our domain will be received.

Netragard maintains active contracts with various Virtual Private Server (VPS) providers.  These providers enable us to spin up and spin down chameleon infrastructures in short time.  They also enable us to spin up and spin down distributed platforms for more advanced things like distributed attacking, IDS/IPS saturation, etc. When we use our email seeding methodology we spin up a micro-infrastructure that offers DNS, Email, Web, and a Command & Control server for RADON.

For the casino, we deployed an augmented version of BIND combined with something similar to honeytokens so that we could geographically locate our human targets. Geolocation is important for impersonation, as it helps avoid accidental face-to-face meetings. For example, if we’re impersonating John to attack Sally and they bump into each other at the office, then there’s a high risk of operational exposure.

With the micro-infrastructure configured, we began geolocating employees. This was accomplished in part with social media platforms like Twitter, Facebook, etc. The employees that could not be located with social media were located using a secondary email campaign. The campaign used a unique embedded tracker URL and tracker image. Any time the host associated with the URL was resolved, our DNS server would tell us which IP address the resolution was done from. If the image was loaded (most were), then we’d receive the IP address as well as additional details about the browser, the operating system in use by our target, etc. We used the IP addressing information to plot rough geographic locations.

When we evaluated the data that we collected, we found that the casino’s employees (and contractors) worked from a variety of different locations. One employee, Jack Smith, was particularly appealing because of his title, “Security Manager,” and because his LinkedIn profile talked about incident response and other related things. It also appeared that Jack worked in a location geographically distant from many potential targets. Jack became our primary choice for impersonation.

With Jack selected we emailed 15 employees from   That email address is a chameleon address, note the “ec” is inverted to “ce”. Jack’s real email address would be  While we can’t disclose the content of the email that we used, it was something along the lines of:

“Hi <name>, did you get my last email?” 

Almost immediately after sending the email we received 3 out-of-office auto-responses.  By the end of the next day we received 12 human responses, indicating that we had a 100% success rate.  The 12 human responses were exciting because chances were high that we had successfully seeded our targets with Jack’s fake chameleon address.

After 4 days we received an email from an employee named Brian with the title “Director of IT Security”. Brian emailed us rather than emailing the real Jack because his email client auto-completed Jack’s email with our seeded address rather than Jack’s real one. Attached to the email was a Microsoft Word document.  When we opened the document we realized that we were looking at an incident report that Jack had originally emailed to Brian for comment.

While the report provided a treasure trove of information that would have been useful for carrying out a multitude of different attacks, the document and the trust relationship between Jack and Brian were far more interesting.  For most customers we’d simply embed malware (RADON) into a document and use macros or some other low-tech method of execution.   For this customer, given that they were a high-profile casino with high-value targets, we decided to use a zeroday exploit for Microsoft Word rather than something noisy like a macro.

While the exploit was functional it was not flawless.  Despite this we were confident that exploitation would be successful.  The payload for the exploit was RADON, our home-grown zeroday malware, and it was configured to connect back to our command and control server using one of three different techniques. Each of the three techniques uses common network protocols and each communicates using methods that appear normal, so as to evade detection.  The exact details of these techniques aren’t something that we share because we use them regularly.
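The connect-back pattern itself — try one technique, fall back to the next on failure — is generic and can be sketched abstractly. The transport names below are placeholders; as the text says, the real techniques are deliberately undisclosed:

```python
def call_home(transports):
    """Try each (name, connect_fn) transport in priority order.

    Returns the name of the first transport that connects, or None if every
    technique fails. Models the fallback behavior, not any real protocol."""
    for name, connect in transports:
        try:
            connect()
            return name
        except ConnectionError:
            continue  # fall back to the next technique
    return None
```

The value of the pattern is resilience: if egress filtering blocks one protocol, the implant quietly shifts to another rather than going silent.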

We delivered our now weaponized Microsoft Word document back to Brian with an email that suggested more updates had been made.  Within 10 minutes of delivery RADON called home and we took covert control of Brian’s corporate desktop.

The next step was to move laterally and infect a few more targets to ensure that we maintained access to the casino’s LAN.  The normal process for doing this would be to scan / probe the network and identify new targets.   We wanted to proceed with caution because we didn’t know if the Casino had any solutions to detect lateral movement.  So, to maintain stealth, rather than scanning the internal network we sniffed and monitored all network connections.
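The passive approach can be illustrated with a toy sketch: instead of actively probing, tally the peers seen in observed connections and rank them. The connection records here are stand-ins for sniffed traffic, and the function is an assumption for illustration, not the team's actual tooling:

```python
from collections import Counter

def candidate_targets(observed_connections, local_ip):
    """Rank peers seen in passively observed traffic.

    observed_connections: iterable of (src_ip, dst_ip, dst_port) tuples.
    Returns (peer_ip, port) pairs, most frequently contacted first —
    the hosts most likely to be servers worth a closer look."""
    peers = Counter()
    for src, dst, port in observed_connections:
        if src == local_ip:
            peers[(dst, port)] += 1
        elif dst == local_ip:
            peers[(src, port)] += 1
    return [peer for peer, _ in peers.most_common()]
```

Because nothing is transmitted, this generates no scan traffic for an IDS or lateral-movement detector to flag.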

In addition to sniffing, our team also searched Brian’s computer for intelligence that would help to facilitate lateral movement.  Searching was carried out with extreme care so as to avoid accessing potential bait files.  Bait files, when accessed, will trigger an alarm that alerts administrators, and we could not afford to get caught at such an early stage.  Aside from collecting network and filesystem information we also took screenshots every minute, activated Brian’s microphone, took frequent web-cam photographs and recorded his keystrokes using RADON.

After a few hours of automated reconnaissance, we began to analyze our findings.  One of the first things that caught our attention was a screenshot of Brian using TeamViewer.  This prompted us to search our keylogger recordings for Brian’s TeamViewer credentials, and when we did we found them in short order.  We used his captured credentials to log in to TeamViewer and were presented with a long list of servers belonging to the casino.  What was even more convenient was that credentials for those servers were stored in each server profile, so all we had to do was click and pwn.  It was like Christmas for Hackers!

Our method from that point forward was simple.  We’d connect to a server, deploy RADON, and use RADON to gather files, credentials, screenshots, etc.  Within 30 minutes we went from having a single point of access to having more control over the casino’s network than their own IT department. This was in large part because our control was completely centralized thanks to RADON and we weren’t limited by corporate policies, rules, etc.  (We are the hackers after all).

This was the first casino that we encountered with such a wide-scale deployment of TeamViewer.  When we asked our customer why they were using TeamViewer in this manner their answer was surprising.  The casino’s third party IT support company recommended that they use TeamViewer in place of RDP, suggesting that it was more secure.  We of course demonstrated that this was not the case.  With our direction the casino removed TeamViewer and now requires all remote access to be handled over VPN with two-factor authentication and RDP.

For the sake of clarity, much more work was done for the Casino than what was discussed here.  We don’t simply hack our clients, say thank you and leave them hanging. We provide our customers with detailed custom reports and, if required, assistance with hardening. With that explained, this article was written with a specific focus on email seeding.   We felt that given the current threat landscape this was a good thing to be aware of, because it makes for an easy breach.

The post Hacking casinos with zeroday exploits for fun and profit appeared first on Netragard.

by Adriel Desautels at November 29, 2016 11:07 PM


The Changing Landscape of Selling in the Age of SDN

There are massive waves of technology upheaval taking place in the marketplace, causing disruption and providing a challenge to technology salespeople who are used to selling in the traditional ways. Cloud, Automation, Mobility, Adaptive Security and the Internet of Things are just a few of the major changes affecting the landscape right now. And while these technologies are … Continue reading The Changing Landscape of Selling in the Age of SDN

by Stefan Fouant at November 29, 2016 06:45 PM

Networking Now (Juniper Blog)

Journey to Securing Public (AWS) and Hybrid Cloud Deployments

 Everyone agrees: IT infrastructure has, for the last several years, been migrating inexorably toward the cloud. When did that long journey start? How far have we come? And how much further do we have to go? Let’s take a look back at the history of the cloud.


Prior to 2005, virtually all enterprises built their own physical data centers, which demanded large upfront costs. The concept of virtualization began gaining traction around 2007, and enterprises slowly started moving parts of their physical data centers to private clouds using either Linux KVM or VMware hypervisors.

by praviraj at November 29, 2016 03:57 PM

The Networking Nerd

OpenFlow Is Dead. Long Live OpenFlow.

The King Is Dead - Long Live The King

Remember OpenFlow? The hammer that was set to solve all of our vaguely nail-like problems? Remember how everything was going to be based on OpenFlow going forward and the world was going to be a better place? Or how heretics like Ivan Pepelnjak (@IOSHints) that dared to ask questions about scalability or the value of applications were derided and laughed at? Yeah, good times. Today, I stand here to eulogize OpenFlow, but not to bury it. And perhaps find out that OpenFlow has a much happier life after death.

OpenFlow Is The Viagra Of Networking

OpenFlow is not that much different from Sildenafil, the active ingredient in Viagra. Both were initially developed to solve a problem that they didn’t end up actually solving. In the case of Sildenafil, it was high blood pressure. The “side effect” of increasing blood flow to a specific body part wasn’t even realized until after the trials of the drug. That side effect became the primary focus of the medication, which was eventually developed into a billion-dollar industry.

In the same way, OpenFlow failed at its stated mission of replacing the forwarding-plane programming method of switches. As pointed out by folks like Ivan, it had huge scalability issues. It was a bit clunky when it came to handling flow programming. The race from the 1.0 to the 1.3 spec finalization left vendors in the dust, but the freeze on 1.3 for the past few years has really hurt innovation. Objectively, the fact that almost no major shipping product uses OpenFlow as a forwarding paradigm should be evidence of its failure.

The side effect of OpenFlow is that it proved that networking could be done in software just as easily as it could be done in hardware. Things that we thought we historically needed ASICs and FPGAs to do could be done by a software construct. OpenFlow proved the viability of Software Defined Networking in a way that no one else could. Yet, as people abandoned it for other faster protocols or rewrote their stacks to take advantage of other methods, OpenFlow did still have a great number of uses.

OpenFlow Is a Garlic Press, Not A Hammer

OpenFlow isn’t really designed to solve every problem. It’s not a generic tool that can be used in a variety of situations. It does have some very specific use cases that it excels at, though. Think more like a garlic press: a purpose-built tool that does one thing and does it very well.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" src="" title="NEC Cyber Defense with Liviu Pinchas" webkitallowfullscreen="webkitallowfullscreen" width="584"></iframe>

This video from Networking Field Day 13 is a great example of OpenFlow being used for a specific task. NEC’s flavor of OpenFlow, ProgrammableFlow, is used in conjunction with higher layer services like firewalls and security appliances to mitigate the spread of infections. That’s a huge win for networking professionals. Think about how hard it would be to track down these systems in a network of thousands of devices. Even worse, with the level of virulence of modern malware, it doesn’t take long before the infected system has infected others. It’s not enough to shut down the payload. The infection behavior must be removed as well.

What NEC is showing is the ultimate way to stop this from happening. By interrogating the flows against a security policy, the flow entries can be removed from switches across the network or have deny entries written to prevent communications. Imagine being able to block a specific workstation from talking to anything on the network until it can be cleaned. And have that happen automatically without human interaction. What if a security service could get new malware or virus definitions and install those flow entries on the fly? Malware could be stopped before it became a problem.
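The quarantine idea above — push a high-priority deny entry for the infected host to every switch — can be modeled with a toy flow table. This is a conceptual sketch, not NEC's ProgrammableFlow API; the class names and priority values are invented for illustration:

```python
class Switch:
    """Minimal flow-table model: highest-priority matching entry wins."""

    def __init__(self, name):
        self.name = name
        self.flows = []  # list of (priority, match_fn, action)

    def add_flow(self, priority, match, action):
        self.flows.append((priority, match, action))
        self.flows.sort(key=lambda f: -f[0])  # keep highest priority first

    def forward(self, src, dst):
        for _, match, action in self.flows:
            if match(src, dst):
                return action
        return "flood"  # table-miss behavior

def quarantine(switches, bad_host):
    """Install a high-priority drop rule for the infected host on every switch."""
    for sw in switches:
        sw.add_flow(1000,
                    lambda s, d, bad=bad_host: s == bad or d == bad,
                    "drop")
```

The point of the sketch is the mechanism: a controller with a global view can cut one workstation off from the entire network in a single pass, with no human touching a console.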

This is where OpenFlow will be headed in the future. It’s no longer about adapting the problems to fit the protocol. We can’t keep trying to frame the problem around how much it resembles a nail just so we can use the hammer in our toolbox. Instead, OpenFlow will live on as a point protocol in a larger toolbox that can do a few things really well. That means we’ll use it when we need to and use a different tool when needed that better suits the problem we’re actually trying to solve. That will ensure that the best tool is used for the right job in every case.

Tom’s Take

OpenFlow is still useful. Look at what Coho Data is using it for. Or NEC. Or any one of a number of companies that are still developing on it. But the fact that these companies have put significant investment and time into the development of the protocol should tell you what the larger industry thinks. They believe that OpenFlow is a dead end that can’t magically solve the problems they have with their systems. So they’ve moved to a different hammer to bang away with. I think that OpenFlow is going to live a very happy life now that people are leaving it to solve the problems it’s good at solving. Maybe one day we’ll look back on the first life of OpenFlow not as a failure, but instead as the end of the beginning of it becoming what it was always meant to be.

by networkingnerd at November 29, 2016 12:21 PM Blog (Ivan Pepelnjak)

Worth Reading: So You Want to Become a Cloud Provider

My friend Robert Turnšek published an interesting blog post pondering whether it makes sense to become a cloud provider.

I loved reading it, particularly the Trap for System Integrators part, because I know a bit of the history, and could easily identify two or three failed or stalled projects per paragraph (like: “Just adding some blade servers and storage to the existing server environment won’t make you a cloud provider”). Hope you’ll have as much fun as I did.

by Ivan Pepelnjak ( at November 29, 2016 08:59 AM

November 28, 2016 Blog (Ivan Pepelnjak)

Q&A: Ingress Traffic Flow in Multi-Data Center Deployments

One of my readers was watching the Building Active-Active Data Centers webinar and sent me this question:

I'm wondering if you have additional info on how to address the ingress traffic flow issue? The egress is well explained but the ingress issue wasn't as well explained.

There’s a reason for that: there’s no good answer.

Read more ...

by Ivan Pepelnjak ( at November 28, 2016 08:46 AM

XKCD Comics

November 25, 2016 Blog (Ivan Pepelnjak)

StackStorm 101 on Software Gone Wild

A few weeks ago Matt Oswalt wrote an interesting blog post on principles of automation, and we quickly agreed it’s a nice starting point for a podcast episode.

In the meantime Matt moved to StackStorm team so that became the second focus of our chat… and then we figured out it would be great to bring in Matt Stone (the hero of Episode 13).

Read more ...

by Ivan Pepelnjak ( at November 25, 2016 08:34 AM

Network Design and Architecture

Don’t miss this opportunity

Black Friday & Cyber Monday Special Discount — starts November 24 at 5PM PST thru November 28 at 9PM PST. 30% OFF on the products below!  CCDE In-Depth (new CCDE workbook) buy now » Live/Instructor-Led Online CCDE Training buy now » Self-Paced CCDE Training, Lifetime Access buy now »

The post Don’t miss this opportunity appeared first on Cisco Network Design and Architecture | CCDE Bootcamp |

by Orhan Ergun at November 25, 2016 08:23 AM

XKCD Comics

November 24, 2016

Networking Now (Juniper Blog)

The Thing is… What is an IoT Device (and should we care)?

Against the background of a thriving digital economy, it’s evident just how many of the technologies we rely on today are going to be a big part of our interconnected future. Yet, in just a few short years, much of what we now take for granted could change beyond recognition and in ways few of us might have predicted.

by lfisher at November 24, 2016 09:29 PM Blog (Ivan Pepelnjak)

Testing Ansible Playbooks with Cisco VIRL

Cisco VIRL is the ideal testing environment when you want to test your Ansible playbooks with various Cisco network operating systems (IOS, IOS XE, NX-OS or IOS XR). The “only” gotcha: how do you reach those devices from the outside world?

It was always possible to reach the management interface of devices running with VIRL, and it got even simpler with VIRL release 1.2.

by Ivan Pepelnjak ( at November 24, 2016 07:54 AM

November 23, 2016

The Networking Nerd

Nutanix and Plexxi – An Affinity to Converge


Nutanix has been lighting the hyperconverged world on fire as of late. Strong sales led to a big IPO for their stock. They are in a lot of conversations about using their solution in place of large traditional virtualization offerings that include things like blade servers or big boxes. And even coming off the recent Nutanix .NEXT conference there were some big announcements in the networking arena to help them complete their total solution. However, I think Nutanix is missing a big opportunity that’s right in front of them.

I think it’s time for Nutanix to buy Plexxi.

Software Says

If you look at the Nutanix announcements around networking from .NEXT, they look very familiar to anyone in the server space. The highlights include service chaining, microsegmentation, and monitoring all accessible through an API. If this sounds an awful lot like VMware NSX, Cisco ACI, or any one of a number of new networking companies then you are in the right mode of thinking as far as Nutanix is concerned.

SDN in the server space is all about overlay networking. Segmentation of flows and service chaining exist because security is so hard to do in the networking space today. Trying to get traffic to behave in a certain way drives networking professionals nuts. Monitoring all of that to ensure that you’re actually doing what you say you’re doing just adds complexity. And the API is the way to do all of that without having to walk down to the data center to console into a switch and learn a new non-Linux CLI command set.
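To make the API angle concrete, here is a hedged sketch of what an API-driven microsegmentation rule might look like as a JSON body. The field names and structure are invented for illustration — they do not correspond to Nutanix's (or anyone's) actual API:

```python
import json

def microseg_rule(name, src_group, dst_group, ports, action="allow"):
    """Build a hypothetical microsegmentation policy rule.

    Expresses "workloads in src_group may reach dst_group on these ports"
    as the kind of JSON body a controller's REST API might accept."""
    return {
        "name": name,
        "source": {"group": src_group},
        "destination": {"group": dst_group},
        "ports": sorted(ports),
        "action": action,
    }

# Example: allow only the web tier to reach the database tier on Postgres.
rule = microseg_rule("web-to-db", "web-tier", "db-tier", [5432])
body = json.dumps(rule)  # what would be POSTed to the controller
```

The operational win is exactly what the paragraph describes: policy is expressed as data and pushed through an API, instead of being hand-typed into each switch's CLI.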

SDN vendors like VMware and Cisco ACI would naturally have jumped onto these complaints and difficulties in the networking world and both have offered solutions for them with their products. For Nutanix to have bundled solutions like this into their networking offering is no accident. They are looking to battle VMware head-to-head and need to offer the kind of feature parity that it’s going to take a make medium to large shops shift their focus away from the VMware ecosystem and take a long look at what Nutanix is offering.

In a way, Nutanix and VMware are starting to reinforce the idea that the network isn’t a magical realm of protocols and tricks that make applications work. Instead, it’s a simple transport layer between locations. For instance, Amazon doesn’t rely on the magic of the interstate system to get your packages from the distribution center to your home. Instead, the interstate system is just a transport layer for their shipping overlays – UPS, FedEx, and so on. The overlay is where the real magic is happening.

Nutanix doesn’t care what your network looks like. They can do almost everything on top of it with their overlay protocols. That would seem to suggest that the focus going forward should be to marginalize or outright ignore the lower layers of the network in favor of something that Nutanix has visibility into and can offer control and monitoring of. That’s where the Plexxi play comes into focus.

Plexxi Logo

Affinity for Awesome

Plexxi has long been a company in search of a way to sell what they do best. When I first saw them years ago, they were touting their Affinities idea as a way to build fast pathways between endpoints to provide better performance for applications that naturally talked to each other. This was a great idea back then. But it quickly got overshadowed by the other SDN solutions out there. It even caused Plexxi to go down a slightly different path for a while, looking at other options to compete in a market in which they didn’t really have a perfect-fit product.

But the Affinities idea is perfect for hyperconverged solutions. Companies like Nutanix are marketing their solutions as the way to create application-focused compute nodes on-site without the need to mess with the cloud. It’s a scalable solution that will eventually lead to having multiple nodes in the future as your needs expand. Hyperconverged was designed to be consumable per compute unit as opposed to massively scaling out in leaps and bounds.
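The intuition behind Affinities — give the heaviest conversations the fast path — can be illustrated with a toy ranking over observed flows. Real Plexxi affinities are far richer (policy, topology, scheduling); this sketch only captures the core idea, and the function name is an invention for illustration:

```python
from collections import Counter

def top_affinities(flows, n):
    """Rank endpoint pairs by traffic volume.

    flows: iterable of (endpoint_a, endpoint_b, byte_count) observations.
    Returns the top-n unordered pairs — the conversations that would most
    benefit from a dedicated fast path between their endpoints."""
    volume = Counter()
    for a, b, size in flows:
        volume[tuple(sorted((a, b)))] += size  # direction doesn't matter
    return [pair for pair, _ in volume.most_common(n)]
```

In a hyperconverged deployment, pairs like (app-VM, storage-node) would dominate such a ranking, which is exactly why the idea maps so naturally onto Nutanix-style clusters.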

Plexxi Affinities is just the tip of the iceberg. Plexxi’s networking connectivity also gives Nutanix the ability to build out a high-speed interconnect network with one advantage – noninterference. I’m speaking about what happens when a customer needs to add more networking ports to support this architecture. They need to make a call to their Networking Vendor of Choice. In the case of Cisco, HPE, or others, that call will often involve a conversation about what they’re doing with the new network followed by a sales pitch for their hyperconverged solution or a partner solution that benefits both companies. Nutanix has a reputation for being the disruptor in traditional IT. The more they can keep their traditional competitors out of the conversation, the more likely they are to keep the business into the future.

Tom’s Take

Plexxi is very much a company with an interesting solution in need of a friend. They aren’t big enough to really partner with hyperconverged solutions, and most of the hyperconverged market at this point is either cozy with someone else or not looking to make big purchases. Nutanix has the rebel mentality. They move fast and strike quickly to get their deals done. They don’t take prisoners. They look to make a splash and get people talking. The best way to keep that up is to bundle a real non-software networking component alongside a solution that will make the application owners happy and keep the conversation focused on a single source. That’s how Cisco did it back in the day and how VMware has climbed to the top of the virtualization market.

If Nutanix were to spend some of that nice IPO money on a Plexxi Christmas present, I think 2017 would be the year that Nutanix stops being discussed in hushed whispers and becomes a real force to be reckoned with up and down the stack.

by networkingnerd at November 23, 2016 06:44 PM

Networking Now (Juniper Blog)

Automating Cyber Threat Intelligence with SkyATP: Part Two (with Splunk)


Continuing on with our series, this particular post will revolve around "Security Information and Event Management" solutions (SIEM's), their place in the Enterprise, and how you can leverage their exceptional levels of visibility within SkyATP. 

by cdods at November 23, 2016 05:22 PM

Honest Networker
Networking Now (Juniper Blog)

Automating Cyber Threat Intelligence with SkyATP: Part One

Each year, the economics of "fighting back" against Hacktivism, CyberCrime, and the occasional State-Sponsored attack become more and more untenable for the typical Enterprise. It's nearly impossible for the average Security Team to stay up to date with the latest emerging threats while also being tasked with their regular duties. Given the current economic climate, the luxury of having a dedicated team to perform Cyber Threat Intelligence (CTI) is generally out of reach for all but the largest of Enterprises. While automated identification, curation, and enforcement of CTI cannot truly replace human Security Analysts (yet), it has been shown to go a long way towards increasing the effectiveness and agility of your Security infrastructure. 

by cdods at November 23, 2016 04:39 PM

Honest Networker
Network Design and Architecture

These 7 people passed the CCDE Practical exam with my training

I am glad to announce that 7 of the attendees passed the CCDE Practical Lab exam on November 17, 2016 after attending my CCDE Training Program and got their CCDE numbers.   Chintan Sutaria – CCDE 2016::26 Mazin Ahsan – CCDE 2016::30 Tahir Munir – CCDE 2016::31 Michael Zsiga – CCDE 2016::32 Felix Nkansah – CCDE 2016::36 Andre Dufour – CCDE […]

The post These 7 people passed the CCDE Practical exam with my training appeared first on Cisco Network Design and Architecture | CCDE Bootcamp |

by Orhan Ergun at November 23, 2016 03:24 PM


Book Review :: Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks

Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks by Douglas Richard Hanks, Jr. Paperback: 310 pages Publisher: O’Reilly Media ISBN-13: 978-1491949573 Much more than just a book about the QFX5100 This was an easy weekend read, and quite honestly I never thought I’d say this about a technical book, but I literally could not put … Continue reading Book Review :: Juniper QFX5100 Series: A Comprehensive Guide to Building Next-Generation Networks

by Stefan Fouant at November 23, 2016 02:10 PM Blog (Ivan Pepelnjak)

Worth Reading: Creating the Future, One Press Release at a Time

Russ White wrote a great blog post about our failure to predict the future. The part I love most:

If the definition of insanity is doing the same things over and over again, each time expecting different results, what does that say about the world of network engineering?


by Ivan Pepelnjak ( at November 23, 2016 07:32 AM

XKCD Comics