
April 28, 2017

ipSpace.net Blog (Ivan Pepelnjak)

Salt and SaltStack on Software Gone Wild

Ansible, Puppet, Chef, Git, GitLab… the list of tools you can supposedly use to automate your network is endless, and there’s a new kid on the block every few months.

In Episode 77 of Software Gone Wild we explored Salt, its internal architecture, and how you can use it with Mircea Ulinic, a happy Salt user/contributor working for Cloudflare, and Seth House, developer @ SaltStack, the company behind Salt.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 28, 2017 07:23 PM

Networking Now (Juniper Blog)

Nation States Move from Passive to Active Cyber Defences


If you’re looking for evidence in the public domain that any government has admitted to targeting another government’s civilian or military digital infrastructure - you won’t find much, for obvious reasons. To date, almost all official rhetoric has been about defending citizens and infrastructure against foreign states, but that is changing. In 2017 I believe we will see more nations move the narrative from one of passive defence to one of a more active stance.

by lfisher at April 28, 2017 01:35 PM

April 27, 2017

The Networking Nerd

Don’t Be My Guest

I’m interrupting my regularly scheduled musing about technology and networking to talk today about something that I’m increasingly seeing come across my communications channels. The growing market for people to “guest post” on blogs. Rather than continually point folks to my policies on this, I thought it might be good to break down why I choose to do what I do.

The Archive Of Tom

First and foremost, let me reiterate for the record: I do not accept guest posts on my site.

Note that this has nothing to do with your skills as a writer, your ability to create “compelling, fresh, and exciting content”, or your particular celebrity status as the CTO/CIO/COMGWTFBBQO of some hot, fresh, exciting new company. I’m sure if Kurt Vonnegut’s ghost or J.K. Rowling wanted to make a guest post on my blog, the answer would still be the same.

Why? Because this site is the archive of my thoughts. Because I want this to be an archive of my viewpoints on technology. I want people to know how I’ve grown and changed and come to love things like SDN over the years. What I don’t want is for people to need to look at a byline to figure out why the writer suddenly loves keynotes or suddenly decides that NAT is the best protocol ever. If the only person that ever writes here is me, all the things here are my voice and my views.

That’s not to say that the idea of guest posts or multiple writers of content is a bad thing. Take a look at Packet Pushers, for instance. Greg, Ethan, and Drew do an awesome job of providing a community platform for people that want to write. If you’re not willing to set up your own blog, Packet Pushers is the next best option for you. They are the SaaS version of blogging – just type in the words and let the magic happen behind the screen.

However, Packet Pushers is a collection of many different viewpoints and can be confusing sometimes. The editorial staff does a great job of keeping their hands off the content outside of the general rules about posts. But that does mean that you could have two totally different viewpoints on a topic from two different writers that are posted at the same time. If you’re not normally known as a community content hub, the whiplash between these articles could be difficult to take.

The Dark Side Of Blogging

If the entire point of guest posting was to increase community engagement, I would very likely be looking at my policy and trying to find a way to do some kind of guest posting policy. The issue isn’t the writers, it’s what the people doing the “selling” are really looking for. Every time I get a pitch for a guest post, I immediately become suspicious of the motives behind it. I’ve done some of my own investigation and I firmly believe that there is more to this than meets the eye.

Pitch: Our CEO (Name Dropper) can offer your blog an increase in traffic with his thoughts on the following articles: (List of Crazy Titles)

Response: Okay, so why does he need to post on this blog? What advantage could he have for posting here and not on the corporate blog? Are you really trying to give me more traffic out of the goodness of your own heart? Or are you trying to game the system by using my blog as a lever to increase his name recognition with Google? He gains a lot more from me than I ever will from him, especially given that your suggested blog post titles are nowhere close to the content I write about.

Pitch: We want to provide an article for you to post under your own name to generate more visibility. All we ask is for a link back to our site in your article.

Response: More gaming the system. Google keeps track of the links back to your site and where they come from, so the more you get your name out there, the higher your results. But as Google shuts down the more nefarious avenues, companies have to find places that Google actually likes to put up the links. Also, why does this link come wrapped in some kind of link shortener? Could it be because there are tons of tracking links and referral jumps in it? I would love to push back and say that I’m going to include my own link with no switches or extra parts of the URL, then see how quickly the proposal is withdrawn when the tracking systems fail to work the way they were intended. That’s not to say that all referral links are bad, but you had better believe that if there’s a referral link, I put it there.

Pitch: We want to pay you to put our content on your site

Response: I know what people pay to put content on major news sites. You’re hoping to game the system again by getting your content up somewhere for little to nothing compared to what a major content hub would cost. Why pay for major exposure when you can get 60% of that number of hits for a third of the cost? Besides, there’s no such thing as only taking money once for a post. Pretty soon everyone will be paying and the only content that will go up will be the kind of content that I don’t want on my blog.

Tom’s Take

If you really want to make a guest post on a site, I have some great suggestions. Packet Pushers, or the site I help run for work, GestaltIT.com, are great community content areas. But this blog is not the place for that. I’m glad that you enjoy reading it as much as I enjoy writing it. But for now and for the foreseeable future, this is going to be my own little corner of the world.

Editor Note:

The original version of this article made reference to Network Computing in an unfair light. My characterization of their publishing model was completely incorrect, and the error was entirely mine, due to a failure to do proper research. I have removed the incorrect information from this article after a conversation with Sue Fogarty.

Network Computing has a strict editorial policy about accepting content, including sponsored content. Throughout my relationship with them, I have found them to be completely fair and balanced. The error contained in this blog post was unforgivable and I apologize for it.

by networkingnerd at April 27, 2017 06:05 PM

ipSpace.net Blog (Ivan Pepelnjak)

Worth Reading: Who Moved My Control Plane?

Jordan Martin published a nice summary of what I’ve been preaching for years: centralized control plane doesn’t work (well) while controller-based network orchestration makes perfect sense.

While I totally agree with what he wrote, he got the hype angle wrong:

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 27, 2017 08:30 AM

Update: VMware NSX in Redundant L3-only Data Center Fabric

Short update for those that read the original blog post: it turns out that the answer to the question “Is it possible to run VMware NSX on redundantly-connected hosts in a pure L3 data center fabric?” is still NO.

VTEPs from different ESXi hosts can be in different subnets, but while a single ESXi host might have multiple VTEPs, the only supported way to use them is to put them in the same subnet. I removed the original blog post.

A huge thank you to everyone who pushed me with their comments and emails to find the correct answer.

by Ivan Pepelnjak (noreply@blogger.com) at April 27, 2017 06:39 AM

April 26, 2017

My Etherealmind

Response: Don’t believe the non-programming hype – Paul’s blog

Paul Gear has a great response to a recent Packet Pushers Weekly episode on programming/automation and this particular view that I agree with: Programming isn’t hype; programming is a fundamental IT skill.  If you don’t understand the basics of computer architecture (e.g. CPU instruction pointers, registers, RAM, stacks, cache, etc.) and how to create instructions […]

The post Response: Don’t believe the non-programming hype – Paul’s blog appeared first on EtherealMind.

by Greg Ferro at April 26, 2017 04:02 PM

ipSpace.net Blog (Ivan Pepelnjak)

Mini-RSA in Zurich, NSX, ACI, Automation…

I’ll be doing several on-site workshops in the next two months. Here’s a brief summary of where you could meet me in person.

A bit of manual geolocation first: if you’re from Europe, check out the first few entries; if you’re from the US, there’s important information for you at the bottom; and if you don’t want to travel to Europe or the US, there’s an online course starting in September ;)

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 26, 2017 08:06 AM

VMware NSX in Redundant L3-only Data Center Fabric

During the Networking in Private and Public Clouds webinar I got an interesting question: “Is it possible to run VMware NSX on redundantly-connected hosts in a pure L3 data center fabric?

TL&DR: I thought the answer was still No, but after a very helpful discussion with Anthony Burke it seems that it changed to Yes (even though the NSX Design Guide never explicitly says Yes, it’s OK and here’s how you do it).

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 26, 2017 06:16 AM


April 25, 2017

My Etherealmind

9 Easy Ways to Break a Cisco Network

Everyday operation of a Cisco router is likely to cause failure.

The post 9 Easy Ways to Break a Cisco Network appeared first on EtherealMind.

by Greg Ferro at April 25, 2017 06:09 PM

April 24, 2017

Moving Packets

John’s Network Oops – As Seen On Reuters

In my response to The Network Collective’s group therapy session where the participants ‘fessed up to engineering sins, I promised to share my own personal nightmare story, as seen on Reuters. Grab a bag of popcorn, a mug of hot chocolate and your best ghost story flashlight, and I will share a tale which will haunt you for years to come. If you have small children in the room, this may be a good time to send them outside to play.

John Tells A Scary Story

At one point in my career I was a network engineer for a national mobile provider in the USA. The mobility market is a high-stakes environment, perhaps more so than most industry outsiders might expect. Users have surprisingly high expectations and are increasingly reliant on the availability of the network at all times of day or night.

High-Stakes Networking

Mobile networks are typically not just for consumers but are also used by a large number of private entities including fleet management companies, fire/burglar alarm systems, shipping companies and emergency services, so even a minor outage can potentially be a problem. These commercial organizations all had customized private connectivity to the mobile provider and thankfully we had a contractually-identified maintenance window available six days a week, during which all changes would have to happen. Nonetheless, even during a change window the attitude, rightly enough, was that if an interruption in service could be avoided, it should be. I refer to this as make before break engineering — a reference to electrical switches in which the new connection is made before the old connection is disconnected — and writing changes this way requires a different mindset from that found in a typical enterprise environment.

When the stakes are high, the stress is high, and with true gallows humor we would joke, somewhat tongue in cheek, that you weren’t a fully-fledged member of the team until you had caused an outage which you could read about on Reuters. It was a somewhat ironic badge of honor, in some ways. In many networking roles, losing connectivity for a few hours is just an annoyance. Think about it, though; have you ever heard or read a story in the news about a mobile provider having some kind of outage? The risk of damage to a provider’s reputation should not be underestimated, as reports of outages have a direct impact on customers’ perception about the reliability and capabilities of each provider when they’re making a choice about their next mobile contract, and that means a direct impact on the bottom line.

My Reuters Moment

While I’m not proud of it, I do have the aforementioned badge of honor (and possibly the t-shirt as well). As background, I should explain that one of my roles at this particular mobile provider was to manage internet peering for the data centers. Internally, we had backhaul between the public-facing addresses for each site (so we would not have to transit the public internet when a service was not local), so internally we knew all our public routes, but externally we carefully filtered what we advertised to the Internet to ensure that traffic from outside the provider came to the right place.

Technical Error

The error I made was when updating a route-map on our edge internet routers at Data Center A. My intent had been to add a new sequence something like this:

route-map RM_OUTBOUND_TO_INTERNET permit 700
 match ip prefix-list PL_LocalRoutes

Simple, right? Unfortunately, at some point during the creation of my MOP (Method Of Procedure, or a change script), I had managed to mistype the name of the prefix-list, and my change instead read like this:

route-map RM_OUTBOUND_TO_INTERNET permit 700
 match ip prefix-list PL LocalRoutes

The MOP had been through reviews both within Engineering and with Operations, and nobody had spotted the error, and so the change was scheduled for execution. At this point it is worth explaining that this company had strict separation of duties between Operations and Engineering; Engineering wrote the MOPs, but weren’t allowed to execute them. Operations executed the MOPs, but weren’t allowed to write them; my access to the routers as an Engineer was read-only. I’ve posted previously about writing a MOP in order that it can be successfully executed by another person and I recommend reading that post too. While it’s a pain to have to write changes out in such detail, the upside is that I didn’t actually have to be there at 4AM when the change was being executed. After all, how could I help?

Fast Forward to 11:30AM

Somewhere around 11:30AM the morning of my 4AM internet change at Data Center A, I received an email asking if I had heard about an outage in Data Center B, and wondering if I could help take a look because they couldn’t figure out what had happened. This was the first report I’d heard about it, so I asked for further details of what was happening. Data Center B, it seems, was mostly offline. Throughput was way down on the internet-facing firewalls, and users going through that site were reporting that they couldn’t access many services. I thought about this for a minute, issued one command on the edge router at Data Center A, and I was able to confirm that the root cause was the change made on my behalf. I told them to roll the change back per my change script, and the problem would disappear, and within 10 minutes — by 11:45AM or so — service had been restored.

Root Cause

I learned something important that morning about Cisco IOS route-map configuration; did you know that you can match more than one prefix-list within the same match command? i.e. it’s valid to have:

match ip prefix-list PL1 PL2 PL3 PL4 PL5

This is handy to know, because it means that my typo:

match ip prefix-list PL LocalRoutes

…was not rejected as a syntax error by IOS. Instead, it was interpreted as a request to match a route in either of two prefix-lists, one called PL and one called LocalRoutes. In true IOS fashion, there was also no warning or error about the fact that the command referenced two prefix-lists, neither of which existed.

Another helpful thing to understand is that when a prefix-list is non-existent, Cisco IOS treats it as a match-all clause. Thus, instead of only matching the list of networks in PL_LocalRoutes, my route-map statement now matched all routes, and that included our internal routes to the public ranges in other data centers.
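A change review that cross-checks every prefix-list referenced in a route-map against the prefix-lists actually defined would have caught this exact typo, since both "PL" and "LocalRoutes" would show up as undefined. Here is a minimal sketch in Python; the function and the sample config are illustrative, not a complete IOS parser:

```python
import re

def undefined_prefix_lists(config_text):
    """Return prefix-list names referenced in route-maps but never defined.

    A nonexistent prefix-list silently matches everything on IOS, so any
    name this returns is a likely typo worth stopping a change for.
    """
    defined = set(re.findall(r"^ip prefix-list (\S+)", config_text, re.M))
    referenced = set()
    for line in re.findall(r"^\s*match ip prefix-list (.+)$", config_text, re.M):
        # IOS allows several prefix-lists per match line, space-separated --
        # which is exactly how 'PL_LocalRoutes' became 'PL' and 'LocalRoutes'.
        referenced.update(line.split())
    return referenced - defined

config = """\
ip prefix-list PL_LocalRoutes seq 5 permit 192.0.2.0/24
route-map RM_OUTBOUND_TO_INTERNET permit 700
 match ip prefix-list PL LocalRoutes
"""
print(sorted(undefined_prefix_lists(config)))  # ['LocalRoutes', 'PL']
```

Ten lines of linting run during MOP review would have turned a seven-hour outage into a rejected change script.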

The end result was that Data Center A was advertising routes which belonged to Data Center B, so traffic was going to the wrong place. While some of it was permitted to transit our internal network to Data Center B, the return path from B to the Internet didn’t include Data Center A, so there was an asymmetrical path through the firewalls which meant the sessions never established.

As Seen On Reuters


The outage had been running from 4:15AM until around 11:45AM, but it had only been noticed at around 7:15AM. Needless to say, this extended way beyond our maintenance window. Customers were complaining, and when I jumped on Google to see if there was any word about an outage affecting (roughly) a quarter of the American population, I was rewarded with a page and a half of news reports about it, and top of the list was Reuters. Level up! The Reuters badge, I found out, comes with a complimentary wave of nausea.

The Aftermath

The Command

What command did I issue to figure out what was going on?

show ip bgp neighbor a.b.c.d advertised-routes | inc Total

While I’m not quite at the level where I can fix radios by thinking, I was able to listen to the symptoms, think about what might cause them, realize that my change involved one of those potential causes (i.e. that I was advertising too many routes from Data Center A), and was able to validate my theory fairly easily. I knew how many routes were advertised before my change, and I knew how many routes I had intended to allow in addition, so when I checked how many routes were being advertised to one of our internet providers and saw a significantly larger number than expected, it was obvious what was wrong. I didn’t immediately know why it had happened, but I knew what had happened. Once I knew that it was the route-map change which had evidently not gone to plan, the space in the middle of the prefix-list name was an easy thing to discover.

Why Wasn’t The Problem Noticed Earlier?

Why was it 7AM before a problem was identified? The answer to this is both good and bad. During maintenance windows, the NOC were used to seeing anomalies in device performance and traffic flows as we made changes, so a culture had built up whereby anomalies would be ignored during the maintenance window, even if we had not advised that such anomalies should be expected. After the BGP change was made, traffic for Data Center B was coming in to Data Center A, and the internet-facing firewalls were blocking a huge number of sessions, and the idle session count was through the roof. CPU had doubled because two data centers’ traffic was hitting the firewall. In all cases, while these symptoms had been noted, they were ignored as the normal fluctuation during a big change.

With the benefit of hindsight, obviously the NOC would not have done this, but at that time, it’s what happened. Even at the end of the maintenance window at 6:15AM, the firewall statistics were clearly abnormal – but the NOC was changing shifts around then, and the message had been passed on by the outgoing shift that they were ignoring the firewall statistics due to maintenance activities, and consequently the next shift continued to ignore it for the next hour before somebody again questioned why the utilization and failed session statistics were still so high. This was an outage extender (i.e. something which wasn’t causal, but extended the outage beyond the point at which it could or should have been identified and fixed), because the issue had been in place for three hours already before anybody started looking at it, and we had already exited the agreed maintenance window.

Why Wasn’t I Called Earlier?

Perhaps understandably, when an outage occurred in Data Center B, Operations did not immediately consider changes made in Data Center A. Even when I was eventually contacted, it was to get help troubleshooting, not because my change was suspected of being the cause. This was a lesson learned; the data centers were inherently coupled when it came to public IP space and internet access, so it was important to always consider that coupling when an issue arose. Again, this doesn’t change the root cause of the problem, but it’s another outage extender. Once I was called, I identified the problem within five minutes and service was restored 10 minutes after that.

Surely You Tested After The Change?

We did test after the change. Data Center A — where we made the change — was working perfectly. We did not, however, test Data Center B. Why would we? The change was in Data Center A. Another lesson learned, and a good case study in considering the potential downstream impact of a change.

Hey Mr Hypocrite, Where’s Your Implementation Test Plan?

Where was my test plan? In the script, actually. Every change in the MOP was followed by a set of test steps to validate the correct implementation of the change. Before changing the route-map, the MOP gave the commands to test and note down the number of routes being sent to each internet peer. The MOP specified how many new routes should be advertised, and post-change I had included specific checks on how many routes we were advertising to each internet BGP peer, noting that the number should be [routes_before] + [added_routes].

When the Operations engineer checked their session logs, they admitted honestly that they had evidently not issued the commands specified in the MOP to validate the post-change routing. Once more this is an outage extender because had the commands been issued and the route counts had not matched what was specified in the MOP, the MOP directed the Operations engineer to stop and roll back from that point. Had the tests been carried out, the problem would have been identified at 4:20AM and rolled back by 4:25AM, limiting to 10 minutes an outage which eventually lasted nearly seven and a half hours.
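The post-change check the MOP called for is also easy to automate. Here is a minimal sketch, assuming the counts come from output of the `show ip bgp neighbor a.b.c.d advertised-routes | inc Total` command quoted earlier, ending in a line like "Total number of prefixes 125" (the exact wording varies by IOS version):

```python
import re

def advertised_count(show_output):
    """Parse the prefix count from 'advertised-routes' command output.

    Assumes a trailing summary line such as 'Total number of prefixes 125';
    adjust the pattern for your platform and software version.
    """
    m = re.search(r"Total number of prefixes\s+(\d+)", show_output)
    if m is None:
        raise ValueError("could not find prefix total in output")
    return int(m.group(1))

def post_change_ok(routes_before, added_routes, routes_after):
    """The MOP's rule: stop and roll back unless after == before + added."""
    return routes_after == routes_before + added_routes

# Pre-change baseline of 125 advertised routes; the change adds 4 more.
before = advertised_count("Total number of prefixes  125")
after = advertised_count("Total number of prefixes  15000")
if not post_change_ok(before, 4, after):
    # This branch fires here: 125 + 4 != 15000, so the change rolls back.
    print("route count mismatch - ROLL BACK")
```

Had a check like this run at 4:20AM, the mismatch would have triggered the rollback the MOP already specified.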

Whose Fault Is It Anyway?

It was my fault. I produced a MOP with a typo in it, so the root cause of the outage is all mine. However, were it not for an unfortunate storm of bad assumptions and incomplete process execution, the incident could have been identified and resolved well within the maintenance window, and somebody at Reuters could have had a quieter morning. Similarly, I would not have spent the next two days putting together a detailed Root Cause Analysis document for management and generally feeling like the worst engineer in the world.

Was I Fired?

No, I was not. I owned up to my typo, but with so many other elements contributing to the outage, it would have been very unfair if the company had singled me out. Instead, I worked with Operations to find ways to avoid this kind of issue in the future and create the necessary policy to support that goal.

Lessons to Learn

I noted a number of lessons learned on the way through, but as a brief summary:

  • Fix the outage first; point fingers later
  • Own up to your mistakes
  • Always question anomalies and see if the answers make sense
  • Always have a thorough test plan including the expected results
  • Always execute the test plan…
  • Consider downstream impacts and environment which may have a shared fate
  • Don’t do it again! Once is unfortunate; twice is just careless. Figure out what you need to do to ensure that you don’t repeat the same mistake.

I think that’s more than enough from me. If you have your own horror stories I’d love to hear them, and if you haven’t listened to The Network Collective, Episode 1, you should, because you’ll hear about some more bad days happening to other people and you can empathize or cackle with the schadenfreude, as is your preference.

Important Note

Some times, places, people and technical details about this incident have been changed to protect the guilty. And also to stop you finding it in Reuters’ archives…

If you liked this post, please do click through to the source at John’s Network Oops – As Seen On Reuters and give me a share/like. Thank you!

by John Herbert at April 24, 2017 03:52 PM

ipSpace.net Blog (Ivan Pepelnjak)

Figure Out What the Customer Really Needs

One of the toughest challenges you can face as a networking engineer is trying to understand what the customer really needs (as opposed to what they think they’re telling you they want).

For example, the server team comes to you saying “we need 5 VLANs between these 3 data centers”. What do you do?

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 24, 2017 09:51 AM

Security to the Core | Arbor Networks Security

Observed Spike in DDoS Attacks Targeting Hong Kong

Introduction Each week ASERT produces a weekly threat intelligence bulletin for Arbor customers. In addition to providing insights into the week’s security news and reviewing ASERT’s threat research activities, we also summarize the week’s DDoS attack data as reported by over 330 global Internet Service […]

by Kirk Soluk at April 24, 2017 12:39 AM

April 22, 2017

Jason Edelman's Blog

Self Driving Cars and Network Automation

Last year at Interop, there was a great mini-conference dedicated to the DevOps for Networking community. In that session, I kicked off the day with a general view of where the industry was with respect to the intersection of DevOps and networking with a focus on network automation.

One of the analogies I made was comparing network automation to self-driving cars posing the question, “Are they real?”…“Are they real for us (the consumer)?”

Self-Driving Cars

No, they are not, but I continued to make the analogy. Is complete network automation real today? While the answer is yes, it’s not really a reality for most…yet.

So, what’s the connection between self-driving cars and network automation?

Start small and expand. Pick a problem, solve it, and integrate it.

Self-Driving Cars are Coming

While self-driving cars aren’t a reality for us to purchase today, intelligent cars are: cars that have high-value services and features enhancing the way we drive, our safety, and, much more generally, the way in which we consume the streets and infrastructure around us.

Intelligent Cars

These include automated features like self-parking, back-up cameras, automated beeping as you back up, automatic brakes, GPS, and computer systems that give you a plethora of visibility into the inner workings of the car (a complex system). So yes, you had better believe it. The self-driving car is coming, one feature, chip, feedback loop, and computer program at a time.

Network Automation is Coming

All of the pieces are actually here already!

Achieving complete network automation is hard, very hard. But getting started isn’t, if you break it down into achievable milestones. Maybe it’s something like the following:

  • Generate automated reports and documentation for Campus Access layer and expand networks from there. You don’t need to start with every network type.
  • Create proper configuration templates for each new device type or for each new service being deployed. Again, you don’t need to start with every device or network type.
  • Create a compliance check for credentials in one part of the network, and gradually expand both the checks and the networks they run against.
  • Standing up a new site? Look into zero touch provisioning.
  • Having a problem with bad switches in a stack or linecards in a chassis? Perfect problem to solve.
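As an example of the credential compliance milestone, a first check can be nothing more than a scan of saved config backups. This is a hedged sketch: the backup directory layout, the `.cfg` naming, and the forbidden-string list are all assumptions to adapt to your own environment:

```python
import re
from pathlib import Path

# Strings that should never appear in a production config -- an assumed
# starting list; extend it with your organization's own forbidden defaults.
FORBIDDEN = [
    re.compile(r"snmp-server community (public|private)\b"),
    re.compile(r"^username \S+ password 0 ", re.M),  # cleartext password
    re.compile(r"^enable password ", re.M),          # should be 'enable secret'
]

def compliance_violations(config_text):
    """Return the forbidden patterns found in one device's config text."""
    return [p.pattern for p in FORBIDDEN if p.search(config_text)]

def audit(backup_dir):
    """Scan a directory of saved configs; report violations per device."""
    report = {}
    for path in Path(backup_dir).glob("*.cfg"):
        hits = compliance_violations(path.read_text())
        if hits:
            report[path.name] = hits
    return report

if __name__ == "__main__":
    print(audit("config-backups"))
```

Once this runs reliably against one part of the network, widening the scope is just adding patterns and directories, which is exactly the "start small and expand" point above.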

As use cases like these are solved week after week, you’ll have short-term wins proving the value of automation, while also moving towards the bigger picture of deploying services, integrating into 3rd party platforms, creating relevant feedback loops, offering APIs to the business, and much more.

The biggest takeaway is to make sure you build a plan, know it’ll take time to achieve, and break it up into achievable milestones. It’ll be a win for everyone involved.



April 22, 2017 12:00 AM

April 21, 2017

The Networking Nerd

The Future Of SDN Is Up In The Air

The announcement this week that Riverbed is buying Xirrus was a huge sign that the user-facing edge of the network is the new battleground for SDN and SD-WAN adoption. Riverbed is coming off a number of recent acquisitions in the SDN space, including Ocedo just over a year ago. So, why then, would Riverbed chase down a wireless company when they’re so focused on the wiring behind the walls?

The New User Experience

When SDN was a pile of buzzwords attached to an idea that had just come out of Stanford, a lot of people were trying to figure out just what exactly SDN could offer them in terms of their network. Things like network slicing were the first big pieces to be put up before things like orchestration, programmability, and APIs were really brought to the fore. People were trying to figure out how to make this hot new thing work for them. Well, almost everyone.

Wireless professionals are a bit jaded when it comes to SDN. That’s because they’ve seen it already in the form of controller-based solutions. The idea that a central device can issue commands to remote access devices and control configurations easily? Airespace was doing that over a decade ago before they got bought by Cisco. Programmability is a moot point to people that can import thousands of access points into a device and automatically have new SSIDs being broadcast on them all in a matter of seconds. Even the new crop of “controllerless” wireless systems on the market still have a central control infrastructure that sends commands to the APs. Much like we’ve found in recent years with SDN, removing the control plane from the data plane path has significant advantages.

So, what would it take to excite wireless pros about SDN? Well, as it turns out, the issue comes down to the user side of the equation. Wireless networks work very well in today’s enterprise. They form the backbone of user connectivity. Companies like Aruba are experimenting with all-wireless offices. The concept is crazy at first glance. How will users communicate without phones? As it turns out, most of them have been using instant messengers and soft phone programs for years. Their communications infrastructure has changed significantly since I learned how to install phone systems years ago. But what hasn’t changed is the need to get these applications to play nicely with each other.

Application behavior and analysis is a huge selling point for SDN and, by extension, SD-WAN. Being able to classify application traffic running on a desktop and treat it differently based on criteria like voice traffic versus web browsing traffic is huge for network professionals. This means the complicated configurations of QoS back in the day can be abstracted out of the network devices and handled by more intelligent systems further up the stack. The hard work can be done where it should be done – by systems with unencumbered CPUs making intelligent decisions rather than by devices that are processing packets as quickly as possible. These decisions can only be made if the traffic is correctly marked and identified as close to the point of origin as possible. That’s where Riverbed and Xirrus come into play.

Extending Your Brains To Your Fingers

By purchasing a company like Xirrus, Riverbed can build on their plans for SDN and SD-WAN by incorporating their software technology into the wireless edge. By classifying the applications where they live, the wireless APs can provide the right information to the SDN processes to ensure traffic is dealt with properly as it flies through the network. With SD-WAN technologies, that can mean making sure web browsing traffic is sent through local internet links, while traffic meant for main sites, like communications or enterprise applications, is sent via encrypted tunnels and monitored for SLA performance.
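As a rough sketch of that path-selection logic, with made-up link names and traffic categories rather than any real SD-WAN policy API:

```python
# Illustrative SD-WAN path selection: ordinary browsing breaks out to
# the local internet link, while voice and enterprise traffic rides the
# SLA-monitored encrypted tunnel. Categories and link labels are
# assumptions for this sketch, not a vendor interface.
TUNNEL, LOCAL = "encrypted-tunnel", "local-internet"

def select_path(category: str, tunnel_sla_ok: bool) -> str:
    """Pick an egress link for a flow classified at the wireless edge."""
    if category in ("voice", "enterprise-app"):
        # Prefer the monitored tunnel; fail over if its SLA degrades
        return TUNNEL if tunnel_sla_ok else LOCAL
    return LOCAL  # web browsing exits directly at the branch

print(select_path("voice", tunnel_sla_ok=True))  # encrypted-tunnel
print(select_path("web", tunnel_sla_ok=True))    # local-internet
```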

Network professionals can utilize SDN and SD-WAN to make things run much more smoothly for remote users without the need to install cumbersome appliances at the edge to do the classification. Instead, the remote APs now become the devices needed to make this happen. It’s brilliant when you realize how much more effective it can be to deploy a larger number of connectivity devices that contain software for application analysis than it is to drop a huge server into a branch office where it’s not needed.

With the deployment of these remote devices, Riverbed can continue to build on the software side of technology by increasing the capabilities of these devices while not requiring new hardware every time a change comes out. You may need to upgrade your APs when a new technology shift happens in hardware, like when 802.11ax is finally released, but that shouldn’t happen for years. Instead, you can enjoy the benefits of using SDN and SD-WAN to accelerate your user’s applications.

Tom’s Take

Fortinet bought Meru. HPE bought Aruba. Now, Riverbed is buying Xirrus. The consolidation of the wireless market is about more than just finding a solution to augment your campus networking. It’s about building a platform that uses wireless networking as a delivery mechanism to provide additional value to users. The spectrum part of wireless is always going to be hard to do properly. Now, the additional benefit of turning those devices into SDN sensors is a huge value point for enterprise networking professionals as well. What better way to magically deploy SDN in your network than to flip a switch and have it everywhere all at once?

by networkingnerd at April 21, 2017 06:16 PM


April 20, 2017

Networking Now (Juniper Blog)

Securing Enterprise Hybrid Clouds with Industry-Leading High-Performance Next-Generation Firewalls

Juniper Networks SRX4000 line of next-generation firewalls set a new benchmark for price and performance while enabling secure migration into hybrid clouds


As enterprises grow more dependent on cloud technologies, they need to begin adopting hybrid cloud architectures to provide greater flexibility and economic benefits. This, however, is easier said than done.


Migrating to a hybrid cloud model and deploying point firewalls presents its own set of challenges, including:


• Performance degradation at scale impacting security effectiveness

• Complex security management

• Weak connectivity between data centers

• Increased risk surface


Unfortunately, point and legacy firewalls are poorly suited for hybrid cloud environments, creating an immediate need for solutions that can provide:


  • Faster threat detection and blocking: Fully integrated, cloud-informed threat prevention (such as Juniper Networks Sky Advanced Threat Prevention) offers immediate, actionable intelligence; scalability; and integrated security services that keep you up to date and defend against the very latest threats.
  • Effective security everywhere: An architecture powered by Juniper’s Software-Defined Secure Network (SDSN) platform lets enterprises easily implement and efficiently operate their security infrastructure. An ecosystem that is continually learning about new threats enables faster enforcement and consistent security across your hybrid cloud environment, keeping costs down.
  • Flexible and scalable architecture: Building secure environments across private and public cloud data centers helps you keep your network running, delivering resiliency, high-performance NGFW functionality, complete application visibility and control, and effective threat defense.
  • Smarter control and visibility: Intuitive, scalable management tools and analytics provide actionable intelligence that empowers teams to do more with fewer resources, keeping operational costs down.
  • Industry-leading, high-performance NGFWs: Juniper’s efficient and effective physical and virtual SRX Series NGFWs optimize security, allowing you to easily implement defenses and operate them more efficiently without compromising performance.


An effective hybrid cloud solution, working along with high-performance physical and virtual next-generation firewalls deployed in private and public data centers, provides business resiliency, visibility and control, analytics, and automation—all of which help enterprises reduce business risk and focus on business critical problems.


A Software-Defined Secure Network builds threat detection, enforcement, and remediation into the very fabric of your network. Powered by Juniper’s high-performance NGFWs, along with smarter and faster application visibility and control, the Juniper hybrid cloud security architecture provides flexible, end-to-end security, allowing enterprises to protect their data within private and public data centers, campuses, or regional headquarters.


To learn more about Juniper Networks SRX Series NGFWs and how Juniper’s security solutions seamlessly extend across private and public cloud architectures without compromising performance and manageability, please download our Securing Enterprise Hybrid Clouds solution brief.


To learn more about SRX4000 Services Gateways please visit SRX4000 Services Gateways.

by abdis at April 20, 2017 06:39 PM

Router Jockey

PCAP t-shirts just in time for CLUS17

Hey guys, I just wanted to drop a quick note to let you know that I’ve relaunched my teespring shirt campaigns with enough time that you should get your orders before Cisco Live US 2017. I’ve got several types of clothing under each design, so make sure you look to see if I have what you’re looking for. This campaign is only open for 14 days – so get yours while you can!

As usual, send comments / suggestions / etc to @tonhe on twitter.

Thanks again, and I hope to see you at #CLUS17

Click below to enter my teespring storefront

The post PCAP t-shirts just in time for CLUS17 appeared first on Router Jockey.

by Tony Mattke at April 20, 2017 06:11 PM


April 19, 2017

Ethan Banks on Technology

RESPONSE: 3 Hidden Lessons Behind Top Podcasts to Help Yours Stand Out

Thoughts from the Content Marketing Institute for budding podcasters were shared here. Here’s my response to the points that stood out to me.

CMI’s big idea #1.

“At first, format trumps talent.” And then later…“Avoid the race to the bottom of simply booking the biggest guests in your niche and meandering through an unplanned episode. Instead, find your format.”

Response. To record an effective show people will listen to, you need a plan, agreed. However, the article cites an example of a 15-minute episode carved into blocks of minutes and seconds.

Perhaps that’s what you need when working against an ultra-tight timeline. However, an outline that provides structure should be adequate. Overly structuring a podcast is burdensome and can serve to stifle interesting conversation. Freedom is one of the benefits of podcasting.

Podcasting is NOT a digital regurgitation of radio, although many try to shoehorn podcasts into a radio format, because the radio business is what they understand. However, podcast content is different. Distribution is different. Listener consumption is different. Monetization is different.

And perhaps most importantly, timelines are fluid. 15-minute podcasts are being created under an artificial time constraint that begs the question…why?

On the other hand, having no format at all before hitting record is indeed bad news. Wandering, random conversations are wastes of the listeners’ time. Stay on point enough to maximize the amount of information you’re sharing or able to get your guest to share. Writing a solid outline ahead of time will get that done.

A great deal of my time each week is spent researching my guest (if there is one), reading about our topic(s), and constructing an outline with a story arc that will engage the listener. That’s my “format” such as it is, and it’s all you need. Don’t obsess about music, corny bits, falling precisely onto specific minute and second marks, etc. Just get the content right–that’s most of the battle.

CMI’s big idea #2.

“Time constraints are your strength (Spoiler alert: Nobody wants your 60-minute show).”

Response. This is flat-out wrong. The length of your podcast episode has everything to do with fair treatment of the material chosen for the episode and nothing to do with creating a bunch of abbreviated episodes to stuff the future download queue.

As a podcaster, you must know how to move the conversation along. There comes a point where you’ve talked about a topic enough, and it’s time to get to the next thing. On the other hand, many subjects offer tangents that are worth exploring. Podcasting is about right-sizing the time spent as the conversation progresses.

The limit of your podcast episode is not constrained by the clock. The limit is when there’s nothing else worth discussing. That is decidedly a balancing act, as no one has the attention span for your ocean-boiling, yak-shaving carrying on even if it’s interesting. There is some limit. But I can say with confidence that 60 minutes isn’t necessarily that limit.

Case in point: a Tim Ferriss Show episode is routinely over an hour, and not infrequently more than two. Tim’s show is perhaps an outlier, but it’s one of the most popular podcasts in the world for a reason. The content is just that compelling, despite the length of the shows.

A second case in point is the TED Radio Hour on NPR. This frustrating show is constrained by a very specific format, as it’s not only a podcast, it’s also a broadcast radio feature. Therefore, the content ends up as a mix of stretching out some segments longer than they need to go, while also rushing through certain guests who clearly had more to offer.

The TED Radio Hour is slickly produced and predictable, but the ultimate value for the listener is sacrificed on the altar of format, length being a chief limitation. Too bad. In TED Radio Hour’s heyday, which I believe is long past, they found some very interesting topics and people.

Final case in point comes from experiences with my own shows. Show duration is just not a problem.

  1. I know from interacting with hundreds of audience members over many years that 60 minutes is just fine. They have long commutes. They want to flex their brains while mowing the lawn. Etc. Time is something they are willing to expend on worthwhile content, and therefore, they want to hear complex topics treated fairly.
  2. On one show, my co-host and I experimented with locking down the show to 30 minutes. No one thanked us for this. In fact, the opposite was true. We had listeners tell us that the longer shows were better, and so we went back to the longer format.

If your show is decent and the subjects you choose demand it, someone will want your 60-minute show–just so long as you aren’t waffling on after you ran out of worthwhile things to say.

CMI’s big idea #3.

“Create recurring segments or content brands within the show.”

Response. I have no big disagreement with this point, but don’t obsess about it. When your show is new, it hasn’t yet found its voice. Give your show ten or more episodes to settle into a groove, then see what segments naturally occur.

Once you’ve picked them out, run with them, but don’t be a slave to them. You don’t have to have material for a recurrent segment every single show. Do it when you’ve got it, but don’t force it if you don’t.

For example, on Citizens of Tech, we have “Content I Like” and “Today I Learned” segments on pretty much every show. Eric and I are always finding interesting things we want to share for those bits. On the other hand, we also have “Privacy Watch” and “Deathwatch” segments, but we don’t run either of those two segments every show. There’s just not enough interesting content to fill those segments every episode.

Stop overthinking.

There is no one-size-fits-all to podcasting. What works for one audience won’t work for another. However, the opportunities are endless. Stop trying to find a magic formula that will gain you audience. I don’t care how good your show is, audience will take a long time to accumulate if you don’t have an existing audience to use as a launching pad for your new show. (And maybe even if you do.)

So…forget about all of that. Be creative. Be different. But be focused, delivering a consistent product that is, at the end of the day, yours. If it’s good, the audience will follow if you’re patient enough and perhaps get a few lucky breaks.

The podcasts I am the most interested in now tend to be rather “out there.” Weird stuff, with odd formats. I appreciate slick production values, but at the same time I’m sick of the homogenized polish that renders podcasts sterile, canned, and phony.

Make something you want to listen to. Other people will want to listen to it, too.

Ethan Banks writes & podcasts about IT, new media, and personal tech.
about | subscribe | @ecbanks

by Ethan Banks at April 19, 2017 06:52 PM

My Etherealmind

IPv6 Extensions Are Already Dead

This wiki entry disguised as an RFC, RFC 7872 (“Observations on the Dropping of Packets with IPv6 Extension Headers in the Real World”), highlights that IPv6 Extension Headers are effectively unusable, since internet providers are dropping IPv6 fragments and failing to support Extension Headers. In IPv6, an extension header is any header that follows the initial 40 […]
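To illustrate the chain structure being described, here is a minimal sketch that walks the Next Header chain of a raw IPv6 packet. It recognizes only a handful of common extension header types and is in no way a production parser.

```python
import struct

# Minimal sketch: the IPv6 fixed header is 40 bytes; byte 6 is the
# Next Header field. Each extension header starts with its own Next
# Header byte, followed (except for Fragment) by a length byte counted
# in 8-octet units beyond the first 8 octets.
EXT_HDRS = {0: "Hop-by-Hop", 43: "Routing", 44: "Fragment", 60: "Dest Options"}

def ext_header_chain(packet: bytes):
    """Return the names of the extension headers in an IPv6 packet."""
    next_hdr = packet[6]   # Next Header field of the fixed header
    offset = 40            # extension headers start right after it
    chain = []
    while next_hdr in EXT_HDRS:
        chain.append(EXT_HDRS[next_hdr])
        if next_hdr == 44:                       # Fragment: fixed 8 bytes
            hdr_len = 8
        else:                                    # others: (len + 1) * 8
            hdr_len = (packet[offset + 1] + 1) * 8
        next_hdr = packet[offset]
        offset += hdr_len
    return chain

# Toy packet: fixed header with Next Header = 60 (Destination Options),
# then one 8-byte Destination Options header pointing at TCP (6),
# padded out with a PadN option.
fixed = struct.pack("!IHBB", 0x60000000, 8, 60, 64) + bytes(32)
dest_opts = bytes([6, 0, 1, 4, 0, 0, 0, 0])
print(ext_header_chain(fixed + dest_opts))  # ['Dest Options']
```

RFC 7872’s measurements amount to sending packets with chains like this toward real destinations and seeing how often they never arrive.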

The post IPv6 Extensions Are Already Dead appeared first on EtherealMind.

by Greg Ferro at April 19, 2017 03:27 PM

ipSpace.net Blog (Ivan Pepelnjak)

Amazing Discovery: Stability Matters

Here’s an interesting blog post (particularly as it’s coming from a well-known cloud evangelist): at the infrastructure level stability matters more than agility or speed-of-deployment. Welcome to real world ;)

by Ivan Pepelnjak (noreply@blogger.com) at April 19, 2017 06:11 AM


April 18, 2017

My Etherealmind

Cisco IOS-XR: the buggy XML API

Cisco still can't write reliable applications for its own IOS-XR operating system

The post Cisco IOS-XR: the buggy XML API appeared first on EtherealMind.

by Greg Ferro at April 18, 2017 06:39 PM

Moving Packets

Response: The Network Collective, Episode 1

The Network Collective

The end of March brought with it the first episode of a neat new project called The Network Collective, a video roundtable for networking engineers. The hosts and co-founders of this escapade are Jordan Martin (@BCJordo), Eyvonne Sharp (@SharpNetwork) and Phil Gervasi (@Network_Phil).

Top 10 Ways To Break Your Network

The Network Collective, Episode 1

Episode 1 brought three guests to the virtual table: Carl Fugate, Mike Zsiga and Jody Lemoine, the latter of whom (top right on the YouTube video) is actually blurry in real life, and this is not a video artifact. The topic for discussion was the Top 10 Ways To Break Your Network. Thankfully, the show didn’t actually provide tips on how to break your network — as if we need any help doing that — but instead looked at the shameful ways in which each participant had managed to cause network destruction in the past, and what lessons could be learned.

The fact that five of six experienced professionals are willing to own up to their blunders (one brought a colleague’s mistake to put up on the chopping block) actually signals one of the most important lessons that the episode highlighted, which is to be honest and own up to your mistakes. It is better for your career to do that than to pretend that you have no idea how an outage happened. Trust me; I have a very particular set of skills, skills I have acquired over a very long career. Skills that make me a nightmare for people who cause outages. If you admit your error up front, that’ll be the end of it. I will not throw you under the bus; I will not pursue you. But if you try to cover up your error, I will look for you, I will find you, and I will hang you out to dry. But I digress… The long and short of it is that if I waste my time tracking down the source of a problem which somebody knew all along but didn’t want to admit to, I’m going to be pretty steamed. As a consultant for 16 years, one of many mantras I learned to live by is this:

Bad News Doesn't Get Better With Age

It’s All About The Environment

With that said, I feel that there’s another important lesson here, and it’s for management rather than the engineers. As a manager, it behooves you to create an environment which encourages honesty instead of punishing it. I have worked in environments where the most important part of finding the root cause of an outage was assigning blame to an individual. Guess what? Nobody ever wanted to own up to doing anything because they were fearful for their jobs. If you’re currently thinking Well, of course, that’s obvious!, you’d be right, yet I’ve seen and heard about companies like this far too often. How does your company treat honest mistakes?

Confession Time

We all make mistakes. The reasons for those mistakes vary from carelessness and over-confidence through to ignorance, software bugs, unfamiliarity with an environment and sheer bad luck. However, they are mistakes and — other than in circumstances of exceptional disgruntlement — are not an intentional attempt to take down a network. I don’t want to sound like a greetings card, but every mistake is also an opportunity to learn, and this is where the metaphorical rubber meets the road. The aim of performing a root cause analysis (RCA) after an outage is not simply to determine what happened; it should also be to look at how that same mistake can be avoided in the future. Without the latter, there’s no point in performing the RCA in the first place, in my opinion.

Finding Root Cause

When looking for a root cause, I go beyond simply the action that caused the outage. I ask questions like:

  • Was there a process failure which allowed this to happen (or did somebody break a process which would have prevented the issue)?
  • How quickly was the issue discovered? Why did it take that long?
  • Were there extenders, i.e. did something happen (or not happen) which meant that the outage continued for longer than it needed to?
  • What testing was being done during/after the change, and did it catch the error? If not, why not? That is, were there holes in the test plan that we only now recognize?

One of the comments during this episode was along the lines that an outage can occur, and then steps are taken to make sure that particular outage path can’t happen again, but it’s almost pointless because the next outage will inevitably be something else unexpected. The implication seemed to be that making changes to avoid a repeat incident was somehow pointless. I respectfully disagree. The first time a mistake happens, it’s a mistake. If the same mistake happens again because I didn’t take steps to prevent it, then it’s not a mistake any more; it’s a known, unresolved problem.

As a corollary to that, if an engineer makes the same mistake repeatedly, perhaps this career is not for them.

All Aboard The Blunder Bus

In response to this first episode, in a future post I will share one of my own epic blunders and analyze the lessons to be learned from what happened.

The Network Collective

The Network Collective looks like it should be an interesting project to follow, and I would recommend subscribing. I love hearing tales from the real world, and next week’s recording of Episode 2 (Choosing a Routing Protocol) features the ubiquitous Russ White. What’s not to love?

If you liked this post, please do click through to the source at Response: The Network Collective, Episode 1 and give me a share/like. Thank you!

by John Herbert at April 18, 2017 01:54 PM

ipSpace.net Blog (Ivan Pepelnjak)

Automate Everything: ipSpace.net Is Coming Back to US

After the last US-based ipSpace.net workshop a lot of people asked me about the next one. It took a long time, but here it is: I’m running an on-site automation workshop together with several friends with outstanding hands-on experience in Colorado in late May.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 18, 2017 11:09 AM

April 17, 2017

My Etherealmind

CPU Failures Hurt Intel’s Bottom Line

Unsurprisingly, the failure of the Intel Atom C2000 is costing money

The post CPU Failures Hurt Intel’s Bottom Line appeared first on EtherealMind.

by Greg Ferro at April 17, 2017 05:18 PM

Network Design and Architecture

April Online CCDE Class is going to start today

I am excited, as the 2017 CCDE April Online (WebEx) class starts today. Actually, there is only half an hour until we begin. Each day will run 4 hours, and the class will take at least 11 days. We will go through the theory, best practices, and the case studies for many […]

The post April Online CCDE Class is going to start today appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.

by Orhan Ergun at April 17, 2017 03:23 PM


April 14, 2017

ipSpace.net Blog (Ivan Pepelnjak)

Programmable ASICs on Software Gone Wild

During Cisco Live Europe 2017 (which I got to attend thanks to the Tech Field Day crew kindly inviting me) I had a nice chat with Peter Jones, principal engineer @ Cisco Systems. We started with a totally tangential discussion on why startups fail, and quickly got back to flexible hardware and why one would want to have it in a switch.

Read more ...

by Ivan Pepelnjak (noreply@blogger.com) at April 14, 2017 06:24 AM


April 13, 2017

The Networking Nerd

Changing The Baby With The Bathwater In IT

If you’re sitting in a presentation about the “new IT”, there’s bound to be a guest speaker talking about their digital transformation or service provider shift in their organization. You can see this coming. It’s a polished speaker, usually a CIO or VP. They talk about how, with the help of the vendor on stage with them, they were able to rapidly transform their infrastructure into something modern while at the same time changing processes to accommodate faster IT response and more productive workers, increasing revenue and transforming IT from a cost center into a profit center. The key components are simple:

  1. Buy new infrastructure from $vendor
  2. Transform all processes to be more agile, productive, and better.

Why do those things always happen in concert?

Spring Cleaning

Infrastructure grows old. That’s a fact of life. Outside of some very specialized hardware, no one is using the same desktop they had ten years ago. No enterprise is still running Windows 2000 server on an IBM NetFinity server. No one is still using 10Mbps Ethernet over Thinnet to connect their offices. Hardware marches on. So when we buy new things, we as technology professionals need to find a way to integrate them into our existing technology stack.

Processes, on the other hand, are very slow to change. I can remember dealing with process issues when I was an intern for IBM many, many years ago. The process we had for deploying a new workstation had many, many reboots involved. The deployment team worked out a new strategy to streamline deployments and make things run faster. We brought our plan to the head of deployments. From there, we had to:

  • Run tests to prove that it was faster
  • Verify that the process wasn’t compromised in any way
  • Type up new procedures in formal language to match the existing docs
  • Then submit them for ISO approval

And when all those conditions were met, we could finally start using our process. All in all, with aggressive testing, it still took two months.

Processes are things that are thought to be carved in stone, never to be modified or changed in any way for the rest of time. Unless the stones break or something major causes a process change. Usually, that major change is a whole truckload of new equipment showing up on the back dock attached to a consultant telling IT there is a better way (TM) to do things.

Ceteris Paribus

Ceteris Paribus is a latin term that means “all else unchanged”. We use it when we talk about having multiple variables in an equation and the need to keep them constant to be able to measure changes appropriately.

The funny thing about all these transformations is that it’s hard to track what actually made improvements when you’re changing so many things at once. If the new hardware is three or four times faster than your old equipment, would it show that much improvement if you just used your old software and processes on it? How much faster could your workloads execute with new CPUs and memory management techniques? How about collapsing your virtual infrastructure onto fewer and fewer physical servers because of advances there? Running old processes on new hardware can give you a very good idea of how good the hardware is. Does it meet the criteria for selection that you wanted when it was purchased? Or, better still, does it seem like you’re not getting the performance you paid for?

Likewise, how are you able to know for sure that the organization and process changes you implemented actually did anything? If you’re implementing them on new hardware how can you capture the impact? There’s no rule that says that new processes can only be implemented on new shiny hardware. Take a look at what Walmart is doing with OpenStack. They most certainly aren’t rushing out to buy tons and tons of new servers just for OpenStack integration. Instead, they are taking streamlined processes and implementing them on existing infrastructure to see the benefits. Then it’s easy to measure and say how much hardware you need to expand instead of overbuying for the process changes you make.

Tom’s Take

So, why do these two changes always seem to track with each other? The optimist in me wants to believe that it’s people deciding to make positive changes all at once to pull their organization into the future. Since any installation is disruptive, it’s better to take the huge disruption and retrain for the massive benefits down the road. It’s a rosy picture indeed.

The pessimist in me wonders if all these massive changes aren’t somehow tied to the fact that they always come with massive new hardware purchases from vendors. I would hope there isn’t someone behind the scenes with the ear of the CIO pushing massive changes in organization and processes for the sake of numbers. I would also sincerely hope that the idea isn’t to make huge organizational disruptions for the sake of “reducing overhead” or “helping tell the world your story” or, worse yet, “making our product look good because you did such a great job with all these changes”.

The optimist in me is hoping for the best. But the pessimist in me wonders if reality is a bit less rosy.

by networkingnerd at April 13, 2017 03:33 PM

Router Jockey

PNDA provides scalable and reactive network analytics

During Networking Field Day 15, our friends from the Linux Foundation, including Lisa Caywood, briefed us on a recent “acquisition” from Cisco. PNDA (pronounced “Panda”) is an open source Platform for Network Data Analytics, which aggregates data from multiple sources on a network, including real-time performance indicators, logs, network telemetry, and other useful metrics; in combination with Apache Spark, the data is then analyzed to find useful patterns. None of this should be confused with Cisco’s recent announcement of the Tetration analytics platform. Tetration is a data center solution aimed at a very particular space, whereas PNDA is more of a horizontally focused platform that is cross-vendor and cross-dataset. Nor is this project a fork of the Cisco Tetration product; the two evolved from completely separate code bases. Because PNDA is an open source initiative, it can take advantage of many existing projects, like Apache Spark, to build a robust analytics platform, which keeps it extremely flexible. While PNDA’s focus is solely on the network, other projects out there are using it as a jumping-off point to perform analytics on other data. Think of the project as the glue that joins independent projects into something whole, greater than the sum of its parts.

The project strives to deliver processed data to downstream applications, where it can then be evaluated. It does this using Apache Kafka and ZooKeeper to distribute high-velocity data. Kafka consumer applications can consume this data, or you can create your own toolchain of data processing applications and then dump the results into a Hadoop cluster or return them to the Kafka ecosystem.


PNDA Open Source Ecosystem

Use Cases

Some of the key market space for PNDA is currently in a few specific use cases: digesting large amounts of data from a CMTS cable plant, or reading sensor data from an entire city’s network infrastructure devices. The GiLAN project enables real-time service assurance for ISPs. Using PNDA as an input for all syslog and SNMP traffic, they can pull this data into Logstash, feed it to Kafka, and then process it using Moogsoft’s Incident.Moog data analysis and presentation software. This data can also be consumed by other applications, like Ontology’s Real-Time Inventory platform, enabling the ISP’s NOC to respond to real-time faults in the infrastructure. All of this cross-referenced and analyzed data can reveal trends that predict future service-impacting issues, so they can be fixed before they cause an outage.
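The flow described above (high-velocity telemetry in, trend detection out) can be caricatured with an in-memory queue standing in for a Kafka topic. The record shape and threshold below are invented for illustration and are not part of PNDA or any of the tools named above.

```python
from collections import deque

bus = deque()  # toy stand-in for a Kafka topic

def produce(device: str, metric: str, value: float) -> None:
    """Producer side: push one telemetry record onto the bus."""
    bus.append({"device": device, "metric": metric, "value": value})

def predict_faults(error_threshold: float = 0.05):
    """Consumer side: flag devices whose error rate crosses a threshold,
    the (greatly simplified) essence of predictive service assurance."""
    flagged = set()
    while bus:
        rec = bus.popleft()
        if rec["metric"] == "error_rate" and rec["value"] > error_threshold:
            flagged.add(rec["device"])
    return flagged

produce("cmts-1", "error_rate", 0.01)
produce("cmts-2", "error_rate", 0.09)  # trending toward an outage
print(predict_faults())  # {'cmts-2'}
```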


<figure class="wp-caption aligncenter" id="attachment_6287" style="width: 960px">CMTS Predictive Service Management<figcaption class="wp-caption-text">CMTS Predictive Service Management</figcaption></figure>

PNDA Data Assurance

One of the really impressive features of this open source software is its ability to provide end-to-end assurance that data is not only accepted, but also processed and stored, through every last bit of the ecosystem. The bits of green you see in this console obviously indicate that things are going well within the PNDA infrastructure. What really impresses me is that PNDA constantly verifies the ecosystem by sending test data through it and ensuring that the result is indeed what the platform was expected to produce. That’s another great feature PNDA is able to incorporate that is missing from the upstream products it leverages.
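That assurance mechanism amounts to synthetic end-to-end probing, which can be sketched like this. The pipeline stub and record fields are assumptions for illustration; PNDA’s actual implementation differs.

```python
store = {}  # toy stand-in for the storage tier (e.g. a Hadoop cluster)

def pipeline(record: dict) -> None:
    """Stub for the full ingest -> process -> store path."""
    store[record["id"]] = {**record, "processed": True}

def assurance_probe() -> bool:
    """Inject a known test record and verify every stage handled it."""
    probe = {"id": "probe-001", "payload": "canary"}
    pipeline(probe)
    result = store.get("probe-001")
    return bool(result and result["processed"] and result["payload"] == "canary")

print(assurance_probe())  # True only if the whole chain worked
```

Run on a schedule, a probe like this turns “the console is green” into a continuously re-verified claim rather than a snapshot.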


<figure class="wp-caption aligncenter" id="attachment_6290" style="width: 800px">PNDA Console<figcaption class="wp-caption-text">PNDA Console</figcaption></figure>



All in all, the Linux Foundation has been working on some rather interesting initiatives, including DPDK, IoTivity, ONAP, Let’s Encrypt, Open vSwitch, OpenDaylight, OPNFV, and Prometheus! With PNDA joining this list, it’s easy to see that the open source initiative is alive and well in the networking space. If any of my ramblings here interest you, please take the time to watch the presentation below from Networking Field Day 15 on PNDA.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="360" src="https://player.vimeo.com/video/212795551?title=0&amp;byline=0&amp;portrait=0" style="display:block; margin:0 auto;" width="640"></iframe>

Tech Field Day Disclaimer

Tech Field Day is made possible by the sponsors who are footing the bill for the travel and living expenses of delegates such as myself. Sponsors should understand that their financing of Tech Field Day in no way guarantees them any bias from the delegates and that they are only there to provide their honest and direct opinions of the solutions they present. For my full disclaimer, click here.

The post PNDA provides scalable and reactive network analytics appeared first on Router Jockey.

by Tony Mattke at April 13, 2017 02:30 PM

Networking Now (Juniper Blog)

Juniper Networks Security Issues & Predictions (for 2017)



Recent focus in cybersecurity has been on how to remain ahead of advanced attacks. Whilst this is important, 2016 proved that many organisations had missed fundamental security controls: ransomware seeping through email gateways, weak passwords in use on critical systems, users able to access data, files and systems across their internal networks, out-of-date security software, poor patch-management controls, and low use of encryption, with data being stored in clear text. The list goes on and on. Why?


This series of articles will go into detail on network issues and predictions which we see on the horizon for the coming year. Please read on for a high-level overview of what you can look forward to.


If you enjoyed reading this blog and would like to read related security blogs please visit here

by lpitt at April 13, 2017 01:49 PM

How the Fast Evolution of Stealthy Malware Requires a Rethink of Security


Stealth, the art of remaining hidden, has been a force of nature since before the dawn of mankind. Long before we were standing upright on the Savannah, nature had already figured out that one great way of staying alive was to remain silent, hidden out of sight and with the wind in your face as you watch your prey. As in nature, the art of remaining hidden continues to evolve for the cybercriminal as well.

by lfisher at April 13, 2017 10:49 AM