September 16, 2021

Potaroo blog

Regulating Big Tech. This Time, for sure!

There is growing unease in the US and elsewhere over the extraordinary rise of the technology giants, not just in monetary terms but in terms of their social power as well. There is a growing sentiment that the current situation will never be adequately corrected by competitive pressures within the market itself; some further form of regulatory intervention will be needed to force a fundamental realignment of these players. In the meantime, regulators appear to be finally catching up with the online world in the US, in Europe, and in China.

September 16, 2021 06:00 PM Blog (Ivan Pepelnjak)

LSA/LSP Flooding in OSPF and IS-IS

Peter Paluch loves blogging in microchunks on Twitter ;) This time, he described the differences between OSPF and IS-IS, and gracefully allowed me to repost the explanation in a more traditional format.

My friends, I happen to have a different opinion. It will take a while to explain it and I will have to seemingly go off on a tangent. Please have patience. As a teaser, though: The 2Way state between DRothers does not improve flooding efficiency – in fact, it worsens it.

September 16, 2021 07:17 AM

September 15, 2021 Blog (Ivan Pepelnjak)

New: Design Clinic

In early September, I started yet another project that’s been on the back burner for over a year: Design Clinic (aka Ask Me Anything Reasonable in a more structured format). Instead of collecting questions and answering them in a podcast (example: Deep Questions podcast), I decided to make it more interactive with a live audience and real-time discussions. I also wanted to keep it valuable to anyone interested in watching the recordings, so we won’t discuss obscure failures of broken designs or dirty tricks that should have remained in CCIE lab exams.

September 15, 2021 07:32 AM

XKCD Comics

September 14, 2021

My Etherealmind Blog (Ivan Pepelnjak)

Stateful Switchover (SSO) 101

Stateful Switchover (SSO) is another seemingly awesome technology that can help you implement high availability when facing a broken non-redundant network design. Here’s how it’s supposed to work:

  • A network device runs two copies of the control plane (primary and backup);
  • Primary control plane continuously synchronizes its state with the backup control plane;
  • When the primary control plane crashes, the backup control plane already has all the required state and is ready to take over in moments.

Delighted? You might be disappointed once you start digging into the details.
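The three steps above can be sketched as a toy model (class and method names are mine, purely illustrative; real SSO implementations synchronize state incrementally over an internal message channel, not by copying everything on each change):

```python
import copy

class ControlPlane:
    """Toy control plane holding routing state (hypothetical model)."""
    def __init__(self):
        self.routes = {}          # prefix -> next hop
        self.neighbors = set()    # adjacency state

class SSOPair:
    """Primary/backup pair: every state change is mirrored to the backup."""
    def __init__(self):
        self.primary = ControlPlane()
        self.backup = ControlPlane()

    def add_route(self, prefix, next_hop):
        self.primary.routes[prefix] = next_hop
        self._sync()              # continuous synchronization (step 2)

    def _sync(self):
        # Real systems sync incrementally; a full deep copy per change
        # is only for illustration.
        self.backup.routes = copy.deepcopy(self.primary.routes)
        self.backup.neighbors = copy.deepcopy(self.primary.neighbors)

    def switchover(self):
        # Primary crashed: backup already holds the state (step 3)
        # and a fresh standby control plane is started.
        self.primary, self.backup = self.backup, ControlPlane()
        return self.primary

pair = SSOPair()
pair.add_route("10.0.0.0/8", "192.0.2.1")
active = pair.switchover()
print(active.routes)   # state survived the failover
```

The catch hinted at above lives in `_sync()`: keeping two control planes in lockstep for every protocol and every data structure is exactly where the details get ugly.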

September 14, 2021 06:51 AM

September 13, 2021 Blog (Ivan Pepelnjak)

Configuring NSX-T Firewall with a CI/CD Pipeline

The initial implementation of Noël Boulene’s automated provisioning of NSX-T distributed firewall rules changed the NSX-T firewall configuration based on Terraform configuration files. To make the deployment fully automated, he went a step further and added a full-blown CI/CD pipeline using GitHub Actions and Terraform Cloud.

Not everyone is as lucky as Noël – developers in his organization already use GitHub and Terraform Cloud, making his choices totally frictionless.

September 13, 2021 06:20 AM

Potaroo blog

TLS with a side of DANE

These are some notes I took from the DNS OARC meeting held in September 2021. This was a short virtual meeting, but for those of us missing a fix of heavy-duty DNS, it was very welcome in any case!

September 13, 2021 05:00 AM

XKCD Comics

September 11, 2021 Blog (Ivan Pepelnjak)

Worth Reading: Ops Questions in Software Engineering Interviews

Charity Majors published another must-read article: why every software engineering interview should include ops questions. Just a quick teaser:

The only way to unwind this is to reset expectations, and make it clear that:

  • You are still responsible for your code after it’s been deployed to production, and
  • Operational excellence is everyone’s job.

Adhering to these simple principles would remove an enormous amount of complexity from typical enterprise IT infrastructure… but I’m afraid it’s not going to happen anytime soon.

September 11, 2021 06:49 AM

September 10, 2021

The Networking Nerd

Fast Friday – Podcasts Galore!


It’s been a hectic week and I realized that I haven’t had a chance to share some of the latest stuff that I’ve been working on outside of Tech Field Day. I’ve been a guest on a couple of recent podcasts that I loved.

Art of Network Engineering

I was happy to be a guest on Episode 57 of the Art of Network Engineering podcast. AJ Murray invited me to take part with all the amazing co-hosts. We talked about some fun stuff including my CCIE study attempts, my journey through technology, and my role at Tech Field Day and how it came to be that I went from being a network engineer to an event lead.

The interplay between the hosts and me during the discussion was great. I felt like we probably could have gone another hour if we really wanted to. You should definitely take a listen and learn how I kept getting my butt kicked by the CCIE open-ended questions, or what it’s like to be a technical person on a non-technical briefing.

IPv6, Wireless, and the Buzz

I love being able to record episodes of Tomversations on YouTube. One of my latest was all about IPv6 and Wi-Fi 6E. As soon as I hit the button to publish the episode I knew I was going to get a call from my friends over at the IPv6 Buzz podcast. Sure enough, I was able to record an episode with them all about the parallels I see between the two technologies.

What I love about this podcast is that these are the experts when it comes to IPv6. Ed and Tom and Scott are the people that I would talk to about IPv6 any day of the week. And having them challenge my assertions about what I’m seeing helps me understand the other side of the coin. Maybe the two aren’t as close as I might have thought at first but I promise you that the discussion is well worth your time.

Tom’s Take

I don’t have a regular podcast aside from Tomversations so I’m not as practiced in the art of discussion as the people above. Make sure you check out those episodes but also make sure to subscribe to the whole thing because you’re going to love all the episodes they record.

by networkingnerd at September 10, 2021 09:43 PM Blog (Ivan Pepelnjak)

Lessons Learned: Fundamentals Haven't Changed

Here’s another bitter pill to swallow if you desperately want to believe in the magic powers of unicorn dust: laws of physics and networking fundamentals haven’t changed (see also: RFC 1925 Rule 11).

Whenever someone is promising a miracle solution, it’s probably due to them working in marketing or having no clue what they’re talking about (or both)… or it might be another case of adding another layer of abstraction and pretending the problems disappeared because you can’t see them anymore.

You’ll need a Free Subscription to watch the video.

September 10, 2021 07:10 AM

XKCD Comics

September 09, 2021

Honest Networker Blog (Ivan Pepelnjak)

netsim-tools Overview

In December 2020, I got sick-and-tired of handcrafting Vagrantfiles and decided to write a tool that would, given a target networking lab topology in a text file, produce the corresponding Vagrantfile for my favorite environment (libvirt on Ubuntu). Nine months later, that idea turned into a pretty comprehensive tool targeting networking engineers who like to work with CLI and text-based configuration files. If you happen to be of the GUI/mouse persuasion, please stop reading; this tool is not for you.
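The underlying idea (read a terse topology description, emit a Vagrantfile) can be sketched like this; the input format and template below are simplified illustrations, not netsim-tools’ actual topology syntax or generated output:

```python
# Toy translator: topology description -> Vagrantfile text.
# Node names, box names and the template are hypothetical examples.

topology = {
    "nodes": [
        {"name": "r1", "box": "cisco/iosv"},
        {"name": "r2", "box": "cisco/iosv"},
    ],
}

def render_vagrantfile(topo):
    lines = ['Vagrant.configure("2") do |config|']
    for node in topo["nodes"]:
        lines += [
            f'  config.vm.define "{node["name"]}" do |n|',
            f'    n.vm.box = "{node["box"]}"',
            '    n.vm.provider :libvirt',   # favorite environment: libvirt
            '  end',
        ]
    lines.append('end')
    return "\n".join(lines)

print(render_vagrantfile(topology))
```

netsim-tools has long since grown well beyond this single transformation, but generating the Vagrantfile from a text file was the original itch.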

During those nine months, I slowly addressed most of the challenges I always had creating networking labs. Here’s how I would typically approach testing a novel technology or software feature:

September 09, 2021 07:16 AM

September 08, 2021

Packet Pushers

Book Review: ‘Project Hail Mary’ by Andy Weir

Project Hail Mary is the latest work of fiction from Andy Weir, best known for his debut novel The Martian. And just like in The Martian, the protagonist’s survival in this new book depends on his ability to solve problems, troubleshoot mishaps, and generally “science the sh*t” out of things. Project Hail Mary is a text-based […]

The post Book Review: ‘Project Hail Mary’ by Andy Weir appeared first on Packet Pushers.

by Drew Conry-Murray at September 08, 2021 07:49 PM Blog (Ivan Pepelnjak)

Open-Source DMVPN Alternatives

When I started collecting topics for the September 2021 Design Clinic one of the subscribers sent me an interesting challenge: are there any open-source alternatives to Cisco’s DMVPN?

I had no idea and posted the question on Twitter, resulting in numerous responses pointing to a half-dozen alternatives. Thanks a million to @MarcelWiget, @FlorianHeigl1, @PacketGeekNet, @DubbelDelta, @Tomm3h, @Joy, @RoganDawes, @Yassers_za, @MeNotYouSharp, @Arko95, @DavidThurm and several others who chimed in with additional information.

Here’s what I learned:

September 08, 2021 07:22 AM

XKCD Comics

September 07, 2021 Blog (Ivan Pepelnjak)

Non-Stop Forwarding 101

Non-Stop Forwarding (NSF) is one of those ideas that look great in a slide deck and marketing collaterals, but might turn into a giant can of worms once you try to implement them properly (see also: stackable switches or VMware Fault Tolerance).

NSF has been around for at least 15 years, so I’m positive at least some vendors got most of the details right; I’m also pretty sure a few people have scars to prove they’ve been around the non-optimal implementations.

September 07, 2021 06:47 AM

September 06, 2021 Blog (Ivan Pepelnjak)

Comparing Forwarding Performance of Data Center Switches

One of my subscribers is trying to decide whether to buy an -EX or an -FX version of a Cisco Nexus data center switch:

I was comparing Cisco Nexus 93180YC-FX and Nexus 93180YC-EX. They have the same port distribution (48x 10/25G + 6x 40/100G) and 3.6 Tbps switching capacity, but the -FX version has just a 1200 Mpps forwarding rate while the -EX version goes up to 2600 Mpps. What could be the reason for the difference in forwarding performance?

Both switches are single-ASIC switches. They have the same total switching bandwidth, so the -FX switch must take longer to forward a packet, resulting in a lower packets-per-second figure. It looks like the ASIC in the -FX switch is configured in a more complex way: more functionality means more complexity, which results in either reduced performance or higher cost.
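As a sanity check on those numbers, here is my own back-of-the-envelope arithmetic (ignoring any internal recirculation or overheads) relating the quoted Mpps figures to the switch’s aggregate port bandwidth:

```python
# 48x 25G + 6x 100G = 1800 Gbps in one direction; the advertised
# 3.6 Tbps switching capacity counts both directions.

PORT_GBPS = 48 * 25 + 6 * 100          # 1800 Gbps, one direction
WIRE_OVERHEAD = 20                     # preamble (8B) + inter-frame gap (12B)

def line_rate_mpps(frame_bytes):
    """Packets/s needed to fill every port with frames of this size."""
    bits_per_frame = (frame_bytes + WIRE_OVERHEAD) * 8
    return PORT_GBPS * 1e9 / bits_per_frame / 1e6

def min_frame_for(mpps):
    """Smallest frame the switch can forward at line rate at this Mpps."""
    bits_per_frame = PORT_GBPS * 1e9 / (mpps * 1e6)
    return bits_per_frame / 8 - WIRE_OVERHEAD

print(f"64B line rate needs ~{line_rate_mpps(64):.0f} Mpps")
print(f"1200 Mpps sustains line rate down to ~{min_frame_for(1200):.0f}B frames")
```

By this math, 2600 Mpps is essentially wire speed for 64-byte frames across all ports, while 1200 Mpps only sustains line rate for frames of roughly 170 bytes and up.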

September 06, 2021 06:34 AM

XKCD Comics

September 03, 2021

The Networking Nerd

Getting Blasted by Backdoors


I wanted to take a minute to talk about a story I’ve been following that’s had some new developments this week. You may have seen an article talking about a backdoor in Juniper equipment that caused some issues. The issue at hand is complicated, and the linked article does a good job of explaining some of the nuance. Here’s the short version:

  • The NSA develops a version of Dual EC random number generation that includes a pretty substantial flaw.
  • That flaw? If you know the pseudorandom value used to start the process you can figure out the values, which means you can decrypt any traffic that uses the algorithm.
  • NIST proposes the use of Dual EC and makes it a requirement for vendors to be included on future work. Don’t support this one? You don’t get to even be considered.
  • Vendors adopt the standard per the requirement but don’t make it the default for some pretty obvious reasons.
  • Netscreen, a part of Juniper, does use Dual EC as part of their default setup.
  • The Chinese APT 5 hacking group figures out the vulnerability and breaks into Juniper to add code to Netscreen’s OS.
  • They use their own seed value, which allows them to decrypt packets being encrypted through the firewall.
  • Hilarity does not ensue and we spend the better part of a decade figuring out what has happened.
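The flaw in the list above (know the seed, know everything) can be shown with a toy generator. A trivial linear congruential generator stands in for Dual EC here; it is not the real algorithm or attack:

```python
# Toy illustration only: a deterministic generator whose seed you know
# is a generator you fully control. This LCG is NOT Dual EC.

class ToyPRNG:
    """Identical seed -> identical output stream."""
    def __init__(self, seed):
        self.state = seed

    def next(self):
        self.state = (self.state * 1103515245 + 12345) % 2**31
        return self.state

victim = ToyPRNG(seed=20121217)            # device generating "random" keys
keystream = [victim.next() for _ in range(5)]

# An attacker who planted (or recovered) the seed runs the same code...
attacker = ToyPRNG(seed=20121217)
predicted = [attacker.next() for _ in range(5)]

assert predicted == keystream              # ...and predicts every output
print("attacker reproduced the full keystream")
```

Dual EC’s twist is that only whoever knows the hidden relationship between its curve points can recover the internal state from the output; in the Netscreen case the attackers swapped in a constant of their own.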

That any of this even came to light is impressive considering the government agencies involved have stonewalled reporters and it took a probe from a US Senator, Ron Wyden, to get as far as we have in the investigation.

Protecting Your Platform

My readers know that I’m a pretty staunch advocate for not weakening encryption. Backdoors and “special” keys for organizations that claim they need them are a horrible idea. The safest lock is one that can’t be bypassed. The best secret is one that no one knows about. Likewise, the best encryption algorithms are ones that can’t be reversed or calculated by anyone other than the people using them to send messages.

I get that the flood of encrypted communications today is making life difficult for law enforcement agencies all over the world. It’s tempting to make it a requirement to allow them a special code that will decrypt messages to keep us safe and secure. That’s the messaging I see every time a politician wants to compel a security company to create a vulnerability in their software just for them. It’s all about being safe.

Once you create that special key you’ve already lost. As we saw above, the intentions of creating a backdoor into an OS so that we could spy on other people using it backfired spectacularly. Once someone else figured out that you could guess the values and decrypt the traffic they set about doing it for themselves. I can only imagine the surprise at the NSA when they realized that someone had changed the values in the OS and that, while they themselves were no longer able to spy with impunity, someone else could be decrypting their communications at that very moment. If you make a key for a lock someone will figure out how to make a copy. It’s that simple.

We focus so much on the responsible use of these backdoors that we miss the bigger picture. Sure, maybe we can make it extremely difficult for someone in law enforcement to get the information needed to access the backdoor in the name of national security. But what about other nations? What about actors not tied to a political process or bound by oversight from the populace? I’m more scared that someone who actively wishes to do me harm could find a way to exploit something that I was told had to be there for my own safety.

The Juniper story gets worse the more we read into it, but they were only the unlucky dancer left without a chair when the music stopped. Any one of the other companies that were compelled to include Dual EC by government order could have drawn the short straw here. It’s one thing to create a known-bad version of software and hope that someone installs it. It’s an entirely different matter to force people to include it. I’m honestly shocked the government didn’t try to mandate that it be used to the exclusion of other algorithms. In some other timeline Cisco or Palo Alto or even Fortinet are having very bad days unwinding what happened.

Tom’s Take

The easiest way to avoid having your software exploited is not to create your own exploit for it. Bugs happen. Strange things occur in development. Even the most powerful algorithms must eventually yield to Moore’s Law or Shor’s Algorithm. Why accelerate the process by cutting a master key? Why weaken yourself on purpose by repeating over and over again that this is “for the greater good”? Remember that the greater good may not include people that want the best for you. If you’re willing to hand them a key to unlock the chaos that we’re seeing in this case then you have overestimated your value to the process and become the very bad actor you hoped to stop.

by networkingnerd at September 03, 2021 06:35 PM Blog (Ivan Pepelnjak)

Video: Introduction to Network Addressing

A friend of mine pointed out this quote by John Shoch when I started preparing the Network Stack Addressing slide deck for my How Networks Really Work webinar:

The name of a resource indicates what we seek, an address indicates where it is, and a route tells us how to get there.

You might wonder when that document was written… it’s from January 1978. They got it absolutely right 43 years ago, and we completely messed it up in the meantime with the crazy idea of making IP addresses resource identifiers.
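Shoch’s triad maps neatly onto two toy lookups (purely illustrative data; the prefix match below is a naive stand-in for a real DNS resolver and FIB):

```python
import ipaddress

# A name says WHAT we seek, an address says WHERE it is,
# and a route says HOW to get there.

names = {"www.example.com": "203.0.113.7"}       # name -> address
routes = {                                       # prefix -> next hop
    "203.0.113.0/24": "192.0.2.1",
    "0.0.0.0/0": "192.0.2.254",                  # default route
}

def resolve(name):
    return names[name]

def lookup_route(addr):
    """Longest-prefix match over the toy routing table."""
    ip = ipaddress.ip_address(addr)
    matches = [p for p in routes if ip in ipaddress.ip_network(p)]
    best = max(matches, key=lambda p: ipaddress.ip_network(p).prefixlen)
    return routes[best]

address = resolve("www.example.com")     # what -> where
next_hop = lookup_route(address)         # where -> how
print(address, "via", next_hop)
```

Collapsing the first lookup into the second (treating the address as the identifier of the resource) is exactly the mess-up the quote warns against.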

September 03, 2021 07:28 AM

XKCD Comics

September 02, 2021 Blog (Ivan Pepelnjak)

Automating NSX-T Firewall Configuration

Noël Boulene decided to automate provisioning of NSX-T distributed firewall rules as part of his Building Network Automation Solutions hands-on work.

What makes his solution even more interesting is the choice of automation tool: instead of using the universal automation hammer (aka Ansible) he used Terraform, a much better choice if you want to automate service provisioning, and you happen to be using vendors that invested time into writing Terraform provisioners.

September 02, 2021 07:14 AM

September 01, 2021 Blog (Ivan Pepelnjak)

netsim-tools: Python Package and Unified CLI

One of the major challenges of using netsim-tools was the installation process – pull the code from GitHub, install the prerequisites, set up search paths… I knew how to fix it (turn the whole thing into a Python package) but I was always too busy to open that enormous can of worms.

That omission got fixed in summer 2021; netsim-tools is now available on PyPI and installed with pip3 install netsim-tools.

September 01, 2021 06:07 AM

XKCD Comics

August 31, 2021

Ethan Banks on Technology

The Best Technologists First Try To Solve Their Own Problems

Every once in a while, I get questions from random internet folks who want me to do their homework for them. They want me to provide them with detailed technical information, solve their complex design problem, or curate content on a difficult topic so that they don’t have to do the sifting.

While I like to help folks out as much as anyone (and often do), I usually ignore these sorts of questions. Why? Partly, I don’t have enough time to fix the internet. Partly, I like to get paid for consulting. But more importantly, the best technologists first try to solve their own problems.

A Manager’s Perspective

When interviewing candidates for technical positions, one of my questions is, “If you run into a problem you’ve never faced before, how do you solve it?” There are two typical answers.

  1. “I’ll ask someone else for help. Probably you.”
  2. “I’ll search the internet, company wiki, and product documentation. I’ll set up a lab. If I’m still stuck, I’ll ask for help.”

I prefer to hire a person who first tries to figure things out. While I want neither a cowboy nor science experiments making their way into production, I do want a motivated individual who will research difficult technical challenges and grow as a result. As that person grows stronger, their team grows stronger as well.

It’s About The Team

Remember that while managers manage individuals, they also manage teams. Hiring decisions are based partly on how well a candidate will fit in with the established team. I view unmotivated technologists as a red flag for team dynamics.

You might feel that if you worked for me, you’d never be allowed to ask a question. That’s not the case. There’s no shame in asking for help at the appropriate time. Technology is hard, and the problems one faces change over time–domain-specific knowledge ages out.

Sometimes a situation is urgent, and you won’t have time to figure out for yourself why { the network is down | the server is offline | the CEO can’t login to the VPN }. All technologists need help to solve problems at certain times. Never asking for help can be just as bad as constantly nagging teammates. However, there’s a big difference between immediately leaning on others and being self-sufficient whenever possible.

When you ask another person for help without first trying to help yourself, you have added to that person’s workload. You’re cutting into the time they have to get their own work done. Instead of contributing to the team, you’re a drag on team performance. When you make no effort to find your own answers, you weaken your team.

But It’s Also About Yourself

You want to be self-sufficient when you can. You’ll both learn & understand more. Self-sufficiency leads to technology mastery. Technology mastery leads to career opportunities. Career opportunities can transform your life.

by Ethan Banks at August 31, 2021 03:38 PM

August 30, 2021 Blog (Ivan Pepelnjak)

Worth Reading: A Historical Perspective On The Usage Of IP Version 9

As early as 1994 (on April 1st, to be precise) a satire disguised as an Informational RFC was published describing the deployment of IPv9 in a parallel universe.

Any similarity with a protocol that started as a second-system academic idea and is still experiencing hiccups in the real world even though it could order its own beer in the US is purely coincidental.

August 30, 2021 06:52 AM

Potaroo blog

TLS with a side of DANE

Am I really talking to you? In a networked world that’s an important question.

August 30, 2021 02:00 AM

XKCD Comics

August 27, 2021

The Networking Nerd

Sharing Failure as a Learning Model


Earlier this week there was a great tweet from my friends over at Juniper Networks about mistakes we’ve made in networking:


It got some interactions with the community, which is always nice, but it got me to thinking about how we solve problems and learn from our mistakes. I feel that we’ve reached a point where we’re learning from the things we’ve screwed up but we’re not passing it along like we used to.

Write It Down For the Future

Part of the reason why I started my blog was to capture ideas that had been floating in my head for a while. Troubleshooting steps or perhaps even ideas that I wanted to make sure I didn’t forget down the line. All of it was important to capture for the sake of posterity. After all, if you didn’t write it down did it even happen?

Along the way I found that the posts that got significant traction on my site were the ones that involved mistakes. Something I’d done that caused an issue or something I needed to look up through a lot of sources that I distilled down into an easy reference. These kinds of posts are the ones that fly right up to the top of the Google search results. They are how people know you. It could be a terminology post like defining trunks. Or perhaps it’s a question about why your SFP modules are working in a switch.

Once I realized that people loved finding posts that solved problems I made sure to write more of them down. If I found a weird error message I made sure to figure out what it was and then put it up for everyone to find. When I documented weird behaviors of BPDUGuard and BPDUFilter that didn’t match the documentation I wrote it all down, including how I’d made a mistake in the way that I interpreted things. It was just part of the experience for me. Documenting my failures and my learning process could help someone in the future. My hope was that someone in the future would find my post and learn from it like I had.

Chit Chat Channels

It used to be that when you Googled error messages you got lots of results from forum sites or Reddit or other blogs detailing what went wrong and how you fixed it. I assume that is because, just like me, people were doing their research and figuring out what went wrong and then documenting the process. Today I feel like a lot of that type of conversation is missing. I know it can’t have gone away permanently because all networking engineers make mistakes and solve problems, and someone has to know where that knowledge went, right?

The answer came to me when I read a Reddit post about networking message boards. The suggestions in the comments weren’t about places to go to learn more. Instead, they linked to Slack channels or Discord servers where people talk about networking. That answer made me realize why the discourse around problem solving and learning from mistakes seems to have vanished.

Slack and Discord are great tools for communication. They’re also very private. I’m not talking about gatekeeping or restrictions on joining. I’m talking about the fact that the conversations that happen there don’t get posted anywhere else. You can join, ask about a problem, get advice, try it, see it fail, try something else, and succeed all without ever documenting a thing. Once you solve the problem you don’t have a paper trail of all the things you tried that didn’t work. You just have the best solution that you did and that’s that.

You know what you can’t do with Slack and Discord? Search them through Google. The logs are private. The free tiers remove messages after a fashion. All that knowledge disappears into thin air. Unlike the Wisdom of the Ancients, the issues we solve in Slack are gone as soon as you hit your message limit. No one learns from the mistakes because it looks like no one has made them before.

Going the Extra Mile

I’m not advocating for removing Slack and Discord from our daily conversations. Instead, I’m proposing that when we do solve a hard problem or we make a mistake that others might learn from we say something about it somewhere that people can find it. It could be a blog post or a Reddit thread or some kind of indexable site somewhere.

Even the process of taking what you’ve done and consolidating it down into something that makes sense can be helpful. I saw X, tried Y and Z, and ended up doing B because it worked the best of all. Just the process of how you got to B through the other things that didn’t work will go a long way to help others. Yes, it can be a bit humbling and embarrassing to publish something that admits you made a mistake. But it’s also part of the way that we learn as humans. If others can see where we went and understand why that path doesn’t lead to a solution then we’ve effectively taught others too.

Tom’s Take

It may be a bit self-serving for me to say that more people need to be blogging about solutions and problems and such, but I feel that we don’t really learn from it unless we internalize it. That means figuring it out and writing it down. Whether it’s a discussion on a podcast or a back-and-forth conversation in Discord, we need to find ways to get the words out into the world so that others can build on what we’ve accomplished. Google can’t search archives that aren’t on the web. If we want to leave a legacy for the DenverCoder10s of the future, that means we do the work now of sharing our failures as well as our successes and letting the next generation learn from us.

by networkingnerd at August 27, 2021 07:09 PM

XKCD Comics

August 25, 2021

Ethan Banks on Technology

How IT Pros Learn Online In 2021

I surveyed 53 IT professionals about online IT training in August 2021. Most of the folks I interact with are networking & cloud infrastructure professionals, and the answers reflect that. 53 responses isn’t enough to draw hard and fast conclusions from, but I still believe there are interesting trends & individual comments worth thinking about.

By the way, if you’d like to submit your own responses, I left the survey open. I told Google Forms to not collect email addresses, so your responses are anonymous.

1. Which online learning sites do you have a subscription to or have bought an IT course from?

  1. Udemy – 32
  2. Pluralsight – 24
  3. INE – 19
  4. A Cloud Guru – 16
  5. CBT Nuggets – 12
  6. Coursera – 9
  7. O’Reilly / Safari – 7
  8. ITProTV – 4
  9. LinkedIn Learning / Lynda – 3
  10. Juniper Learning Portal – 2
  11. Pearson – 2
  12. Skillshare – 2
  13. Adrian Cantrill – 1
  14. Cisco Learning Network – 1
  15. Global Knowledge – 1
  16. Ivan Pepelnjak – 1
  17. KBITS – 1
  18. Kirk Byers – 1
  19. Routehub – 1
  20. Skillsoft – 1
  21. TalkPython – 1
  22. Teachable – 1
  23. YouTube – 1

I believe Udemy is so popular because it’s a great platform to discover courses and instructors, and buy what you like a la carte. No subscription is required for students to use Udemy. But Udemy itself is just a marketplace–a platform that does well delivering instructional material, but not creating it. When buying a course through Udemy, your experience will vary as instructor quality varies.

Sites like CBT Nuggets, INE, and Pluralsight offer their material via potentially pricey subscriptions, but the content libraries are large, deeply technical, and taught by experienced experts. You want what they are teaching, and you’re willing to pay for access to that knowledge–or you aren’t.

Large media companies from the world of books aren’t attracting as many students as they once did, despite well-recognized brands. Pearson and O’Reilly stand out to me as struggling to pivot successfully into video training.

Independent creators not tied to a larger brand or content library have fans, too. Adrian Cantrill, Ivan Pepelnjak, KBITS, Kirk Byers, Routehub, and TalkPython are all independent folks. That should encourage instructors who want to build their own platform and deliver exactly the experience they want their students to have.

2. Subscription vs. single purchase. Which would you rather?

  • 57.7% – Pay a monthly subscription and have access to everything.
  • 26.9% – Buy one specific course or bundle at a time.
  • 15.4% – It depends.

The folks in the “it depends” camp offered the following to qualify their choice.

  • Depends heavily on pricing. If my learning goals are specific to one area or course, then a subscription may not hold value or my employer may not want to fund it.
  • Yearly fee, especially if employer pays.
  • Ideally I want to be able to go access/redo any course whenever I like, so that tends toward buying one specific thing. But subscription-based services typically give you access to an entire pool of courses. So, tough to say if can only do one.
  • I do both where it makes sense.
  • It depends (as usual)… If the platform has enough peripheral content and a subscription is affordable, then I prefer subscriptions. I have a subscription to Pluralsight for this reason. Other platforms want too much money for limited content that is not updated frequently; when there is one trainer or course there that I want, I would prefer to be able to pay “a la carte”.
  • Depends on the platform offering.

3. Who pays for this training?

  1. 42.3% – I do.
  2. 26.9% – My employer does.
  3. 30.8% – A mix…it depends on the material or provider.

The issue represented here is, I believe, one of price sensitivity. Self-funded learners tend to be more price-conscious, while employer-funded learners are less so. Corporations see dollar costs in a different way than individuals do. This is a conundrum for independent instructors who want to maximize their income while providing affordable training to folks on a budget trying to keep up with their career demands.

4. Why do you train?

I asked folks to choose the one most significant reason that they train.

  1. 41.5% – Lifelong learner! (Because I am intellectually curious.)
  2. 22.6% – Money! (For career opportunities.)
  3. 20.8% – Certs! (To pass certification exams.)
  4. 15.1% – Some combination of the three.

The response here surprised me, as I expected the majority of folks to be focused on certification. But as I’ve considered this, I’ve realized that IT certifications, while still important to the industry, aren’t the meal ticket they once were. Cert exams have lost integrity due to braindumping. Vendor cert programs have become more vendor-specific and less broadly applicable. Both hiring managers and IT professionals are increasingly skeptical about certs as a result.

Many VARs are obligated to care about certifications, because their partnerships with vendors depend on employing certified folks. Non-VAR companies might require a cert to reduce the applicant count a hiring manager has to consider. Screening in this manner is a bad practice, as experienced, valuable candidates get filtered out by an algorithm, but it remains common at some organizations.

Therefore, I can’t say certs don’t have value (especially for junior roles), but I believe the career role certs play continues to change. I see certs & their learning blueprints as a valuable learning path. The credential itself? Not so much without a specific reason to have it…or maybe for the bragging rights. 😉

5. What do you feel is most important for you to learn over the next 1-3 years? Optionally, tell me why.

This was left as an open-ended question rather than multiple choice. I was curious to hear the answers without poisoning the well with my own selection of multiple-choice options. Here is a summary of the responses, in no particular order.

  • Big 3 public clouds – AWS, Azure, and GCP
  • Networking vendor platforms, products, and certs – Arista, Cisco (ACI, DevNet, wireless, CCDE, CCNP), Dell, Itential, Juniper (including Apstra), Nokia, VMware NSX
  • Open source networking & whitebox switching
  • Networking fundamentals – routing, switching, BGP
  • Containers & container-based infrastructure
  • Kubernetes
  • Python programming
  • Developer-related skills – REST APIs, JSON, GitHub, CI/CD pipelines
  • Infrastructure-as-Code (IaC)
  • Orchestration & automation tools – Ansible, Terraform, Nautobot, Nornir
  • Open source visibility tools – Prometheus, Grafana, ELK stack, TIG stack
  • “Soft” skills – interpersonal relationships, effective communications, work/life balance, time management

As I reviewed the responses, the most repeated themes were AWS, Azure, IaC, GitHub, Python programming, and Ansible. This echoes discussions we’ve been recording in our networking and cloud engineering podcasts at Packet Pushers, and underscores at least two things that are happening in IT organizations.

  1. Public cloud adoption continues to grow. There’s no surprise in that. If there is a nuance worth bringing up, it’s that public cloud is not resulting in the demise of on-premises computing. Instead, public cloud has introduced a new, complex skillset that IT professionals must gain alongside their so-called legacy knowledge.
  2. Infrastructure provisioning and information gathering is becoming automated. IT engineers are treating infrastructure as code, not merely because it’s interesting, but because it’s necessary. Organizations are less and less tolerant of provisioning cycles that take weeks or months. That puts automation, IaC, and the related tooling and skills in the spotlight.
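Treating infrastructure as code boils down to declaring the state you want and letting tooling reconcile reality to match. Here’s a toy sketch of that declare-then-reconcile idea in Python (the VLAN names are made up, and this models the pattern, not any particular tool):

```python
def plan(desired, actual):
    """Toy IaC-style plan: compute what to create, change, or delete so
    that 'actual' comes to match 'desired'. Tools like Terraform follow
    this same declare-then-reconcile pattern at much larger scale."""
    return {
        "create": {k: v for k, v in desired.items() if k not in actual},
        "change": {k: v for k, v in desired.items()
                   if k in actual and actual[k] != v},
        "delete": [k for k in actual if k not in desired],
    }

# Hypothetical example: desired state from a Git repo vs. what's deployed.
desired = {"vlan10": "users", "vlan20": "voice"}
actual = {"vlan10": "users", "vlan30": "legacy"}
print(plan(desired, actual))
# {'create': {'vlan20': 'voice'}, 'change': {}, 'delete': ['vlan30']}
```

The point of the exercise: once state is declared as data, provisioning stops being a ticket queue and becomes a diff to apply.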

Demand for skills like Python and Ansible shows an appetite among IT organizations to roll their own automation. To me, that means network automation is still in its infancy. Many of us who have begun automating repetitive tasks realize that one-off scripts don’t scale up to a system teams can use jointly without a tremendous amount of planning and execution, plus a dedicated human.
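For a concrete (and entirely hypothetical) example of the kind of one-off script that works fine for its author but doesn’t scale to a team: it renders config snippets from a table hard-coded in the script itself, with no shared source of truth, no input validation, and no record of what was pushed.

```python
# Hypothetical one-off automation script: generate interface description
# stanzas from a hard-coded link table. Handy for one engineer today; a
# team system would need a shared source of truth, validation, change
# logging, and error handling layered on top of this.

LINKS = [
    ("Ethernet1", "core1", "Ethernet49"),
    ("Ethernet2", "core2", "Ethernet49"),
]

def render_descriptions(links):
    """Render interface/description stanzas from (local, peer, peer_port) rows."""
    lines = []
    for local, peer, peer_port in links:
        lines.append(f"interface {local}")
        lines.append(f" description to {peer} {peer_port}")
    return "\n".join(lines)

print(render_descriptions(LINKS))
```

Useful? Absolutely. A system a whole team can safely share? Not yet, and that gap is where the planning and the dedicated human come in.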

I anticipate that the popularity of network orchestration and automation platforms will grow over time as IT shops outgrow their early attempts with scripting and playbooks. There are many robust entrants in this space, including Itential, Gluware, Anuta, Apstra, NetYCE, and Pliant, plus several more that aren’t leaping to mind. But for now, open source DIY tools provide a quick aspirin for some headache relief.

6. Name things you love or hate the most about your online IT training experience.

This was another open-ended question. I’ve shared the most actionable and enlightening responses, most of which were complaints rather than kudos.

What folks loved:
  • I love when there is a lab or simulator to do the commands / API calls.
  • Labs & exercises – because it is easy to read/listen to info but being forced to do exercises cements the knowledge.
  • I love clear, deep-dive examples that enlighten.
  • Generally like the depth that some trainers will go to just to explain technologies.

What folks didn’t love:
  • Few offer well-organized content & most of them lack real world examples.
  • Poor visual slides/diagrams. No step-by-step workbook.
  • Don’t fill up all the space with words. Some trainers think, “Speaking is training, so the more I speak the better I’m training!”
  • Many courses have tests that are difficult or impossible to skip.
  • No contact with other participants.
  • Hard to make them social learning or group learning. On most of the platforms it is a solo effort. For some topics that is fine, but for others it’s not the best way to learn, compared with a team of people working on a real problem in your environment.
  • Searching could be better when looking for a specific technology to learn.
  • High cost – some platforms charge assuming companies are paying. When that’s not an option, it locks you out.
  • Lack of depth into the details that make things actually usable, or starting off with too many assumptions about existing knowledge, making it difficult to get started.
  • There is a giant cliff between the “100k foot view” Step 1 and the “Deep in the weeds” step 2. Most training is written for people who already understand the material. Few trainers think about or remember HOW they learned the material in the first place. They’ll forget to define acronyms and what they mean, or place the technology into context when introduced. Since this info is cumulative in the training, I often quit very early on when they move beyond the absolute basics because they haven’t described things smoothly enough.

IT learners want more than lecture–they also want labs & hands-on examples. Learners also seek clear context about what they are learning. I’m reminded of the joke about how to draw an owl. Step one: draw these two circles. Step two: draw the rest of the owl. Training courses that assume too much about where the learner is starting from risk losing them by ramping up too quickly.

The issue of online learning with others is a sticky one. An instructor who facilitates online group learning via Slack, Discord, or a forum should moderate the discussions to keep them free from spam and badly behaved participants. That’s a tradeoff, as moderation and interaction take time that could otherwise be used to create more training material.

7. I wish there were (better) courses available about…

This open-ended question was a follow-up to question 5. I wanted to see what cutting-edge training material folks were looking for, but not finding to their satisfaction. I selected the responses that stuck out to me the most.

  • Open source networking & how hyperscalers design their network
  • Advanced Free Range Routing (FRR)
  • AWS Cloud Development Kit (CDK)
  • Practical, real-world network automation–not just another Ansible course
  • Data modeling as related to IaC
  • OpenID/OAuth2 for infrastructure engineers
  • Git + CI/CD from the ground up, no prerequisite knowledge required.

These topics expose a few interesting problems in the networking training space in particular.

Some topics have low demand, so trainers looking to sell as many copies of a course as possible will be less interested in developing course material for them. Open source networking, including FRR, falls into this category.

This becomes a chicken-and-egg problem. Perhaps a company wants to adopt open source networking, but decides against it because there’s not enough decent training material. But there’s not enough decent training material because there aren’t enough adopters to make it compelling for an instructor to build the material.

This situation places the burden on OSS project maintainers to provide outstanding documentation, which some do. But unless there are enough people to spread the work around, OSS project leaders might have to choose between good docs and new features. I’ve talked with enough OSS folks to know that they’d rather be working on features if they have to make a choice. Almost every OSS leader I’ve interviewed has called for more people to contribute to their projects as documentation writers.

What’s a potential instructor to do? I think the answer comes in differentiation. An instructor teaching about a niche topic trades short-term sales volume for the benefit of establishing themselves as an expert in a hopefully growing field. Being first to market has advantages that can pay off in the long-term.

Other topics mentioned relate to the challenge infrastructure folks face transitioning to IaC. For instance, API authentication is a big topic for infrastructure folks, and OAuth2 is a common roadblock for the uninitiated. It’s a miserable protocol to get your head around when you’re used to old-school user/pass credentials or simple tokens. Git is a great mystery at first, and GitHub even more intimidating, as you don’t want to get it wrong when collaborating. How to apply CI/CD to infrastructure isn’t always obvious, and can at first feel like busy work without an obvious benefit.
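To illustrate the jump (the endpoint and field names below are hypothetical, not from any specific vendor): with old-school credentials the client builds one static header, while OAuth2 adds a whole token-fetching dance before the first real API call.

```python
import base64

def basic_auth_header(user, password):
    """Old-school HTTP Basic auth: one static header, built once."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def bearer_auth_header(access_token):
    """OAuth2-style APIs expect a short-lived bearer token instead."""
    return f"Bearer {access_token}"

# With OAuth2 (client-credentials grant), the token itself must first be
# obtained from an authorization server, then refreshed when it expires.
# Roughly (URL and fields illustrative):
#   POST https://auth.example.com/token
#        grant_type=client_credentials&client_id=...&client_secret=...
#   -> {"access_token": "...", "expires_in": 3600}

print(basic_auth_header("admin", "secret"))
print(bearer_auth_header("eyJhbGciOi..."))
```

That extra moving part – a separate token service, expiry, and refresh – is exactly what trips up engineers coming from simple user/pass workflows.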

IaC is nascent. The tools are evolving rapidly. The techniques keep changing. Not that many people have a good handle on how to do it well. That means there’s just not much training material, because there’s no paved path strewn with roses we can all follow on our way to programmatic nirvana. There’s room not only for training on how to use a tool, but also for a philosophy of tool usage–the design and architecture of IaC systems management.

8. Anything else you want to share with me about online IT training?

This was my closing survey question, meant as a catch-all to gather whatever else was on folks’ minds that they were willing to share. A few especially noteworthy responses grabbed my attention. As they were diverse and harder to generalize about, I’ll analyze each in turn.

  • Some of the YouTube and podcast-based learning I’ve gone through that has a Slack/Discord/etc. where I can ask questions and clarify has been way better than $500 courses of watching videos with no check or resource if something is confusing.

While I’ve pointed out that moderation of group chats is a possible concern, I can’t deny the value of a group discussion. For the instructor, there’s the opportunity to learn where students are being tripped up and improve their content. There’s also the benefit of fellow students helping each other out, so it’s not always a unicast between instructor and student.

  • Creating blogs and videos as I learn helps me both share and solidify my understanding.

In my opinion, there is no greater way to learn a complex topic than to write about it well enough that someone else can understand it.

  • At times I feel overwhelmed with learning in networking and feel the burnout hard. Anyone else feel the learning burnout?

Yes. I feel it, and especially sympathize with anyone coming into the networking field fresh. The fundamentals are hard enough to learn, let alone all the overlays, policy controllers, automation tools, and proprietary vendor magic we’ve bolted onto the side. Add the unusual guardrails cloud service providers put around your packets, and learning burnout is almost the only possible outcome for a network engineer.

  • It needs to be to the point, less rambling and more actual knowledge sharing.

This point about “getting to the point” came up often enough throughout the survey that I felt it was worth including. Some instructors are great talkers, but have no economy of words. For many students, more isn’t better.

  • Keeping material fresh with the latest product models and software release versions is a key value proposition for training material (not every training video is good for all time).

This is another point that popped up throughout the survey, and it’s a tough one for instructors to hear. Much technical training material needs to be updated regularly, or it slowly loses relevance as the industry changes. For me, the takeaway is to keep modules short and tightly focused. It’s easier to update a 3 minute video than a 30 minute lecture.

by Ethan Banks at August 25, 2021 09:32 PM

Packet Pushers

How To Leave Work At 5 PM: Visibility, Event Management, And Automation

Opmantek’s network management platform provides network visibility, flexible event management, and powerful automation. The software streamlines workflows and lets network engineers and operators accomplish more work with fewer distractions, allowing them to go home on time.

The post How To Leave Work At 5 PM: Visibility, Event Management, And Automation appeared first on Packet Pushers.

by Sponsored Blog Posts at August 25, 2021 06:35 PM

Low Code Network Automation That Works Out Of The Box – A Packet Pushers Livestream Event

Learn how Gluware's low code network automation software delivers automation wins out of the box, and helps you grow into infrastructure as code. Join us for a sponsored Livestream event with the Packet Pushers on September 28th.

The post Low Code Network Automation That Works Out Of The Box – A Packet Pushers Livestream Event appeared first on Packet Pushers.

by Drew Conry-Murray at August 25, 2021 03:22 PM

XKCD Comics

August 23, 2021

Packet Pushers

The Difference Between IaC & Config Management with Kris Nóva – Video

In this video clip from the Day Two Cloud podcast, Kris Nóva shares her take on how treating infrastructure-as-code (IaC) is different from configuration management. Check out the full podcast episode here. You can subscribe to the Packet Pushers’ YouTube channel for more videos as they are published. It’s a diverse mix of content […]

The post The Difference Between IaC & Config Management with Kris Nóva – Video appeared first on Packet Pushers.

by The Video Delivery at August 23, 2021 04:00 PM

Marvell Acquires ASIC Maker Innovium For $1.1 Billion – Video

Chipmaker Marvell will acquire ASIC maker Innovium for $1.1 billion in an all-stock transaction. Innovium’s Teralynx ASICs, which range from 2 to 25.6 Tbps of throughput, compete with Broadcom and Intel/Barefoot in the Ethernet switching market. Link: Marvell to Acquire Innovium – Accelerates Cloud Growth with Expanded Ethernet Switching Portfolio – PR Newswire Listen to […]

The post Marvell Acquires ASIC Maker Innovium For $1.1 Billion – Video appeared first on Packet Pushers.

by The Video Delivery at August 23, 2021 01:38 PM

Blog (Ivan Pepelnjak)

Worth Reading: Simplifying Networks

Justin Pietsch wrote another fantastic blog post, this time describing how they simplified Amazon’s internal network, got rid of large-scale VLANs and multi-NIC hosts, moved load balancing functionality into a proxy layer managed by application teams, and finally introduced merchant silicon routers.

August 23, 2021 06:48 AM

XKCD Comics

August 20, 2021

The Networking Nerd

The Mystery of Known Issues


I’ve spent the better part of the last month fighting a transient issue with my home ISP. I thought I had it figured out after a hardware failure at the connection point, but it crept back up after I got back from my Philmont trip. I spent a lot of energy upgrading my home equipment firmware and charting the seemingly random timing of the issue. I also called the technical support line and carefully explained what I was seeing and what had been done to work on the problem already.

The responses usually ranged from confused reactions to attempts to reset my cable modem, which never worked. It took several phone calls and lots of repeated explanations before I finally got a different answer from a technician. It turns out there was a known issue with the modem hardware! It’s something they’ve been working on for a few weeks and they’re not entirely sure what the ultimate fix is going to be. So for now I’m going to have to endure the daily resets. But at least I know I’m not going crazy!

Issues for Days

Known issues are a way of life in technology. If you’ve worked with any system for any length of time, you’ve seen the list of things that aren’t working or have weird interactions with other things. Given how many interdependent systems we interact with these days, it’s no wonder those known issue lists are miles long by now.

Whether it’s a bug or an advisory or a listing of an incompatibility on a site, the nature of all known issues is the same. They are things that don’t work that we can’t fix yet. They could be on a list of issues to resolve or something that may never be able to be fixed. The key is that we know all about them so we can plan around them. Maybe it’s something like a bug in a floating point unit that causes long division calculations to be inaccurate to a certain number of decimal places. If you know what the issue is you know how to either plan around it or use something different. Maybe you don’t calculate to that level of precision. Maybe you do that on a different system with another chip. Whatever the case, you need to know about the issue before you can work around it.
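That “work around it or use something different” advice can be sketched with a different but analogous precision pitfall: binary floating point can’t represent some decimal values exactly, so if the known issue in your system is numeric precision, you might route those calculations through a decimal library instead (illustrative only – this is not the FPU bug mentioned above or its fix):

```python
from decimal import Decimal

# Known issue (here, a documented property of binary floats):
# 0.1 + 0.2 is not exactly 0.3 in IEEE 754 arithmetic.
binary_ok = (0.1 + 0.2 == 0.3)

# Workaround: do the calculation on a different "system" -- the decimal
# module, which represents these decimal values exactly.
decimal_ok = (Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))

print(binary_ok, decimal_ok)
# False True
```

Either way, the workaround is only possible because the issue is known: you can’t route around a problem nobody told you about.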

Not all known issues are publicly known. They could involve sensitive information about a system. Perhaps the issue itself is a potential security risk. Most advisories about remote exploits are known issues internally at companies before they are patched. While they aren’t immediately disclosed, they are eventually found out when the patch is released or when someone outside the company discovers the same issue. Putting these kinds of things under an embargo of sorts isn’t always bad if it protects from a wider potential to exploit them. However, the truth must eventually come out or things can’t get resolved.

Knowing the Unknown

What happens when the reasons for not disclosing known problems are less than noble? What if the reasoning behind hiding an issue has more to do with covering up bad decision making or saving face or even keeping investors or customers from fleeing? Welcome to the dark side of disclosure.

When I worked for Gateway 2000 back in the early part of the millennium, we had a particularly nasty known issue in the system. The ultimate root cause was that the capacitors on a series of motherboards were made with poor quality controls or bad components and would swell and eventually explode, causing the system to come to a halt. The symptoms manifested themselves in all manner of strange ways, like race conditions or random software errors. We would sometimes spend hours troubleshooting an unrelated issue only to find out the motherboard was affected with “bad caps”.

The issue was well documented in the tech support database for the affected boards. Once we could determine that it was a capacitor issue it was very easy to get the parts replaced. Getting to that point was the trick, though. Because at the top of the article describing the problem was a big, bold statement:

Do Not Tell The Customer This Is A Known Issue!!!

What? I can’t tell them that their system has an issue that we need to correct before everything pops and shuts it down for good? I can’t even tell them what to look for specifically when we open the case? Have you ever tried to tell a 75-year-old grandmother to look for something “strange” in a computer case? You get all kinds of fun answers!

We ended up getting creative in finding ways to look for those issues and getting them replaced where we could. When I moved on to my next job working for a VAR, I found out some of those same machines had been sold to a customer. I opened the case and found bad capacitors right away. I told my manager, explained the issue, and we started getting boards replaced under warranty as soon as the first sign of problems appeared. After the warranty expired, we kept ordering good boards from suppliers until we were able to retire all of those bad machines. If I hadn’t known about the bad cap issue from my help desk time, I never would have known what to look for.

Known issues like these are exactly the kind of thing you need to tell your customers about. It’s something that impacts their computer. It needs to be fixed. Maybe the company didn’t want to have to replace thousands of boards at once. Maybe they didn’t want to admit they cut corners when buying the parts, and now the money they saved is going to haunt them in increased support costs. Whatever the reason, it’s not the fault of the customer that the issue is present. They should have the option to get things fixed properly. Hiding what has happened is only going to strain the relationship between consumer and provider.

Which brings me back to my issue from above. Maybe it wasn’t “known” when I called the first time. But by the third or fourth time I called about the same thing they should have been able to tell me it’s a known problem with this specific behavior and that a fix is coming soon. The solution wasn’t to keep using the first-tier support fixes of resets or transfers to another department. I would have appreciated knowing it was an issue so I didn’t have to spend as much time upgrading and isolating and documenting the hell out of everything just to exclude other issues. After all, my troubleshooting skills haven’t disappeared completely!

Vendors and providers, if you have a known issue you should admit it. Be up front. Honesty will get you far in this world. Tell everyone there’s a problem and you’re working on a fix that you don’t have just yet. It may not make the customer happy at first, but they’ll understand a lot more than if you hide it for days or weeks while you scramble to fix it without telling anyone. If that customer has more than a basic level of knowledge about systems, they’ll probably figure it out anyway, and then you’re going to be the one with egg on your face when they tell you all about the problem you don’t want to admit you have.

Tom’s Take

I’ve been on both sides of this fence in a number of situations. Do we admit we have a known problem and try to get it fixed? Or do we get creative and try to hide it so we don’t have to own up to the uncomfortable questions that get asked about bad development or cutting corners? The answer should always be to own up to things. Make everyone aware of what’s going on and make it right. I’d rather deal with an honest company working hard to make things better than a dishonest vendor that miraculously manages to fix things out of nowhere. An ounce of honesty prevents a pound of bad reputation.

by networkingnerd at August 20, 2021 06:40 PM

XKCD Comics

August 19, 2021

Potaroo blog

Running Code

There was a discussion in a working group session at the recent IETF 111 meeting over a proposal that the working group should require at least two implementations of a draft before the working group would consider the document ready. What's going on here?

August 19, 2021 09:00 PM

August 18, 2021

XKCD Comics

August 17, 2021

My Etherealmind

Musing: Does 802.1x Matter Anymore ?

802.1X has less and less relevance over time. As distributed work becomes widespread, laptops and smartphones will not connect to campus or branch networks operated by IT departments. Home broadband, cafes, kids’ schools, 4G/5G, etc. Each user device will have some software mechanism/agent/method to access resources in SaaS, on/off-prem DC/cloud, and so on. There is a market […]

by Greg Ferro at August 17, 2021 12:48 PM

August 16, 2021

Blog (Ivan Pepelnjak)

MUST Read: Operational Security Considerations for IPv6 Networks (RFC 9099)

After almost a decade of bickering and haggling (trust me, I got my scars to prove how the consensus building works), the authors of Operational Security Considerations for IPv6 Networks (many of them dear old friends I haven’t seen for way too long) finally managed to turn a brilliant document into an Informational RFC.

Regardless of whether you’ve already implemented IPv6 in your network or believe it will never be production-ready (alongside other crazy stuff like vaccines), I’d consider this RFC mandatory reading.

August 16, 2021 07:39 AM

XKCD Comics

August 13, 2021

The Networking Nerd

Slow and Steady and Complete


I was saddened to learn last week that one of my former coworkers passed away unexpectedly. Duane Mersman started at the same time I did at United Systems and we both spent most of our time in the engineering area working on projects. We worked together on so many things that I honestly couldn’t keep count of them if I tried. He’s going to be missed by so many people.

A Hare’s Breadth

Duane was, in many ways, my polar opposite at work. I was the hard-charging young buck that wanted to learn everything there was to know about stuff in about a week and just get my hands dirty trying to break it and learn from my mistakes. If you needed someone to install a phone system next week with zero formal training or learn how iSCSI was supposed to operate based on notes sketched on the back of a cocktail napkin I was your nerd. That meant we could often get things running quickly. It also meant I spent a lot of time trying to figure out why things weren’t working. I left quite a few forehead-shaped dents in data center walls.

Duane was not any of those things. He was deliberate and methodical. He spent so much time researching technology that he knew it backwards and forwards and inside out. He documented everything he did while he was working on it instead of going back after the fact to scribble down some awkward prose from his notes. He triple checked all his settings before he ever implemented them. Duane wouldn’t do anything until he was absolutely sure it was going to work. And even then he checked it again just to be sure.

I used to joke that we were two sides of the same coin. You sent me in to clean things up. Then you sent Duane in to clean up after me. I got in and out quickly but I wasn’t always the most deliberate. Duane would get in behind me and spend time making sure whatever I did was the right way. I honestly felt more comfortable knowing he would ensure whatever I did wasn’t going to break next week.

Turtle Soup

Management knew how to use us both effectively. When the customer was screaming and needed it done right now I was the guy. When you wanted things documented in triplicate Duane was the right man for the job. I can remember him working on a network discovery diagram for a medical client that was so detailed that we ended up framing it as a work of art for the customer. It was something that he was so proud of given the months that he toiled away on it.

In your organization you need to recognize the way that people work and use them effectively. If you have an engineer that just can’t be rushed no matter what you need to find projects for them to work on that can take time to work out correctly. You can’t rush people if they don’t work well that way. Duane had many gears but all of them needed to fit his need to complete every part of every aspect of the project. Likewise, hard chargers like me need to be able to get in and get things done with a minimum of distraction.

Think of it somewhat like an episode of The Witcher. You need a person to get the monsters taken care of, but you also need someone to chronicle what happened. Duane was my bard. He documented what we did and made sure that future generations would remember it. He even made sure that I would remember the things we did later when someone asked a question about it or I started blaming the idiot that programmed it (spoiler alert: I was that idiot).

Lastly, Duane taught me the value of being a patient teacher. When he was studying to take his CCNP exams, he spent a significant amount of time on the SWITCH exam learning the various states of spanning tree. I breezed through it because it mostly made sense to me. When he went through it, he labbed up every example and investigated all the aspects of the settings. He would ask me questions about why something behaved the way it did or how a setting could mess things up. As he asked me what I thought, I tried to explain how I saw it. My explanations created more questions. But those questions helped me investigate why things worked the way they did. His need to know all about the protocol made me understand it at a more fundamental level than just passing an exam. He slowed me down and made sure I didn’t miss anything.

Tom’s Take

Duane was as much a mentor in my career as anyone. We learned from each other and we made sure to check each other’s work. He taught me that slow and steady is just as important as getting things done at warp speed. His need to triple check everything led me to do the same in the CCIE lab and is probably the reason why I eventually passed. His documentation and diagrams taught me to pay attention to the details. In the end he helped me become who I am today. Treasure the people you work with that take the time to do things right. It may take them a little longer than you’d like but in the end you’ll be happier knowing that they are there to make sure.

by networkingnerd at August 13, 2021 02:46 PM