Cisco’s big announcement this week ahead of Cisco Live was their new Intent-based Networking push. This new portfolio does include new switching platforms in the form of the Catalyst 9000 series, but the majority of the innovation is coming in the software layer. Articles released so far tout the ability of the network to sense context, provide additional security based on advanced heuristics, and more. But the one thing that seems to be getting little publicity is the way you’re going to be paying for software going forward.
Cisco licensing has always been an all-or-nothing affair for the most part. You buy a switch and you have two options – basic L2 switching or everything the switch supports. Routers are similar. Through the early 15.x releases, Cisco routers could be loaded with an advanced image that ran every service imaginable. Those early 15.x releases gave us some attempts at role-based licensing, with separate packet, voice, and security roles for routers. However, those efforts were rolled back due to customer response.
Shockingly, voice licensing has been the most progressive part of Cisco’s licensing model for a while now. CallManager 4.x didn’t even bother. Hook things up and they work. 5.x through 9.x used Device License Units (DLUs) to help normalize the cost of expensive phones versus their cheaper lobby and break room brethren. But even this model soon gave way to the current Unified Licensing models that attempt to bundle phones with software applications to mimic how people actually communicate in today’s offices.
So where does that leave Cisco? Should they charge for every little thing you could want when you purchase the device? Or should Cisco leave it wide open to the world and give users the right to decide how best to use their software? If John Chambers had still been in charge of Cisco, I know the answer would have been very similar to what we’ve seen in the past. Uncle John hated the idea of software revenue cannibalizing their hardware sales. Like many stalwarts of the IT industry, Chambers believed that hardware was king and software was an afterthought.
But Chuck Robbins has different ideas. Alongside the new capabilities of Cisco’s Intuitive Network plan, they have also introduced a software subscription model. Now, if you want to use all these awesome new features of Cisco’s future of the network, you are going to pay for them. And you’re going to pay every year you use them.
It’s not that radical of a shift in mindset if you look at the market today. Cable subscriptions are going away in favor of specialized subscriptions to specific content. Custom box companies will charge you a monthly fee to ship you random (and not-so-random) items. You can even set up a subscription to buy essential items from Amazon and Walmart and have them shipped to your home regularly.
People don’t mind paying for things that they use regularly. And moving the cost model away from capital expenditure (CapEx) to an operational expenditure (OpEx) model makes all the sense in the world for Cisco. Studies from industry companies like Infinity Research have said that Infrastructure as a Service (IaaS) growth is going to be around 46% over the next 5 years. That growth money is coming from organizations shifting CapEx budget to OpEx budget. For traditional vendors like Cisco, EMC, and Dell, it’s increasingly important to capture that budget revenue as it moves into a new pool designed to be spent a month or a year at a time instead of once every five to seven years.
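To put that growth figure in perspective: if the 46% number is read as a compound annual growth rate (an assumption on my part – the report’s exact definition isn’t given here), the compounding over five years is dramatic:

```python
# Sketch: what 46% annual growth compounds to over five years.
# The CAGR interpretation of the 46% figure is an assumption, not a claim
# from the cited study.
annual_growth = 0.46
years = 5
multiple = (1 + annual_growth) ** years
print(f"{multiple:.1f}x the starting market size")  # roughly 6.6x
```

That is roughly a sixfold increase in budget up for grabs, which explains why traditional hardware vendors are scrambling to capture it.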
The end goal for Cisco is to replace those somewhat frequent hardware expenditures with more regular revenue streams from OpEx budgets. If you’re nodding your head and saying, “That’s pretty obvious…” you are likely from the crowd that couldn’t understand why Cisco kept doubling down on bigger, badder switching during the formative years of SDN. Cisco’s revenue model has always looked a lot like IBM’s and EMC’s. They need to sell more boxes more frequently to hit targets. However, SDN is moving the innovation away from the hardware, where Cisco is comfortable, and into the software, where Cisco has struggled as of late.
Software development doesn’t happen in a vacuum. It doesn’t occur because you give away features designed to entice customers into buying a Nexus 9000 instead of a Nexus 6000. Software development only happens when people are paying money for the things you are developing. Sometimes that means you get bonus features discovered in the process of building the main feature. But it certainly means that the people making the software want to get it right the first time instead of shipping endless patches to make it work eventually. Because if your entire revenue model comes from software, it had better be good software that people want to buy and continue to pay for.
I think Chuck Robbins is dragging Cisco into the future kicking and screaming. He’s streamlined the organization by getting rid of the multitude of “pretenders to the throne” and tightening up the rest of the organization from a collection of competing business units into a logically organized group of product lines that can be marketed. The shift toward a forward-looking software strategy built on recurring revenue that isn’t dependent on hardware is the master stroke. If you ever had any doubts about what kind of ship Chuck was going to sail, this is your indicator.
In seven years, we’re not going to be talking about Cisco in the same way we did before, much like we don’t talk about IBM like we used to. The IBM that exists today bears little resemblance to Tom Watson’s company of the past. I think the Cisco of the future will bear a similarly superficial resemblance to John Chambers’s Cisco. And that’s for the better.
I developed over a dozen different Ansible-based network automation solutions in the last two years for my network automation workshops and online course, and always published them on GitHub… but never built an index, or explained what they do, and why I decided to do things that way.
With the new my.ipSpace.net functionality I added for online courses I got the hooks I needed to make the first part happen: Read more ...
Automation is an area where IT has always been somewhat nervous, and historically with good reason. In the past, I worked for two antivirus vendors where a weekly signature update was released that caused clients to overwrite legitimate files with zero-byte replacements.
This Internet-Draft resonates strongly with me: Jon Postel’s famous statement in RFC 1122 of “Be liberal in what you accept, and conservative in what you send” – is a principle that has long guided the design of Internet protocols and implementations of those protocols. The posture this statement advocates might promote interoperability in the short […]
One of my readers sent me a list of questions on asymmetrical traffic flows in IP networks, particularly in heavily meshed environments (where it’s really hard to ensure both directions use the same path) and in combination with stateful devices (firewalls in particular) in the forwarding path.
Unfortunately, there’s no silver bullet (and the more I think about this problem, the more I feel it’s not worth solving).Read more ...
The inevitable summer decline of visitors has started, so I'm switching (like every summer) to a lower publishing frequency. Given my current focus (here and here) expect one network automation post and one other in-depth post every week… and maybe an occasional this-is-worth-reading link.
Take some time off, enjoy the vacations, and I hope to meet you in the September online course ;)
In this post, I will share many network engineering blogs that will be very beneficial for network engineers and for those who want to learn more about network design. Almost every day I receive a message through social media or via email from my connections. What should we study? I am new […]
The post Some recommendations for the network engineers appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.
MPLS Traffic Engineering is a mechanism that provides cost savings in MPLS networks. How can cost savings be achieved? How is traffic steered onto paths that wouldn’t be used in normal circumstances? I will explain in this post. Let’s look at the topology below. MPLS Traffic Engineering […]
Do you agree that this is similar to Windows Firewall ? 🙂 Made my day
The post Windows Firewall ? This made me laugh a lot :) appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.
I was at Pure Accelerate 2017 this week and I saw some very interesting things around big data and the impact that high speed flash storage is going to have. Storage vendors serving that market are starting to include analytics capabilities on the box in an effort to provide extra value. But what happens when these advances cause issues in the training of algorithms?
One story that came out of a conversation was about training a system to recognize people. In the process of training the system, the users imported a large number of faces in order to help the system start the process of differentiating individuals. The data set they started with? A collection of male headshots from the Screen Actors Guild. By the time the users caught the mistake, the algorithm had already proven that it had issues telling the difference between test subjects of particular ethnicities. After scrapping the data set and using some different diverse data sources, the system started performing much better.
This started me thinking about the quality of the data that we are importing into machine learning and artificial intelligence systems. The old computer adage of “garbage in, garbage out” has never been more apt. Before, bad inputs made the extracted data suspect. Now, feeding bad data into a system designed to make decisions can have even more far-reaching consequences.
Look at all the systems that we’re programming today to be more AI-like. We’ve got self-driving cars that need massive data inputs to help navigate roads at speed. We have network monitoring systems that take analytics data and use it to predict things like component failures. We even have these systems running in the background of popular apps that provide us news and other crucial information.
What if the inputs into the system cause it to become corrupted or somehow compromised? You’ve probably heard the story about how importing UrbanDictionary into Watson caused it to start cursing constantly. These kinds of stories highlight how important the quality of data being used for the basis of AI/ML systems can be.
Think of a future when self-driving cars are being programmed with failsafes to avoid living things in the roadway. Suppose that the car has been programmed to avoid humans and other large animals like horses and cows. But, during the import of the small animal data set, the table for dogs isn’t imported for some reason. Now, what would happen if the car encountered a dog in the road? Would it make the right decision to avoid the animal? Would the outline of the dog trigger a subroutine that helped it make the right decision? Or would the car not be able to tell what a dog was and do something horrible?
After some chatting with my friend Ryan Adzima, he taught me a bit about how facial recognition systems work. I had always assumed that these systems could differentiate on things like colors. So it could tell a blond woman from a brunette, for instance. But Ryan told me that it’s actually very difficult for a system to tell fine colors apart.
Instead, systems try to create contrast in the colors of the picture so that certain features stand out. Those features have a grid overlaid on them and then those grids are compared and contrasted. That’s the fastest way for a system to discern between individuals. It makes sense considering how CPU-bound things are today and the lack of high definition cameras to capture information for the system.
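Ryan’s description can be sketched in a few lines of numpy. This is strictly a toy illustration of the contrast-and-grid idea – production facial recognition uses far more sophisticated features – but it shows why the approach is cheap for a CPU-bound system:

```python
import numpy as np

def grid_signature(image, grid=(8, 8)):
    """Toy feature extractor: boost contrast by normalizing the image,
    then summarize it as a coarse grid of per-cell averages."""
    img = image.astype(float)
    img = (img - img.mean()) / (img.std() + 1e-9)  # contrast normalization
    gh, gw = grid
    h, w = img.shape
    img = img[: h - h % gh, : w - w % gw]          # trim so cells divide evenly
    cells = img.reshape(gh, img.shape[0] // gh, gw, img.shape[1] // gw)
    return cells.mean(axis=(1, 3))                 # one value per grid cell

def similarity(a, b, grid=(8, 8)):
    """Compare two images by the distance between their grid signatures.
    Smaller distance = more similar."""
    return float(np.linalg.norm(grid_signature(a, grid) - grid_signature(b, grid)))
```

Comparing two 8×8 signatures is only 64 subtractions, which is vastly cheaper than comparing full-resolution images pixel by pixel.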
We also have to improve data collection for our AI/ML systems in order to ensure that they are receiving good data to make decisions. We need to build validation models into our systems, with checks to make sure the data looks and sounds sane at the point of input. These are the kinds of things that take time and careful consideration during planning to ensure they don’t become a hindrance to the system. If the very safeguards we put in place to keep data correct end up causing problems, we’re going to create a system that falls apart before it can do what it was designed to do.
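A point-of-input check for the headshot story above could be as simple as refusing a training batch whose labels are missing or lopsided. This is a minimal sketch; the `group` metadata field and the thresholds are invented for illustration:

```python
from collections import Counter

def sanity_check(records, min_groups=2, max_share=0.8):
    """Reject a training batch whose demographic labels are missing
    or dominated by a single group. 'group' is a hypothetical
    metadata field; thresholds are illustrative, not prescriptive."""
    groups = [r.get("group") for r in records]
    if any(g is None for g in groups):
        return False, "records missing group labels"
    counts = Counter(groups)
    if len(counts) < min_groups:
        return False, f"only {len(counts)} group(s) represented"
    top_share = max(counts.values()) / len(records)
    if top_share > max_share:
        return False, f"one group makes up {top_share:.0%} of the data"
    return True, "ok"
```

A check like this would have flagged the all-male SAG headshot set before the algorithm ever saw it.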
I thought the story about the AI training was a bit humorous, but it does point to a huge issue with computer systems going forward. We need to be absolutely sure of the veracity of our data as we begin using it to train systems to think for themselves. Sure, teaching a Jeopardy-winning system to curse is one thing. But if we teach a system to be racist or murderous because of the information we give it to make decisions, we will have programmed a new life form to exhibit the worst of us instead of the best.
HOW MUCH DO I LOVE THE BOFH ON THE REGISTER!!!!!!!!!! “YOU JOINED A WEBINAR – or, as we call it – willingly watched an advert?” “I… It wasn’t an advert.” “Right, so if someone came up to you and suggested that they take half an hour out of your day – at a time […]
The post Outburst: BOFH – Halon is not a rad new vape flavour appeared first on EtherealMind.
What makes for a successful protocol? Which protocols are successful, and why? Have you ever been asked these questions? As an engineer you cannot say “I believe Protocol X is successful” or “Protocol Y is not.” There is no such thing as ‘I believe’. There should always be a science behind […]
The post What makes for a successful protocol ? appeared first on Cisco Network Design and Architecture | CCDE Bootcamp | orhanergun.net.
It looks like one of the best (or worst) kept secrets about the CCIE has finally come to pass. This week, Cisco announced that there is a new program in place to recertify your CCIE without the need to continually retake the written exam. How is this going to measure up?
The idea behind continual recertification is very simple. Rather than shut down what you’ve got going on every 18 months to spend time studying for an exam, Cisco is giving current CCIEs and CCDEs the option of applying credit from educational sessions toward recertifying their credentials.
This is very similar to the way it works for a doctor or a lawyer. There are courses you can take that provide a certain number of “points” for a given class. When you accumulate 100 points in a two-year span, you can apply those points toward recertification.
The credits are good for a maximum of three years from the date earned. You can’t carry them over between recertification periods or bank them in case your certification expires. Once you use the points to recert, you start back up the treadmill again.
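The rules above reduce to a simple tally. Here is a sketch using the numbers from the announcement (100 points required, credits valid for three years from the date earned); the function and its shape are mine, not Cisco’s:

```python
def can_recertify(credits, window_years=3, required=100):
    """credits: list of (years_ago, points) tuples.
    Per the announced program, credits expire three years after
    they are earned and cannot be banked across cycles."""
    usable = sum(points for age, points in credits if age < window_years)
    return usable >= required
```

So 40 points from this year's Cisco Live, 40 from last year's, and 30 from the year before would recertify you, while a big haul of expired credits from four years back counts for nothing.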
One of the more interesting pieces to come out of the CCIE/CCDE CPE process is the emphasis on sessions at Cisco Live. Each session, from a one-hour breakout to an eight-hour Techtorial or lab, has a point value assigned. You can earn up to 70 points at Cisco Live every year through this method.
This is huge because it places a focus back on sessions at Cisco Live. A lot of networking professionals that I’ve spoken to recently have questioned the need to come to sessions during Cisco Live. Many are more interested in the DevNet zone as opposed to traditional learning sessions. Still others are buying a social pass and coming to chat with peers instead.
Now, Cisco has made sessions at Cisco Live matter to CCIEs. You get more points for harder sessions, or for longer in-depth dives into technologies that are up and coming. You could easily recertify with very little effort every second year at Cisco Live.
Additionally, the program includes the flexibility to offer different types of continuing credit. Perhaps it’s for filling out surveys of importance to product teams. Or for tackling a new technology in an in-person instructor format. The possibilities are unlimited and should keep current and Emeritus CCIEs happy.
I’m thrilled these changes are finally implemented. CCIEs can finally join the ranks of other professionals in the world. I’ve been talking about getting this done for four years at this point, so hats off to Yusuf and his team. Let’s keep the steam rolling and get more learning opportunities on the list to help more CCIEs recertify.
Monitoring SDN Networks is the featured webinar of June 2017, and in the featured video Terry Slattery (CCIE#1026) talks about network analysis of SDN.
If you’re a trial subscriber, log into my.ipspace.net, select the webinar from the first page, and watch the video marked with a star… and if you’d like to try the ipSpace.net subscription, register here.
Sounds promising? Why don’t you register before we run out of early-bird tickets?
If you'd come to me as a networking engineer and say “there's one new thing I want to learn that's outside of my $dayjob” I'd probably say “invest some serious time into learning Git (beyond memorizing the quick recipes) if you haven’t done that already”
Full disclosure: not so long ago I tried to avoid Git as much as possible… and then it suddenly clicked ;)Read more ...
Does any sane bystander not see the IPv6 standards process as a terrible road accident?
The post Response: draft-bourbaki-6man-classless-ipv6-00 – IPv6 is Classless appeared first on EtherealMind.
Imagine a service provider that allows you to provision 100GE point-to-point circuit between any two of their POPs through a web site and delivers in seconds (assuming you’ve already solved the physical connectivity problem). That’s the whole idea of SDN, right? Only not so many providers got there yet.Read more ...
Long story short: I’m launching Ansible for Networking Engineers self-paced course today. It’s already online and you can start whenever you wish.
Now for the details…
Isn’t there already an Ansible for Networking Engineers webinar? Yes.
So what’s the difference? Glad you asked ;)Read more ...
Lower price, scalability, and the need for a global footprint are still the major drivers for both cloud migration and the choice of cloud provider. However, availability and the sophistication of emerging technologies such as machine learning, artificial intelligence, Internet-of-Things (IoT), and image and voice services, which are now built into the cloud platform, are also becoming key considerations when choosing cloud platforms. The allure of quickly incorporating these technologies with a couple mouse clicks and a few APIs is very powerful, especially when considering the time and cost savings compared to developing these capabilities in-house, or finding and establishing relationships with the multiple vendors needed to implement these technologies.
During Shawn Zandi’s presentation describing large-scale leaf-and-spine fabrics I got into an interesting conversation with an attendee that claimed it might be simpler to replace parts of a large fabric with large chassis switches (largest boxes offered by multiple vendors support up to 576 40GE or even 100GE ports).
As always, you have to decide between implicit and explicit complexity.Read more ...
Many vendors require that customers purchase everything from them in order to provide a complete, end-to-end security solution. However, the reality is that most enterprises are multivendor environments. Any solution that requires swapping out existing infrastructure during a refresh cycle, or locks customers into a single vendor, imposes significant restrictions with respect to introducing new capabilities and adopting new technologies.
With the SDSN platform, you can still quarantine or block infected hosts in a multivendor environment, without swapping out your existing infrastructure. Imagine not having to write off the thousands or even millions of dollars in equipment investments while taking your security game to the next level. It’s a solution that makes practical sense.
A password management business has their passwords compromised. IT Security comedy gold.
The post Response: OneLogin Breach Compromised Customer Data, Ability to Decrypt Encrypted Data | Threatpost appeared first on EtherealMind.
Lately, it seems that every time we turn around there’s a cyber-assault, potentially more dangerous and more devious than the last. There are the real threats and attacks like WannaCry. And there’s the apparently fabricated news you see on television and in theaters. We appear to be surrounded by virtually every sort of potential cybercrime. But we shouldn’t have to accept this as normal.
On top of this very active threat climate, organizations are drowning in the complexity of dozens of “best-of-breed” security solutions pulled together in an effort to build a proper defense. Adding to the problem, organizations face a flood of alerts on many different consoles and need to keep numerous security policies up to date. Did you know that most policies are written once and rarely updated? They go mostly unnoticed until there’s a security incident and the root cause analysis points to an ancient policy that was left unattended.
Last week I published self-study exercises for the YAML and Jinja2 modules in the Ansible for Networking Engineers webinars, and a long list of review questions for the Using Ansible and Ansible Deeper Dive sections.
I also reformatted the webinar materials page. Hope you’ll find the new format easier to read than the old one (it’s hard to squeeze over 70 videos and links on a single page ;).
Anyone working in IT infrastructure needs to have some awareness of what is happening on the Internet. This is the fastest, densest, most compressed information you can get in 30 minutes. You can see into the future if you look hard enough, and this is a major source. Link: 2017 Internet Trends — Kleiner […]
The post Research: Mary Meeker’s 2017 internet trends report appeared first on EtherealMind.
Michael Klose left an interesting remark on my Regional Internet Exits in Large DMVPN Deployment blog post saying…
Would BGP communities work? Each regional Internet Exit announce Default Route with a Region Community and all spokes only import default route for their specific region community.
That approach would definitely work. However, you have to decide where to move the complexity.Read more ...
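Michael’s suggestion maps onto a short inbound policy on each spoke. The following is a rough IOS-style sketch of the idea; the community value 65000:10 and all names are invented for illustration (the hubs would also need to tag their default route with the regional community and send communities to the spokes):

```
! On an EU spoke: accept the default route only if it carries the
! (hypothetical) EU regional community; accept everything else as usual.
ip prefix-list DEFAULT-ONLY seq 5 permit 0.0.0.0/0
ip community-list standard EU-DEFAULT permit 65000:10
!
route-map FROM-HUB permit 10
 match ip address prefix-list DEFAULT-ONLY
 match community EU-DEFAULT
route-map FROM-HUB deny 20
 match ip address prefix-list DEFAULT-ONLY
route-map FROM-HUB permit 30
```

The deny clause drops default routes tagged for other regions, while the final permit lets all more-specific prefixes through untouched – which is exactly where the complexity ends up: in per-region policy maintained on every spoke.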
It’s the bake off you’ve been waiting for; a five-month real life test of a Ubiquiti ERPro-8 (EdgeRouter Pro) and a tub of Play-Doh™! Over a period of five months I carefully evaluated these two very useful items and discovered their good and bad points. But in the end, which will I choose as the ultimate winner?
The first item to be evaluated is the tub of Play-Doh™. I didn’t skimp on quality, and bought genuine Play-Doh™ as part of a set which included tubs of both light blue and red. Accessories in the box included a roller, a plastic cutting knife, four shaped cutters and an extruder with four built-in shapes as well as four interchangeable extrusion heads.
The entire kit cost me $9.69 from Amazon, which I feel is pretty good value for the amount of artistic fun packed into a single box.
The packaging was a little fiddly to get into at first, and the tubs – as always – were a little tricky to open and required more force than a typical child is able to provide. However, once open the tubs happily released their colorful content which was smooth to the touch. The accompanying tools had some slightly raised seams which perhaps should have been sanded off during the production process, but there were no sharp edges to be found, even on the plastic knife.
The Ubiquiti EdgeRouter™ Pro 8 is an eight-port gigabit router capable of transferring over 2 million packets per second. The ERPro-8 features eight copper RJ-45 ports and two gigabit SFP ports, though only eight ports can be used at any one time (ports 7 and 8 can be either copper or SFP). Price was a mere $215 from some guy on eBay, and the router was brand new in the box and unused. List price for those less daring and wishing to avoid eBay is $369, although they can be bought for $345 on Amazon.
The ERPro-8 runs EdgeOS™, which offers a web-based user interface (UI) as well as the option to configure via the command line. Interfaces can be bridged together (with an optional layer 3 virtual interface) and support 802.1q trunking and LACP link aggregation. The OS can act as a firewall; by default the web UI supports an interface-based firewall (similar to a Cisco ASA), but by using the CLI it is possible to configure the ERPro-8 to run as a zone-based firewall instead (similar to a Juniper SRX).
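To give a flavor of the CLI-only zone-based mode, here is a minimal sketch from memory of the Vyatta-derived syntax – interface assignments and rule names are examples only, and a real configuration needs considerably more:

```
# Stateful two-zone setup: LAN may initiate anything outbound,
# WAN may only send return traffic back in.
set firewall name LAN-TO-WAN default-action accept
set firewall name WAN-TO-LAN default-action drop
set firewall name WAN-TO-LAN rule 10 action accept
set firewall name WAN-TO-LAN rule 10 state established enable
set firewall name WAN-TO-LAN rule 10 state related enable

set zone-policy zone WAN default-action drop
set zone-policy zone WAN interface eth0
set zone-policy zone WAN from LAN firewall name LAN-TO-WAN
set zone-policy zone LAN default-action drop
set zone-policy zone LAN interface eth1
set zone-policy zone LAN from WAN firewall name WAN-TO-LAN
```

The mental shift from the interface-based model is that policy is attached to traffic *entering a zone* from another zone, rather than to an interface direction.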
Once I had slid the ERPro-8 out of its covering and opened the box inside, I was delighted to find a very solidly-built 1U high 19″ rack-mountable chassis. The rack ears were not a separate accessory as the rack mounting is already built in as part of the thick, one-piece front panel. A power cable is included, and first impressions of the hardware were very good; all indications were that this was a well-built piece of hardware, and—as much as a piece of network hardware can be good-looking—it was attractive.
What to say about Play-Doh™? Well, it’s fun and reusable, for one. I was able to make many different shapes, some of which bore a vague resemblance to things that people might recognize, and more of which did not. I was able to extrude the Play-Doh™ using the various built-in shapes as well as the noodly extrusion attachments, which meant I was able to create curly hair for the group of pretend Play-Doh™ friends I had made (don’t mock me, I need the company).
Over the weeks and months, my Play-Doh™ companions and I had pretend picnics and feasts the likes of which are now legendary in introvert circles, though nobody really talks about them. At the end of each meal, it was simple to pull apart my colorful mates and stuff them heartlessly back into the Play-Doh™ tubs ready to be reimagined the next time I wanted to revive them.
Over time, I did lose a little of the doh, and some got a little dirty (my little folk are messy eaters), so after five months I have slightly less Play-Doh™ than I started with and it’s not quite as clean and brightly colored. Still, it was lots of fun!
Initial set up of the ERPro-8 is fairly simple. The serial console port unusually, but pleasingly, defaults to running at 115,200bps and I thank Ubiquiti for that because I am perpetually frustrated by modern devices defaulting to the almost unusably-slow 9600bps. Tip to other vendors: I am very capable of looking up the default console port speed for a given piece of hardware and configuring my terminal software or console server accordingly; it is not necessary to always default to the lowest common denominator. Anybody who only has 9600bps support these days probably needs to consider a hardware refresh.
Once in the web UI, the biggest challenge with EdgeOS™ for somebody used to Juniper or Cisco is that the terminology and the configuration logic is somewhat different. However, it didn’t take me long to adjust to the different approach to configuration, and soon enough I was able to configure the router with a gigabit link into my home network running 802.1q trunking. For my main home network I run a resilient pair of ISC DHCP servers, but I decided that for the purposes of simplicity I would configure the ERPro-8 to provide DHCP services for my other VLANs.
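For the curious, the DHCP piece is only a few EdgeOS commands. This is a hedged sketch from memory of the Vyatta-derived syntax – the network name, subnet, and addresses here are invented examples, not my actual configuration:

```
# Hand out addresses on a hypothetical VLAN 20 subnet
set service dhcp-server shared-network-name VLAN20 subnet 10.2.20.0/24 start 10.2.20.100 stop 10.2.20.199
set service dhcp-server shared-network-name VLAN20 subnet 10.2.20.0/24 default-router 10.2.20.1
set service dhcp-server shared-network-name VLAN20 subnet 10.2.20.0/24 dns-server 10.2.20.1
```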
In order to allow me to test the ERPro-8 before committing to letting it run my home Internet connection, I gave it a spare address on my main home subnet and connected the outside interface to the built-in switch on my provider’s gateway device.
By manually configuring a default gateway of 10.1.1.254 on my test computer, I was able to run the Ubiquiti ERPro-8 in parallel to my existing Juniper SRX220H and ensure that everything was working before I switched the entire network over to use the Ubiquiti.
I have internet service that my provider says is 1Gbps symmetrical, but with the Juniper SRX220H maxing out at 950Mbps in ideal conditions, and more realistically delivering a whopping 300Mbps IMIX, I had been unable to validate the claim. Using the ERPro-8 as my gateway, internet speed tests confirmed that the ERPro-8 was delivering a very healthy 900+ Mbps of throughput, and I had no doubt that the ERPro-8 was not the bottleneck in this particular situation. After all, I was only connected to the home network over a single gigabit ethernet connection with no jumbo frames, so that’s pretty good going. I would have rushed to add an aggregated ethernet link between my home network switch and the ERPro-8, but since the WAN link was also running on a single gigabit Ethernet connection I felt this might be a somewhat pointless move.
In operation, the system was stable and reliable. In order to shift all my home network traffic to the ERPro-8, I enabled VRRP with .1 as the virtual IP, and unplugged the Juniper SRX220H, which until then had that IP configured. In an ideal world perhaps I could have run VRRP on both the SRX and the ERPro-8 so that in the event the ERPro-8 went down, the SRX could take over. Perhaps that’s a task for another time. Beyond that, the Ubiquiti ERPro-8 was exactly what I needed; a device that I did not need to log into very often, because once I had set it up, it just worked.
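The VRRP cutover itself is a couple of lines in EdgeOS. Again a hedged sketch from memory of the Vyatta-derived CLI, with an example interface, group number, and address rather than my real ones:

```
# Claim the .1 virtual IP on the inside interface (example values)
set interfaces ethernet eth1 vrrp vrrp-group 10 virtual-address 10.1.1.1
set interfaces ethernet eth1 vrrp vrrp-group 10 priority 200
```

Running the same group on the SRX with a lower priority would give the failover behavior described above.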
My overall experience with the ERPro-8 was that it was simple to configure, fast, and stable. Plus, as a 19″ rack mountable device, it fit nicely into my network rack.
The Play-Doh™ gave me hours of enjoyment. I did my best not to mix the colors because that would limit the reusability of the product, but some mixing did inevitably occur. However, despite five months of frequent extrusion, regular cutting with a blunt plastic knife, being rolled out and having shapes brutally stamped out of it by plastic cutters, at this point my only complaint about the Play-Doh™ is that some parts of it (usually those which I left out a bit long before trying to put back in the pot) have dried up very slightly, and the colors have dulled a bit. I simply disposed of a few tiny bits, which had totally dried out, and everything else, once mixed thoroughly back into the larger mass in the pot, recovered nicely. Given the minimal capital investment required to obtain Play-Doh™ for my home, I’m particularly impressed by the product’s longevity in the face of modeling fun.
Over all, I give the Play-Doh™ play set a Highly Recommended.
The Ubiquiti ERPro-8 performed like a champ for five months, and I was thoroughly impressed with its speed and reliability. And then, while installing a switch in the same rack, I managed to knock out the power lead. Internet outage in the house! Oh noes! Once I had calmed the children down, I plugged the lead back in and waited while it rebooted. Hmm, the internet is still down. I waited a couple more minutes, but no joy. I pinged my default gateway, but there was no response, suggesting that the Ubiquiti ERPro-8 was not responding. This is the point where the handy 115200bps console came into its own.
I connected a console cable and watched the Ubiquiti ERPro-8. I had a login prompt, but was unable to authenticate to it using either the default Ubiquiti username/password or the credential I had created for the administrative accounts. Watching again I determined that the ERPro-8 was stuck in some kind of loop and was attempting repeatedly to load, but was failing with errors like this:
SQUASHFS error: squashfs_read_data failed to read block 0x3dd1216
SQUASHFS error: Unable to read data cache entry [3dd1216]
SQUASHFS error: Unable to read page, block 3dd1216, size a1cb
SQUASHFS error: zlib_inflate error, data probably corrupt
SQUASHFS error: squashfs_read_data failed to read block 0x3dd1216
SQUASHFS error: Unable to read data cache entry [3dd1216]
SQUASHFS error: Unable to read page, block 3dd1216, size a1cb
SQUASHFS error: Unable to read data cache entry [3dd1216]
SQUASHFS error: Unable to read page, block 3dd1216, size a1cb
SQUASHFS error: Unable to read data cache entry [3dd1216]
SQUASHFS error: Unable to read page, block 3dd1216, size a1cb
Ruh roh; it looks like the power loss led to a corrupted file system on the flash. (I subsequently power cycled the device again, and you can read the entire boot log if you’re interested.) No worries, I told myself, I’ll reimage it and it should be fine.
This is where things began to get messy; as my hopes of recovery decreased, the profanity increased. Researching this issue on the web led me to discover that corrupting the file system is not entirely uncommon for the EdgeRouter series of devices. Unfortunately the common theme for the Ubiquiti ERPro-8 seemed to be that this was a non-recoverable error. You read that correctly; one power loss led to a corrupted file system, and there is apparently no way to recover it. I knew that couldn’t be true, so I opened a support case with Ubiquiti, to be told:
There seems to be an issue with the EdgeRouter. So i'd suggest you to file for an RMA if the Router is under warranty.
No solution offered, no workarounds, just the option to file for an RMA / warranty repair. That's fine, except, as you have probably guessed, the downside of buying from eBay is that the reseller doesn't offer a warranty (which is entirely reasonable). I didn't expect success filing an RMA with Ubiquiti, and I was not disappointed: although I had purchased a brand new device, it had clearly sat on a shelf for a while, because the date code indicated it was more than a year old, so Ubiquiti would not RMA the device. That's okay, by the way; I don't expect Ubiquiti to support devices indefinitely, and I hold no grudge that they could not RMA a router with an old date code.
I remain troubled that a router could suffer a power loss and be bricked with no resolution available. Digging around the Internet revealed that the Ubiquiti EdgeRouter Lite (ERL) and EdgeRouter PoE (ER-PoE) suffer the same issue, and a solution of sorts does exist. However, the solution did not come from Ubiquiti as one might have expected. Instead, a third party figured out how to write a tool which could be loaded onto the ERL and ER-PoE to install a clean image so that the device could boot again. The only problem? It does not work on the ERPro-8. Firstly, why does this recovery solution not originate from Ubiquiti itself? What does it say about Ubiquiti Networks that the company is willing to leave owners of ERL and ER-PoE routers unable to recover their routers when clearly a technical solution exists (but somebody else had to create it)? Worse, why is the ERPro-8 left with no option for recovery at all?
Many network devices are able to install new firmware by booting from a USB drive with appropriate files on it; not so for the ERPro-8. The Quick Start Guide for the Ubiquiti ERPro-8 describes the USB port as "Reserved for future use."
You know, I think I've found a current use for that USB port, and it might help Ubiquiti retain a few customers too.
Another common method of directing a blank device to a firmware image is to use a network boot of some sort, relying on DHCP to point the router to a location where it can download the firmware itself or a loader program (think PXEboot, ONIE and similar). Some may require vendor software to be run in order to discover the device on the network and load the firmware. Again, it seems that there is no solution for the Ubiquiti ERPro-8.
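For devices that do support this kind of recovery, the server side is usually just a DHCP server pointing the client at a TFTP-hosted image. As a purely illustrative sketch (the ERPro-8 has no documented netboot support, and the interface name, address range, and image filename below are all assumptions), a dnsmasq configuration for that mechanism typically looks like this:

```
# dnsmasq recovery-network sketch: hand out DHCP leases and serve a
# firmware image over TFTP. Hypothetical values throughout -- this
# illustrates the general PXE/ONIE-style mechanism, not anything the
# ERPro-8 actually honors.
interface=eth1
dhcp-range=192.168.50.100,192.168.50.150,12h
enable-tftp
tftp-root=/srv/tftp
dhcp-boot=recovery-image.bin
```

The booting device broadcasts a DHCP request, receives an address plus the boot filename, and fetches the image over TFTP — no working filesystem on the device required, which is exactly why it would have been so useful here.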
What I have, after five successful months of use and one power loss, is a pile of steel and electronics which is too large to be used as a door stop, and too small to be used as furniture. My Ubiquiti ERPro-8 has been bricked by a simple power loss, and there is no way to recover it (at least not one that Ubiquiti, even once it knew it wouldn’t have to deal with it under warranty or RMA, was willing to share). After five months, the Play-Doh™ kit probably has a higher resale value than this Ubiquiti EdgeRouter Pro.
Ubiquiti’s website proudly boasts about the ERPro-8’s uptime.
That uptime apparently comes at a price, which is that once the filesystem has been corrupted, the Ubiquiti ERPro-8 has about the same level of network reliability as a smiling pile of poop.
Think about it; neither my bricked Ubiquiti ERPro-8 nor the pile of poop are any use whatsoever in the role of a network router. Both have their own particular aesthetic appeal, but neither is really of any use to anybody. Actually, I take that back. I can send the smiling pile of poop in messages to my friends as many times as I like, so the poop pile does in fact have a purpose.
Seriously Ubiquiti, what is the point of touting high uptime when the chances are that a power loss could apparently lead to high downtime?
After my five month evaluation of the Ubiquiti ERPro-8, I suspect that the only positive thing to come out of this will be that I get to loose some aggression on the router with a large hammer. I’d use a baseball bat, à la Office Space, but I’d hate to damage the baseball bat on the ERPro-8, because—credit where it’s due—the case looks quite sturdy.
I find myself torn because for the period during which the router worked, it was quite excellent, and offered features and performance that far outstripped even its list price. But one power loss means it’s a worthless heap of junk? I can only draw one conclusion I’m afraid, and that is:
DO NOT UNDER ANY CIRCUMSTANCES BUY THE UBIQUITI EDGEROUTER PRO (ERPRO-8).
IN FACT, AVOID IT LIKE THE PLAGUE.
IF SOMEBODY BUYS YOU ONE, ASSUME IT’S A BAD ATTEMPT AT A GAG GIFT AND THROW IT BACK AT THEM.
I do not offer this advice lightly. I have been a big fan of Ubiquiti devices for a while now (even before I owned some myself), and in addition to the EdgeRouter Pro described here I also own three UAP-AC-PRO wireless access points and a UniFi US-48 switch. I like this hardware and I think it’s good value for money. Unfortunately, the EdgeRouter Pro just proved to me that ultimately, perhaps you really do get what you pay for, and going forward I will be actively recommending against investment in Ubiquiti products unless somebody can convince me otherwise.
If you’re asking yourself why I would recommend against all Ubiquiti products, the answer is fairly simple. Think about it: the corrupted squashfs issue was reported on the Ubiquiti forums in May 2014. It may have occurred before then too, but let’s be generous and take that as “patient zero.” In the three years since that time, Ubiquiti has apparently done nothing to resolve an issue known to totally brick the EdgeRouter Pro. Shame on you, Ubiquiti; shame on you. If this is how Ubiquiti Networks treat their customers with one product, I see no reason not to assume that the same couldn’t-care-less attitude has been applied across the entire portfolio. I’m not willing to put my own reputation on the line and recommend Ubiquiti Networks products to others on the off chance that this is an isolated example.
Given the choice between wasting endless hours trying to recover my bricked ERPro-8 or wasting endless hours extruding Play-Doh™ into absurd piles of noodly nonsense and smooshing it with my fingers, I choose Play-Doh™ as the winner of this five month evaluation. Congratulations, Hasbro!
Meanwhile, I’m going to start digging into a page I found recently where somebody thinks they might have a potential solution to reinstalling a backup using ‘dd’. I guess I’ll find out, because apparently Ubiquiti Networks isn’t going to help me with this.
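For the curious, the general shape of a dd-based restore is straightforward, though I stress this is a hypothetical sketch and not an official Ubiquiti procedure; the filenames are made up, and on real hardware the target would be the router's internal flash block device rather than a file:

```shell
# Hypothetical dd restore sketch. Assumes you already have an image
# taken from a known-good unit (here a stand-in file so the sketch
# is runnable end to end).
printf 'stand-in flash image' > erpro8-backup.img

# Write the image out. On real hardware the target would be the
# flash block device (e.g. /dev/sdX -- verify with lsblk first!);
# dd will cheerfully overwrite whatever you point it at.
dd if=erpro8-backup.img of=restored.img bs=4M conv=fsync status=none

# Verify the copy is byte-identical before trusting it.
cmp erpro8-backup.img restored.img && echo "images match"
```

The verification step matters: a restore that silently truncated or corrupted the image would put you right back where you started, staring at SQUASHFS errors on the console.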
If you liked this post, please do click through to the source at Epic Evaluation: Ubiquiti ERPro-8 vs Play-Doh and give me a share/like. Thank you!