Back before there was an Internet. . .
David Spark published 16 tips for moving your workloads to the cloud. Contrary to the usual useless nonsense coming from hybrid cloud evangelists (you know, the people who moved from “VMs following the sun” to “seamless hybrid cloud workload mobility”), some of the tips actually make sense, starting with “Have a real reason for the migration”. Enjoy!
Arista switches have an API known as eAPI. In this article, I will discuss some of the basics of how eAPI operates, how to connect to it, and how to gather network information using it. Basic eAPI operation eAPI uses JSON-RPC over HTTPS. What this means in simpler terms is that the communication to and […]
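The excerpt above describes eAPI as JSON-RPC over HTTPS. A minimal sketch of what such a request looks like; the switch URL and credentials are hypothetical, and actually sending the request would require the third-party `requests` package:

```python
import json

# Sketch of an Arista eAPI call, assuming a hypothetical switch at
# https://switch.example.com/command-api with basic-auth credentials.
# eAPI wraps CLI commands in a JSON-RPC 2.0 "runCmds" call over HTTPS.

def build_eapi_request(cmds, req_id="1"):
    """Build the JSON-RPC 2.0 body for eAPI's runCmds method."""
    return {
        "jsonrpc": "2.0",
        "method": "runCmds",
        "params": {"version": 1, "cmds": cmds, "format": "json"},
        "id": req_id,
    }

body = json.dumps(build_eapi_request(["show version"]))

# To send it (requires `requests` and a reachable switch):
#   import requests
#   r = requests.post("https://switch.example.com/command-api",
#                     auth=("admin", "password"), data=body, verify=False)
#   print(r.json()["result"][0])
```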
Out of sheer frustration this week, I tweeted this and got a big response: The Back Story I’ve wasted about 60 hours of customers’ time working with resellers & vendors to get a quote for a relatively simple network upgrade. Neither the vendor staffers nor the reseller employees knew how the product was licensed or […]
A while ago I wrote about performance bottlenecks of Open vSwitch. In the meantime, the OVS team drastically improved OVS performance resulting in something that Andy Hill called Ludicrous Speed at the latest OpenStack summit (slide deck, video).
Let’s look at how impressive the performance improvements are.
After describing the current state of affairs in his Network Programmability 101 webinar, Matt Oswald moved to the low-hanging fruits: automating repetitive tasks in baby steps, from VLAN provisioning to consistent device configurations.
Microsoft released an out-of-band update, MS14-068, yesterday to patch a critical bug in its Kerberos implementation. This bug could allow a remote, unprivileged, authenticated attacker to elevate their privileges to that of any other domain user. Such an attack could also enable the attacker to obtain domain administrator privileges and completely compromise the security restrictions enforced on the targeted domain.
The sale of an incumbent local exchange carrier (ILEC), aka the local telephone company, can be much more complicated than one might think (for ordinary folks, anyway). Networking & IT professionals most likely have a different viewpoint, as migrations are a fundamental part of the IT field. Such a transaction becomes more complicated when triple play […]
Mitchell currently is a Cisco Certified Entry Level Tech (CCENT certified) with intentions of obtaining higher certifications as time permits. He graduated from the Connecticut Technical High School system with a focus in Information Systems Technology (a combination of higher-ed MIS & computer science). While there he developed the production computing platform for his academic department (servers, networking, desktops).
He is currently pursuing higher education at the UCONN School of Business & welcomes any opportunity to further advance his experience in the IT field & professional knowledge.
I was rather surprised to learn a while back that the Linux ifconfig command has been deprecated for quite some time in favour of the Linux ip command set. The ip command isn’t new to me and I’ve recognised its advantages for some time, but considering its ‘elevated’ status I thought […]
He's worked in the IT industry for over 15 years in a variety of roles, predominantly in data centre environments. Working with switches and routers pretty much from the start he now also has a thirst for application delivery, SDN, virtualisation and related products and technologies. He's published a number of F5 Networks related books and is a regular contributor at DevCentral.
Indeni has technology that can predict known types of network failures using pre-mortem analysis.
The post Stop Doing Post Mortems & Root Cause Analysis With indeni appeared first on Packet Pushers Podcast and was written by Sponsored Blog Posts.
One of my readers sent me an interesting challenge:
We have two MPLS providers sending us default routes and it seems like whenever we have problem with SP1 our failover is not happening properly and actually we have to go in manually and influence our traffic to forward via another path.
Welcome to the wondrous world of byzantine routing failures ;)
Last week I went to go talk to a group of vocational students about networking. While I was there, I needed to send a couple of emails. I prefer to write emails from my laptop, so I pulled it out of my bag between talks and did the first thing that came to mind: I asked for the wireless SSID and password. Afterwards, I started thinking about how far we’ve come with connectivity.
I can still remember working with a wireless card back in 2001 trying to get the drivers to play nice with Windows 2000. Now, wireless cards are the rule and wired ports are the exception. My primary laptop needs a dongle to have a wired port. My new Mac Mini is happily churning along halfway across the room connected to my network as a server over wireless. It would appear that the user edge quietly became wireless and no tears were shed for the wire.
It’s also funny that a lot of the big security features like 802.1X and port security became less and less of an issue once open ports started disappearing in common areas. 802.1X for wired connections is barely even talked about now; it’s more of an authentication mechanism for wireless. I’ve even heard some vendors of these solutions touting the advantages of using it with wireless and then throwing in the afterthought comment, “We also made it easy to configure for wired connections too.”
We still need wires, of course. Access points have to connect to the infrastructure. Power still can’t be delivered wirelessly. But the shift toward wireless has made ubiquitous cabling unnecessary. I used to propose a minimum of four cable drops per room to provide connectivity in a school. I would often argue for six in case a teacher wanted to later add an IP phone and a couple of student workstations. Now, almost everything is wireless. The single remaining wire powers a desk phone and an antiquated desktop. Progressive schools are replacing the phones with soft clients and the desktops with teacher laptops.
The wire is not in any danger of becoming extinct. But it is going to be relegated to the special purpose category. Wires will only live behind the scenes in data centers and IDF closets. They will be the thing that we throw in our bag for emergencies, like an extra console cable or a VGA adapter.
Wireless is the future. People don’t walk into a coffee shop and ask, “Hey, where’s the Ethernet cable?” Users don’t crowd around wall plates with hubs to split the one network drop into four or eight so they can plug their tablets in. Companies like Aruba Networks recognized this already when they started posing questions about all-wireless designs. We even made a video about it:
While I don’t know that the all-wireless design is going to work, I can say with certainty that the only wires that will be running across your desktop soon will be power cables and the occasional USB cord. Ethernet will be relegated to the same class as electrical wires connected to breaker boxes and water pipes. Important and unseen.
The last day of Interop New York found me sitting in the Speaker Center with a few friends pondering the hype and reality of SDN and the brokenness of traditional network products. One of the remarks during that conversation was very familiar: “we have too many knobs to configure”, and I replied “and how many knobs do you think there are in the Windows registry?” (or the Linux kernel and its configuration files).
Not so long ago, if you wanted to build a data center network, it was perfectly feasible to place your layer three edge on the top-of-rack switches and address each rack as its own subnet. You could leverage ECMP for simple load-sharing across uplinks to the aggregation layer. This made for an extremely efficient, easily managed data center network.
Then, server virtualization took off. Which was great, except now we had this requirement that a virtual machine might need to move from one rack to another. With our L3 edge resting at the top of the rack, this meant we’d need to re-address each VM as it was moved (which is apparently a big problem on the application side). So, now we have two options: we can either pull the L3 edge up a layer and have a giant L2 network spanning dozens of racks, or we can build a layer two overlay on top of our existing layer three infrastructure.
Most people opt for some form of the L2 overlay approach, because no one wants to maintain a flat L2 network with dozens or hundreds of thousands of end hosts, right? But why is that?
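As one concrete illustration of the overlay option (not necessarily the specific approach the author has in mind), VXLAN tunnels L2 frames over the existing L3 fabric inside UDP. A sketch of its 8-byte header per RFC 7348:

```python
import struct

# Minimal sketch of the VXLAN encapsulation header (RFC 7348): the
# original Ethernet frame is wrapped in UDP (destination port 4789)
# with this 8-byte header in front, carrying a 24-bit segment ID (VNI).

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    flags = 0x08000000            # I-flag set: the VNI field is valid
    return struct.pack("!II", flags, vni << 8)  # VNI sits above 8 reserved bits

hdr = vxlan_header(5000)
assert len(hdr) == 8
# The inner frame keeps its MAC addresses, so a VM can move racks
# without re-addressing; only the outer IP header changes per hop.
```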
Chambers pointed the finger at Net Neutrality for a slowdown in purchasing by carriers in US markets, saying he perceives it as damaging to Cisco’s business interests. I find it more credible that SDN/NFV is slowing capital investment than some far-off political change.
Overlay virtual networks are one of my favorite topics – it seems I wrote over a hundred blog posts describing various aspects of this emerging (or is it reinvented) technology since Cisco launched VXLAN in 2011.
During the summer of 2014 I organized my blog posts on overlay networks and SDDC into a digital book. I want to make this information as useful and as widely distributed as possible – for a limited time you can download the PDF free of charge.
I’m going to take a little break from my other two series to inject a short series on BGPSEC. I’ll return to HTIRW and RFCs you need to know shortly. BGPSEC is a set of standards currently under consideration in the IETF to secure BGP beyond the origin AS – in other words, to secure […]
Simon Wardley is another old-timer with low tolerance for people reinventing the broken wheels. I couldn’t resist sharing part of his blog post because it applies equally well to what we’re seeing in the SDN world:
No, I haven't read Gartner's recent research on this subject (I'm not a subscriber) and it seems weird to be reading "research" about stuff you've done in practice a decade ago (sounds familiar). Maybe they've found some magic juice? Experience however dictates that it'll be snake oil […]. I feel like the old car mechanic listening to the kid saying that his magic pill turns water into gas. I'm sure it doesn't ... maybe this time it will ... duh, suckered again.
Meanwhile the academics already talk about SDN 2.0.
Leon Adato, Technical Product Marketing Manager with SolarWinds is our guest blogger today, with a sponsored post on the topic of alerting. The Four Questions For people who are interested in monitoring, there is a leap that you make when you go from watching systems that YOU care about, to monitoring systems that other people […]
The post 4 Inevitable Questions When Joining a Monitoring Group, Pt. 1 appeared first on Packet Pushers Podcast and was written by Sponsored Blog Posts.
Like many of us, Khalid Raza wasted countless hours sitting in meetings discussing hybrid WAN connectivity designs using a random combination of DMVPN, IPsec, PfR, and one or more routing protocols… and decided to try to create a better solution to the problem.
Viptela Secure Extensible Network (SEN) doesn’t try to solve every networking problem ever encountered, which is why it’s simpler to use in the use case it is designed to solve: multi-provider WAN connectivity.
Last week, I wrote a blog post discussing the dangers of BGP routing leaks between peers, illustrating the problem using examples of recent snafus between China Telecom and Russia’s Vimpelcom. This follow-up blog post provides three additional examples of misbehaving peers and further demonstrates the impact unmonitored routes can have on Internet performance and security. Without monitoring, you are essentially trusting everyone on the Internet to route your traffic appropriately.
In the first two cases, an ISP globally announced routes from one of its peers, effectively inserting itself into the path of the peer’s international communications (i.e., becoming a transit provider rather than remaining a peer) for days on end. The third example looks back at the China Telecom routing leak of April 2010 to see how a US academic backbone network prioritized bogus routes from one of its peers, China Telecom, to (briefly) redirect traffic from many US universities through China.
Recap: How this works
To recap the explanation from the previous blog (and to reuse the neat animations our graphics folks made), we first note that ISPs form settlement-free direct connections (peering) in order to save on the cost of sending traffic through a transit provider. Suppose that ISP A and ISP B establish such a private link between their networks. At the BGP routing level, ISP A will then send routes from its customers to its peer ISP B, who will in turn send these routes on to its customers. As a result, the customers of ISP B will send traffic destined for ISP A through the newly established peering link, saving ISP B from having to pay its transit providers to carry the traffic. This flow of routes and traffic is illustrated below.
The first way this can go wrong is for ISP B to announce the routes received from ISP A out to the global Internet (through its transit providers) or to ISP B’s other peers. By doing this, ISP B inserts itself onto the path of incoming traffic to ISP A from outside ISP B’s own network, something ISP A certainly didn’t expect when it took on ISP B as a peer.
ISP B can also mess up by sending routes learned either from its transit providers or peers to ISP A. If these routes are accepted by ISP A (and they typically will be), such errors put ISP B onto the path of outgoing traffic from ISP A to the networks erroneously announced along this peering link.
These two scenarios can happen independently. As shown in our last blog, China Telecom leaked routes to and from Vimpelcom numerous times throughout the year. Most of these incidents involved China Telecom leaking routes it learned from Vimpelcom out to the global Internet (scenario 1); however, on a few occasions, China Telecom also passed a full or partial routing table to Vimpelcom (scenario 2), altering how traffic flowed out of Vimpelcom.
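The mechanics behind both scenarios come down to route preference. A toy sketch, assuming the conventional local-preference policy (customer > peer > transit), shows why a prefix heard from a peer beats the same prefix heard via transit, which is exactly why leaked routes take effect; the ASNs and names are illustrative:

```python
# Toy BGP best-path selection under a typical local-preference policy:
# customer routes > peer routes > transit routes. A route leaked by a
# peer is still a peer route, so it wins over the legitimate transit path.

LOCAL_PREF = {"customer": 300, "peer": 200, "transit": 100}

def best_route(candidates):
    """Pick the route with the highest local-pref, then shortest AS path."""
    return max(candidates,
               key=lambda r: (LOCAL_PREF[r["relation"]], -len(r["as_path"])))

routes = [
    {"via": "transit-A", "relation": "transit", "as_path": [1299, 13238]},
    {"via": "peer-B",    "relation": "peer",    "as_path": [6697, 13238]},  # leaked
]
print(best_route(routes)["via"])   # the leaked peer route wins
```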
Additional recent examples of peering leaks
Yandex is essentially the Russian version of Google. It is the dominant Russian-language search engine and, like Google, Yandex has established a lot of peering links — although, for obvious reasons, with a greater emphasis on the Russian-speaking world. Beltelecom is the incumbent telecom of Belarus and has become a recurring character in this blog (either for globally routing RFC6598 address space or MITM hijacks). Beltelecom and Yandex have a peering relationship, as it makes a lot of sense for eyeball networks (Beltelecom) and content providers (Yandex) to try to save on transit costs by interconnecting. However, for twelve days this year, Beltelecom announced routes it learned from Yandex to its transit provider Telecom Italia (i.e., leak scenario 1 from above).
The AS paths of the impacted routes took the following form:
… 6762 6697 13238 …
This AS path shows that routes from Yandex (AS13238) were shared with its peer Beltelecom (AS6697), which leaked them to Telecom Italia (AS6762), a global Tier 1 provider. Normally, no provider outside of Belarus would use Beltelecom to reach Yandex.
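Leaks of this shape can be spotted mechanically from the AS path: if an AS appears between one of its known peers and an upstream network, it has re-exported a peer-learned route. A sketch using the path above; the peering table here is an assumption for illustration:

```python
# Hypothetical peer-leak check: AS6697 (Beltelecom) appearing between
# its peer AS13238 (Yandex) and an upstream such as AS6762 (Telecom
# Italia) means the peer route was re-exported beyond the peering link.

PEERS_OF = {6697: {13238}}   # assumed peering edges (Beltelecom-Yandex)

def looks_like_peer_leak(as_path):
    """Flag paths where an AS forwards a peer-learned route upstream."""
    for upstream, asn, origin_side in zip(as_path, as_path[1:], as_path[2:]):
        if origin_side in PEERS_OF.get(asn, set()):
            return True   # asn passed its peer's route on to `upstream`
    return False

print(looks_like_peer_leak([6762, 6697, 13238]))   # leaked path from above
```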
The result was that traffic destined for Yandex from customers around the world in Telecom Italia’s downstream cone was misdirected first to Beltelecom. For Yandex’s networks in Russia (which shares a border with Belarus), the impact on latency was minor. However, Yandex has networks outside of Russia (including some in the Netherlands and the United States) and, for those networks, the latency and paths were dramatically altered. For those receiving Yandex routes via Telecom Italia, Beltelecom inserted itself into Yandex-destined traffic from 22 May through 3 June of this year.
Consider the following example traceroute from Brazil to Yandex’s presence in Palo Alto, California before Beltelecom started leaking Yandex routes. The trace illustrates a typical traffic path, namely from Brazil to Miami and then on to New York and finally California.
trace from João Pessoa, Brazil to Yandex-Palo Alto at 05:46 May 20, 2014
2 18.104.22.168 (HostDime.com.br Data Center, João Pessoa, Brazil) 0.259ms
3 22.214.171.124 (SITECNET INFORMÁTICA LTDA, João Pessoa, Brazil) 8.122ms
4 126.96.36.199 188.8.131.52.static.impsat.net.br 54.263ms
5 184.108.40.206 po3-20G.ar2.MIA2.gblx.net (Miami, US) 165.525ms
6 220.127.116.11 xe-0-3-0.mia10.ip4.tinet.net 119.060ms
7 18.104.22.168 (GTT, New York) 179.928ms
8 22.214.171.124 servicenow-gw.ip4.gtt.net (New York, US) 199.109ms
9 126.96.36.199 poker-vlan801.yndx.net (Las Vegas, US) 187.579ms
10 188.8.131.52 (Yandex, Palo Alto, US) 179.933ms
11 184.108.40.206 spider-199-21-99-96.yandex.com (Palo Alto, US) 187.462ms
However, during the leak, traffic from the same server in Brazil to the same Yandex location in California was redirected to Beltelecom in Minsk, Belarus and then on to Yandex in Moscow, after which Yandex took the traffic to California on its internal backbone.
trace from João Pessoa, Brazil to Yandex-Palo Alto at 00:57 May 23, 2014
2 220.127.116.11 (HostDime.com.br Data Center, João Pessoa, Brazil) 0.249ms
3 18.104.22.168 (SITECNET INFORMÁTICA LTDA, João Pessoa, Brazil) 54.175ms
4 22.214.171.124 126.96.36.199.static.impsat.net.br 54.932ms
5 188.8.131.52 ae1-100G.ar4.GRU1.gblx.net (São Paulo, BR) 70.403ms
6 184.108.40.206 telecomitalia2.ar4.GRU1.gblx.net (São Paulo, BR) 54.192ms
7 220.127.116.11 xe-3-3-2.franco71.fra.seabone.net (Frankfurt) 220.925ms
8 18.104.22.168 beltelekom.franco71.fra.seabone.net (Frankfurt) 404.091ms
9 22.214.171.124 ie1.net.belpak.by (Minsk, BY) 252.911ms
10 126.96.36.199 core1.net.belpak.by (Minsk, BY) 254.373ms
11 188.8.131.52 100ge.core.belpak.by (Minsk, BY) 251.801ms
12 184.108.40.206 stat.byfly.by (Minsk, BY) 295.233ms
13 220.127.116.11 ugr-p3-te0-3-0-18.yndx.net (Moscow, RU) 266.851ms
14 18.104.22.168 ugr-p1-be1.yndx.net (Moscow, RU) 266.571ms
15 22.214.171.124 dante-ae3.yndx.net (Moscow, RU) 266.611ms
16 126.96.36.199 panas-xe-0-0-1-984.yndx.net (Moscow, RU) 276.847ms
17 188.8.131.52 (Yandex, Moscow, RU) 309.937ms
18 184.108.40.206 gretchen-xe-1-1-0.yndx.net (Germany) 281.413ms
19 220.127.116.11 ash1-c1-xe-0-0-1-985.yndx.net (Asheville, US) 356.142ms
20 18.104.22.168 whist-vlan801.yndx.net (Las Vegas, US) 356.994ms
21 22.214.171.124 (Yandex, Palo Alto, US) 346.466ms
22 126.96.36.199 spider-199-21-99-96.yandex.com (Palo Alto, US) 356.994ms
Did Yandex finally notice this after nearly two weeks of poor performance due to misdirected traffic? Did Beltelecom ultimately catch the error, after perhaps noticing a surge in traffic along its peering link with Yandex? We may never know the answers to these questions, but we easily quantified the impact to Yandex’s Internet performance using our continuous global measurement and monitoring platform.
Our next example of a peering relationship gone bad concerns Rascom’s leak of Telma’s routes this summer. Telma is the incumbent telecom of the African island nation of Madagascar, and Rascom is a Russian fixed-line operator with a network that extends throughout Europe. While the previous example involved a very common type of peering relationship, between a content producer (Yandex) and a content consumer (Beltelecom), why on earth would a Russian ISP peer with an ISP from Madagascar? How much traffic could possibly be passing between these two networks? Could that traffic really justify a private connection to help reduce transit costs? Probably not. The reason for this arrangement is likely that both entities happen to be present at the London Internet Exchange and decided to peer because … why not?
When a provider from a faraway place like Africa or the Middle East establishes a presence at one of the European IXes, it often will establish peering relationships with anyone and everyone. Once present at an IX, each additional connection carries little marginal cost, so you might as well connect with everybody there in the hopes of reducing your transit costs, if only slightly. But as we’ll see in this example, each of your peers has the potential to screw up and alter the flow of your Internet traffic. In other words, every relationship, no matter how seemingly insignificant, carries real risks. There is no free lunch on the Internet.
Consider the following example of a normal traffic path from New York to Telma in Madagascar, the day before the routing leak. Level 3 carries the traffic to London, where Telma picks it up and takes it first to Paris and then Madagascar.
trace from New York to Telma, Madagascar at 08:42 Aug 11, 2014
2 188.8.131.52 vlan725.car3.NewYork1.Level3.net 0.602ms
3 184.108.40.206 vlan70.csw2.NewYork1.Level3.net 69.191ms
4 220.127.116.11 ae-71-71.ebr1.NewYork1.Level3.net 69.274ms
5 18.104.22.168 ae-41-41.ebr2.London1.Level3.net 70.905ms
6 22.214.171.124 ae-56-221.csw2.London1.Level3.net 69.265ms
7 126.96.36.199 ae-25-52.car5.London1.Level3.net 181.282ms
8 188.8.131.52 TELMA.car5.London1.Level3.net 75.363ms
9 184.108.40.206 mx-480-lon-ae0-0-to-divinetwork.dts.mg 87.635ms
11 220.127.116.11 mx-480-par-ge-0-0-9-to-7710src12-th2.tgn.mg 93.228ms
12 18.104.22.168 mx-10-2-tul-so-1-2-3-to-mx-480-par.tgn.mg 280.859ms
13 22.214.171.124 p-galaxy-lag-10-to-mx-10-2-tul.tgn.mg 294.419ms
16 126.96.36.199 ademalinux.adema.mg (Madagascar) 296.248ms
This next trace illustrates the impact of the routing leak on the path and latency between New York and Madagascar. In this case, Tata takes the traffic to London and then Frankfurt before handing it off to Golden Telecom (Vimpelcom). Golden takes the traffic to Moscow and delivers it to Rascom, who takes it straight back to London (at LINX), handing it off to Telma so it can continue its journey to Madagascar. Wow!
trace from New York to Telma, Madagascar at 12:30 Aug 12, 2014
2 188.8.131.52 ix-11-3-5-0.tcore1.NTO-New-York.as6453.net 0.791ms
3 184.108.40.206 (Tata Communications, London) 85.475ms
4 220.127.116.11 if-2-2.tcore1.L78-London.as6453.net 85.733ms
6 18.104.22.168 if-2-2.tcore1.PVU-Paris.as6453.net 85.654ms
7 22.214.171.124 if-3-2.tcore1.FR0-Frankfurt.as6453.net 85.369ms
8 126.96.36.199 if-7-2.tcore1.FNM-Frankfurt.as6453.net 85.666ms
9 188.8.131.52 if-2-2.thar1.F2C-Frankfurt.as6453.net 85.332ms
10 184.108.40.206 (Tata Communications, Frankfurt, DE) 85.792ms
11 220.127.116.11 cat08.Moscow.gldn.net 130.005ms
12 18.104.22.168 HostLine2-gw.Moscow.gldn.net 131.454ms
13 22.214.171.124 (Rascom, Vyborg, RU) 129.323ms
15 126.96.36.199 ams-equ-cr1-to-stk.rascom.as20764.net 128.012ms
16 188.8.131.52 (London Internet Exchange (LINX)) 126.734ms
17 184.108.40.206 mx-480-lon-ae0-0-to-divinetwork.dts.mg 133.410ms
19 220.127.116.11 mx-480-par-ge-0-0-9-to-7710src12-th2.tgn.mg 153.67ms
20 18.104.22.168 mx-10-2-tul-so-1-2-3-to-mx-480-par.tgn.mg 356.605ms
21 22.214.171.124 p-galaxy-lag-10-to-mx-10-2-tul.tgn.mg 348.541ms
24 126.96.36.199 ademalinux.adema.mg (Madagascar) 350.924ms
We wouldn’t be surprised if the network engineers at both Rascom and Telma were completely unaware of this circuitous routing. This level of monitoring is often overlooked.
China Telecom—National LambdaRail
Although our final example isn’t recent, it is worth mentioning in this discussion. During the big China Telecom routing leak of April 2010 that caused an international stir, it is interesting to note where the bogus routes announced by China Telecom (AS23724) propagated the farthest. Before it ceased operations earlier this year, National LambdaRail (NLR) was a “high-speed national computer network owned and operated by the U.S. research and education community.” NLR also had a peering relationship with China Telecom, the state telecom of China. When NLR received the bogus origination announcements from its Chinese peer, it accepted them and routed traffic to China that was intended for numerous other locations around the world.
This is what can be most pernicious about routes received across peering links. Routes from peers are typically prioritized over routes from providers to avoid transit costs. While many, but certainly not all, transit providers filter the routes they receive from their customers in some manner, it is far less common for peers to do any filtering on the routes they exchange, largely due to the difficulty of determining appropriate routing behavior for an independent entity. These prioritized and unfiltered peer routes have the potential to cause the performance and security problems we’ve outlined here.
In a 2012 paper entitled “A Case Study of the China Telecom Incident”, I assisted the authors by searching and analyzing traceroute data from the iPlane project for examples of traceroutes that were sucked into China Telecom during the routing leak. Since much of iPlane’s data is generated using the networks of universities in the U.S. and many universities had a connection to NLR, there were many U.S. universities that had traffic redirected through China Telecom. As is standard practice, NLR had prioritized routes from its peers—including China Telecom. U.S. universities may have also prioritized routes from NLR over commercial transit links because NLR might have been a subsidized and therefore cheaper option. This provided a vector for those bogus routes to briefly (the entire incident lasted only 18 minutes) redirect traffic through China.
Here is an example traceroute pulled from iPlane data that illustrates the impact of the routing leak. Starting in Norman, Oklahoma, this trace goes out to Internet2 and on to NLR’s routers on the west coast of the US. There the traffic is handed off to China Telecom before returning to the US; it next appears in Cogent’s network in Chicago (ord) before making its way over to Boston.
0 188.8.131.52 (University of Oklahoma, Norman, US) 0.384ms
1 192.168.255.50 (RFC 1918) 0.287ms
2 192.168.255.233 (RFC 1918) 158.051ms
3 184.108.40.206 (OneNet, Oklahoma City, US) 0.364ms
4 220.127.116.11 (OneNet, Oklahoma City, US) 0.875ms
5 18.104.22.168 (OneNet, Oklahoma City, US) 3.025ms
6 22.214.171.124 (OneNet, Oklahoma City, US) 3.057ms
7 126.96.36.199 (OneNet, Oklahoma City, US) 40.005ms
8 188.8.131.52 (Oklahoma Regents, Oklahoma City, US) 18.231ms
9 184.108.40.206 ae-3.210.chic0.tr-cps.internet2.edu 18.699ms
10 220.127.116.11 (National LambdaRail, Los Angeles, US) 71.529ms
11 18.104.22.168 (National LambdaRail, Los Angeles, US) 71.614ms
12 22.214.171.124 (National LambdaRail, Los Angeles, US) 71.606ms
14 126.96.36.199 (China Telecom, Guangzhou, CN) 280.357ms
16 188.8.131.52 te0-7-0-6.ccr21.ord03.atlas.cogentco.com 296.328ms
17 184.108.40.206 te0-0-0-30.ccr22.yyz02.atlas.cogentco.com 294.124ms
18 220.127.116.11 be2242.ccr22.jfk05.atlas.cogentco.com 298.383ms
19 18.104.22.168 te0-7-0-5.ccr22.atl01.atlas.cogentco.com 315.434ms
20 22.214.171.124 te0-2-0-3.ccr22.dca01.atlas.cogentco.com 328.007ms
21 126.96.36.199 te0-7-0-35.ccr21.atl01.atlas.cogentco.com 335.061ms
22 188.8.131.52 te0-1-0-4.ccr22.bos01.atlas.cogentco.com 340.852ms
23 184.108.40.206 vl3808.na01.0.bos01.atlas.cogentco.com 339.829ms
24 220.127.116.11 (TA ASSOCIATES, Boston, US) 341.378ms
Next we provide a few examples of AS-level traceroutes (also based on the iPlane data) from U.S. universities impacted by the China Telecom routing leak, presented in a sequence-alignment style. In each sequence, there is a trace that was redirected through China Telecom (AS4134) by way of NLR (AS11164) on 8 April 2010. To illustrate the normal paths at that time, the errant trace is sandwiched between the AS-level traces seen on the preceding and following days.
AS path for 18.104.22.168 (planetlab2.cs.purdue.edu) to 22.214.171.124 (Advertinet, US)
[04/07/10] 17 ----- ----- 209 12067 (19.18 ms)
[04/08/10] 17 11164 4134 209 12067 (106.47 ms)
[04/09/10] 17 ----- ----- 209 12067 (22.04 ms)
University of California, Santa Cruz
AS path for 126.96.36.199 (planetslug3.cse.ucsc.edu) to 188.8.131.52 (Spartan Stores Inc., US)
[04/07/10] 5739 2152 11164 286 19151 26554 33372 (67.57 ms)
[04/08/10] 5739 2152 11164 4134 3356 26554 33372 (237.99 ms)
[04/09/10] 5739 2152 11164 286 19151 26554 33372 (66.35 ms)
University of Massachusetts
AS path for 184.108.40.206 (planetlab1.cs.umass.edu) to 220.127.116.11 (SCOTTRADE, US)
[04/07/10] 1249 ----- ----- 1239 3561 12221 (33.06 ms)
[04/08/10] 1249 22742 11164 4134 7018 12221 (247.83 ms)
[04/09/10] 1249 ----- ----- 1239 3561 12221 (44.78 ms)
University of Florida
AS path for 18.104.22.168 (planetlab2.acis.ufl.edu) to 22.214.171.124 (Bresnan Communications, LLC., US)
[04/07/10] 6356 ----- ----- 3356 7018 33588 (101.12 ms)
[04/08/10] 6356 11164 4134 174 7018 33588 (280.64 ms)
[04/09/10] 6356 ----- ----- 3356 7018 33588 (101.53 ms)
AS path for 126.96.36.199 (planetlab2.een.orst.edu) to 188.8.131.52 (Secure-Netz, DE)
[04/07/10 00:00:00] 4201 ----- 3701 3356 25074 (222.03 ms)
[04/08/10 00:00:00] 4201 11164 4134 3320 25074 (287.73 ms)
[04/09/10 00:00:00] 4201 ----- 3701 3356 25074 (170.01 ms)
AS path for 184.108.40.206 (planetlab2.eecs.northwestern.edu) to 220.127.116.11 (Copa Airlines, PA)
[04/07/10 00:00:00] 103 22335 3549 11556 26105 28031 (83.89 ms)
[04/08/10 00:00:00] 103 22335 11164 4134 26105 28031 (148.91 ms)
[04/09/10 00:00:00] 103 22335 3549 11556 26105 28031 (84.89 ms)
There were literally thousands more examples like these in the iPlane data.
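The sequence-alignment comparison above can be sketched as a simple diff of each day's AS path against a baseline; the ASNs and paths below are taken from the University of Florida example:

```python
# Sketch of the comparison used above: diff a day's AS path against the
# prior day's baseline and flag newly inserted ASNs. AS4134 (China
# Telecom) appearing via AS11164 (NLR) marks the redirected trace.

def inserted_asns(baseline, observed):
    """Return ASNs present in the observed path but not in the baseline."""
    seen = set(baseline)
    return [asn for asn in observed if asn not in seen]

baseline = [6356, 3356, 7018, 33588]               # 04/07/10 path
leak_day = [6356, 11164, 4134, 174, 7018, 33588]   # 04/08/10 path
print(inserted_asns(baseline, leak_day))           # flags 11164, 4134, 174
```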
In this blog post (and the last one), we don’t want to suggest we are somehow against peering. Peering is an essential feature of Internet connectivity and will continue to be in the future.
The main takeaway is that if your network is going to prioritize routes from a peer over a transit provider, then your network engineers should also take the time to set up appropriate filtering and monitoring of these links to ensure you don’t accept and act on bogus routes. As far as the routes you share with your peers, you need to monitor the paths traffic is taking to reach your network to determine if a peer is leaking your routes. Additionally, if you don’t exchange much traffic with another entity, it may not be worth peering with them just because you are both present at the same Internet exchange point. But if you must be promiscuous when peering, please use protection.
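The filtering half of that advice can be as simple as a per-peer prefix allow-list: accept only the prefixes a peer is expected to originate and drop everything else. A minimal sketch using Python's standard ipaddress module; the prefixes are illustrative placeholders, not real allocations:

```python
import ipaddress

# Minimal sketch of per-peer route filtering: a peer's announcement is
# accepted only if it falls inside that peer's allow-list. The allow-list
# below is a made-up example network, not a real assignment.

EXPECTED = {"peer-yandex": [ipaddress.ip_network("198.51.100.0/22")]}

def accept_announcement(peer, prefix):
    """Permit a peer's announcement only if it sits inside its allow-list."""
    net = ipaddress.ip_network(prefix)
    return any(net.subnet_of(allowed) for allowed in EXPECTED.get(peer, []))

print(accept_announcement("peer-yandex", "198.51.100.0/24"))  # in the list
print(accept_announcement("peer-yandex", "203.0.113.0/24"))   # leaked: drop it
```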
In the first article of the series, reliability and resiliency are covered. We should know that whatever device, link type, or software you choose will eventually fail. Thus designing resilient systems is one of the most critical aspects of IT. I mentioned that one way of providing resiliency is redundancy. If we have redundant […]
He has more than 10 years of experience in IT and has worked on many network design and deployment projects. Orhan works as a freelance network instructor; for training, you can add ‘Orhan Ergun’ on Skype.
In addition, Orhan is a:
Blogger at Network Computing.
Blogger and podcaster at Packet Pushers.
Manager of Google CCDE Group.
On Twitter @OrhanErgunCCDE
The lure of security groups is obvious: if you’re willing to change your network security paradigm, you can stop thinking in subnets and focus on specifying who can exchange what traffic (usually specified as TCP/UDP port#) with whom.
Welcome to the November edition of the Microsoft Patch Tuesday Summary. In this edition there are 14 updates: four are rated "Critical", eight are rated "Important", and two are rated "Moderate". A total of 33 CVEs (Common Vulnerabilities and Exposures) were fixed across the 14 bulletins this month. One of the Critical updates, MS14-064, addresses the Sandworm-related vulnerability CVE-2014-6352, which was seen being exploited in the wild.
Here is a list of the security bulletins rolled out in today's Patch Tuesday release.