On 15 Dec 2015, at approximately 1pm, many internet providers had issues. Level 3, a core provider that interconnects with many other carriers, was affected. The exact cause was uncertain. The outage disrupted web access, email, and caused other scattered issues.
At MAGNA we were inspecting servers and other services, trying to determine the cause. Traceroutes from our Milwaukee data center showed big blank sections (hops that returned no response). Dallas/Fort Worth access seemed untouched.
Rackspace has this posted on their status site:
Potential External Provider Issue | ORD Region
On 15 December 2015, at approximately 12:47 CST, our network engineers became aware of an issue impacting an Internet Service Provider (ISP). This issue is occurring outside of our internal network, and at no point has the internal network been affected. Our data centers are designed to leverage multiple circuits of varying sizes in order to distribute and balance traffic efficiently. Rackspace cannot control the paths customer traffic takes outside of our network, and customers may continue to experience latency, packet loss, or dropped connections impacting traffic traveling over the affected provider.
So you weren’t going crazy, and your IT staff was probably just running around trying to solve it. It was not their problem.
ZDNet has just published an article detailing the problem: “Internet hiccups today? You’re not alone. Here’s why”
Essentially, it comes down to BGP routing tables growing past the 512K route mark; we had an issue like this earlier this year in the US.
While an ISP maintenance activity may have played a factor, the real problem was that Border Gateway Protocol (BGP) routing tables have grown too large for some top-level Internet routers to handle. The result was that these routers could no longer properly handle Internet traffic.
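The failure mode described above can be sketched with a toy model (illustrative only, not real router code): some older routers store the forwarding table in fixed-size TCAM memory, with a common default of 512K IPv4 entries. Once the global BGP table grows past that capacity, routes beyond the limit cannot be installed, and traffic toward those prefixes is dropped. All names and numbers here are assumptions for illustration.

```python
# Toy model (hypothetical, illustrative only) of a fixed-capacity forwarding
# table, mimicking the 512K-entry TCAM default on some older routers.
TCAM_CAPACITY = 512 * 1024  # 512K route entries

class ForwardingTable:
    def __init__(self, capacity):
        self.capacity = capacity
        self.routes = {}  # prefix -> next hop

    def install(self, prefix, next_hop):
        """Install a route; fail silently once the table is full."""
        if len(self.routes) >= self.capacity:
            return False  # table full: this prefix becomes unreachable
        self.routes[prefix] = next_hop
        return True

    def lookup(self, prefix):
        return self.routes.get(prefix)  # None -> traffic is dropped

# Simulate the global BGP table creeping past 512K prefixes.
table = ForwardingTable(TCAM_CAPACITY)
installed = sum(
    table.install(f"10.{(i >> 16) & 255}.{(i >> 8) & 255}.{i & 255}/24", "peer1")
    for i in range(TCAM_CAPACITY + 1000)
)
print(installed)  # only the first 512K routes fit; the final 1000 are dropped
```

The point of the sketch is that nothing "breaks" loudly: routes simply stop being installed, so only traffic to the unlucky prefixes suffers, which matches the scattered, partial symptoms people saw.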
It’s been a wild day.