Theoretically: How could the internet break down?

I suppose the title explains it all, but I’m curious: theoretically, is it possible that one day we could wake up with one big broken internet? Where no one’s connection worked at all? Where every webpage gave you that pesky 404 error? What would this take? Every server in the world breaking down? A satellite fender-bender of epic proportions? Is it even possible?

Discussing how to take things out is not very kosher in GQ, but all these things are well known and standard discussions in security circles. I’m not privy to any secrets about how these systems are secured or what they might be vulnerable to, so I don’t have any secrets to reveal. The following is just basic network common sense.

We came fairly close a while back when a DoS attack targeted the DNS root servers and took a few of them offline. Of course, there are multiple root servers distributed geographically for exactly this reason, and taking them all out would be quite difficult. If that occurred, the DNS system would break down.

A satellite failure would have very little effect since not much traffic is handled via satellite. However, if you somehow managed to take out a few of the big NAPs like MAE-East and MAE-West where the big backbones tie together, you’d fragment the Internet and users would only be able to access sites on the same backbone. Of course, most backbones have redundant systems and taking out a single NAP would have little effect.

Both of these attacks (DNS system and NAPs) could be either physical or logical. You could attempt to physically disable the machines or you could target them with a software exploit. The former is very hard because of the distributed and redundant nature of the systems. The latter is more likely (but still very unlikely) because most of these systems run similar if not exactly the same hardware and if an exploit were found for, say, a particular high-end Cisco router, a lot of these systems might be vulnerable.

Both the DNS and the NAP attacks hit the network from the top. You can also hit the network from the bottom using virus/worm attacks. A particularly widespread worm could potentially take out a large number of users. The Internet would still exist and work perfectly well, but individual users would be unable to access it because their machines had been corrupted. Some of the big Windows exploits have taken out a not-insignificant number of machines and slowed the rest of the network to a crawl with their traffic.

Trust me: I’m not talking about taking things out. I just learned how to bold text in html.

As an aside, the protocols that the internet runs on (TCP/IP and related acronyms) are in general VERY good at figuring out ways for the system as a whole to be ‘online’ even in the case of massive damage to routers, cables, et cetera. One of my networking textbooks had a story that in the first Gulf war, American generals were marvelling that the Iraqi military network was still functioning even though 90% of its hardware had been destroyed in the bombing.

Turns out that the Iraqis bought network routing hardware from American companies before the invasion of Kuwait. Moral: Don’t sell TCP/IP technology to the bad guys. :smiley:

Not sure if that’s entirely accurate, but it’s a funny story at least.

I believe you. I just brought it up because I’m not too sure myself where the mods would draw the line. IMO, a post like “well, if you park a truck bomb at…” would be inappropriate and probably get the poster sanctioned. On the other hand, saying that the Internet backbones are linked together at NAPs, and if one or more of those fail then the Internet changes dramatically is just basic networking.

If, hypothetically, the United States government were to veer off in a radically authoritarian and compulsory-nutcase religious-extremist direction (to an extent that even I, worry-wart that I am, find massively unlikely, I should add) …and it found the flow of information on the internet to be threatening, and suddenly moved to seize internet main-pipeline property, that could be extremely disruptive, especially if they moved fast.

I don’t see any other intentional party having any chance of doing so, nor of accident or natural disaster doing so either. Even when a significant portion of the US northeast / Canadian southeast from Great Lakes to New England went offline electrically, the internet kept right on ticking. IIRC, people who were able to get on the boards commented on how fast everything was all of a sudden :slight_smile:

Well, except for sites in that blacked out region, of course… :slight_smile:

404 means “page not found on server”. The only way for every webpage to give a 404 error is for every web page on every web server to be deleted or moved while the web servers themselves continue running. That one’s not likely to happen.
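To make that concrete, here’s a toy sketch in Python (standard library only, with a made-up do-nothing server) showing that a 404 requires a live, reachable server. If DNS or the backbones were down, your browser would never get far enough to receive a 404 at all:

```python
import http.client
import http.server
import threading

class EmptyHandler(http.server.BaseHTTPRequestHandler):
    """A server that is up and running but has no documents: every GET 404s."""
    def do_GET(self):
        self.send_error(404, "Not Found")
    def log_message(self, *args):
        pass  # keep the demo quiet

# Bind to an OS-chosen port on localhost and serve in a background thread.
server = http.server.HTTPServer(("127.0.0.1", 0), EmptyHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
conn.request("GET", "/anything.html")
status = conn.getresponse().status
server.shutdown()
print(status)  # 404 -- the server is alive, it just has nothing to serve
```

By contrast, if name resolution failed you’d get a connection error before any HTTP status code existed, which is a different animal from a 404.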

Outside of deliberate sabotage, the most likely scenario I can think of is somebody making a career-ending mistake while maintaining or upgrading either the master DNS database for the root servers, or the master WHOIS database for the core routers. Since these are all being constantly kept in sync with each other, a really spectacular mistake could potentially cause faulty data to spread to all of the DNS servers / routers, and take out the Internet for a while.

This scenario is very unlikely, however, not only because the guys in charge of that stuff presumably have more rigorous procedures than “run this floppy for me, will ya”, but also because the synchronization of those servers is not instantaneous: it can take up to 24 hours for a DNS update or a change in the router tables to spread through the net, and by the time the last one would have become ‘infected’, the first ones would probably have been fixed already. So while a scenario like this would be very noticeable to everybody on Earth, the Internet would still be only crippled and not completely dead.

It’s a little late in the game to ask this, but, hell, if we’re combatting ignorance, let’s combat it. What’s a DNS server? More specifically, what’s that stand for?

DNS = Domain Name System. It’s what translates human-readable addresses like www.straightdope.com to computer-readable IP addresses like 69.20.125.245.
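For the curious, you can watch that translation happen from Python’s standard library. This example resolves “localhost”, which the machine answers by itself, so it works even without a network; real lookups for names like www.straightdope.com go out to DNS servers:

```python
import socket

# Ask the system's resolver to translate a name into an IPv4 address --
# the same job DNS does for names like www.straightdope.com.
ip = socket.gethostbyname("localhost")  # resolved locally, no network needed
print(ip)  # typically 127.0.0.1
```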

Cecil would be proud. My brain grows, ever so slightly.

Well, if I were Cecil, I would have included a few minor insults of your ancestry, the city of San Francisco, and humanity in general. And I would have suggested that next time, you may want to include a sawbuck with your message if you expect a speedy answer. But I’m not, so this’ll have to do.

Right, but an attack might redirect all traffic to a single machine that does not serve any documents, and instead returns the 404 error.

Just out of curiosity–is the Internet more or less vulnerable today than it was when it was created? Have things designed to improve the user experience made it more vulnerable?

I’d say the internet has gotten significantly more secure as time has gone on. Encryption and more robust authentication methods have been retrofitted onto existing protocols, and entirely new “designed to be secure” protocols have been created to replace older ones (e.g., ssh replacing telnet). Firewalls, “demilitarized zones,” proxies, NAT’ing, and other commonly used technologies serve to limit access from workstations to the internet-at-large and vice versa. Computer security has become a new field in IT, and many large companies and just about any ISP worth its salt have a security department now.

As the population on the internet increased, the population of malicious assholes on the internet increased in proportion. For the most part the technology and practices have done a pretty good job keeping up with them, IMHO.

The internet protocols were specifically designed to keep going even if large sections were taken out by bombs or other calamities. For that reason, it was designed to be as de-centralized and nonhierarchical as possible: whenever a router is taken out, the other routers will detect that their brother has gone missing and will automatically re-arrange traffic through the parts of the network that are still up.
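That re-arranging is, at heart, a path search over whatever links are still alive. Here’s a little made-up five-router network in Python to show the idea; the real routing protocols (OSPF, BGP, and friends) are far more involved, but the principle is the same:

```python
from collections import deque

# A toy network: routers A..E and their links (made-up topology).
links = {
    "A": {"B", "C"},
    "B": {"A", "D"},
    "C": {"A", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def find_path(links, src, dst, dead=frozenset()):
    """Breadth-first search for a path from src to dst, skipping failed routers."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - dead - seen:
            seen.add(nxt)
            queue.append(path + [nxt])
    return None  # network is partitioned: no route exists

print(find_path(links, "A", "E"))              # a route via B or C
print(find_path(links, "A", "E", dead={"B"}))  # router B died: traffic reroutes via C
```

Knock out router B and traffic to E simply flows through C instead; only if every path is severed does the destination become unreachable.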

This resiliency is still with us today. However, there are two factors which may make the current Internet somewhat more vulnerable than the original ARPANET was.

First of all, the DNS system. Originally, when the Net consisted of just a few hundred hosts, every host kept a list of the name-to-IP mappings of all the other hosts, and those lists were synchronized more or less manually. Nowadays, we use DNS, which is de-centralized but hierarchical: take out the central root servers and it’s going to be awfully difficult to get around on the Net, even though all the hosts are still up and still able to communicate with each other.
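A toy model of that hierarchy, with made-up server names: the resolver asks a root server, which delegates to a .com server, which delegates to the domain’s own server. Delete the `root` table and resolution dies at the very first step, even though the target host is still up:

```python
# Grossly simplified sketch of DNS delegation. All server names are invented;
# the IP is the one mentioned earlier in the thread.
root = {"com": "ns.com-tld"}                                  # root servers know the TLDs
tlds = {"ns.com-tld": {"straightdope.com": "ns.straightdope"}}  # TLD servers know the domains
auth = {"ns.straightdope": {"www.straightdope.com": "69.20.125.245"}}  # domains know their hosts

def resolve(name):
    tld_server = root[name.rsplit(".", 1)[-1]]                 # step 1: ask a root server
    domain_server = tlds[tld_server][name.split(".", 1)[-1]]   # step 2: ask the TLD server
    return auth[domain_server][name]                           # step 3: ask the domain's server

print(resolve("www.straightdope.com"))  # 69.20.125.245
```

No root table, no step 1, no lookup; the host at 69.20.125.245 would still answer if you typed the number by hand, which is exactly the failure mode described above.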

Secondly, the current Internet topology is a bit more centralized than originally envisioned; there are a couple of large backbones which carry most of the traffic, and if those were taken out the rest of the Net would get awfully congested. Think cars and highways: theoretically you could probably reach any place in the US from any other place without ever using a major highway, but if everybody tried to do that at the same time, you’d have the mother of all traffic jams.

But apart from those two points, both rather theoretical, the Internet has stood up to the test of time better than its designers could have ever hoped. And I agree with Metacom that security has been significantly improved over time, so in practice the net may well be more secure today than it was then.

A third problem is that we’re rapidly running out of available IP addresses. Unlike the above two issues, this one is not theoretical at all, but it’s going to be a “things get worse and worse until we are forced to take drastic measures” thing, rather than a “sudden BOOM” thing.

There is also a possible physical problem that might cause some serious issues for the internet. I used to work back east for AOL. The block I worked on had AOL and 2 other companies (Network Solutions and someone else IIRC), and an insane amount of US internet traffic went through that block. If someone were to somehow disable internet traffic on that block, it would have caused some serious issues. The traffic would have been routed around the area, but if that block went down it is possible that the re-routing would cause the loads on other machines to get too large and basically amount to a DoS attack. IIRC, there was also an issue with the backbones going to the Orient; there wasn’t sufficient backup. Take one of those out and there is a problem over there.

This would be a short-term problem, a couple of days most likely. And at the same time, things have changed since I worked back there; more pipe is always being laid, so this is probably no longer an issue.

Slee

Shoot, didn’t anyone see Independence Day? Just have Will Smith and Jeff Goldblum plant one of those nasty viruses somewhere on the net! If that can defeat the network of a superior race, it can bring down our puny Internet.

Yes, and no. The real problem is that huge blocks of IP addresses (of order 16 million) were allocated early on to companies and universities that don’t really need that many IP addresses.

Cite #1

Cite #2

A class-A network could have more than 16 million hosts on it. The ones that exist don’t seem to have anywhere near that.
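The arithmetic behind that 16 million figure: a classful “class A” block fixes the first 8 of the 32 address bits as the network part, leaving 24 bits for hosts.

```python
# Address math for a classful "class A" network.
host_bits = 32 - 8                  # 8 network bits, 24 host bits
addresses = 2 ** host_bits          # total addresses in the block
usable_hosts = addresses - 2        # minus the network and broadcast addresses
print(addresses, usable_hosts)      # 16777216 16777214
```

Which is why handing an entire class A to a single company or university that uses a tiny fraction of it is so wasteful.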

Interestingly enough, MIT is having the same problem on a smaller scale.

As far as “bringing down the Internet” goes, I’m more worried about people turning away from e-mail and the Web en masse because of the spam and spyware issues. I love my e-mail (asynchronous communication without having to write by hand or find envelopes, stamps, pens that work, etc) and Web (can find out just about anything, anytime, anywhere). I’m hoping the convenience factor outweighs sifting through spam and avoiding spyware sites for most people. I’m not too worried, though.