Theoretically: How could the internet break down?

Didn’t some construction worker (accidentally) sever a backbone line a few years ago?
The net didn’t go down, but it slowed considerably. And IIRC you couldn’t get to some places from other places. (i.e., if you were on one side of the cut, getting to the other side was problematic.)

Not what I’m thinking about (I think):,1282,5385,00.html

I think this is it:


Is it possible to calculate how many people (on average, of course) are using the internet at any one time?
Tens of millions, hundreds of millions…

Well, we’ve got a great attempt going over here. :smiley:

Not even in principle, if you want to differentiate between robots and humans.

You might be able to get every ISP worldwide to send in a headcount of currently used connections, then use some statistical wangling to estimate how many people are actively using those connections at that moment, but the result could easily be off by an order of magnitude or two.

The lesson is that the Internet is decentralized enough (and automated enough) to make this effectively impossible.

(If you were asking about intranets that use TCP/IP and other Internet protocols in addition to the Internet itself, your problem is flatly insoluble.)

E-mail and the Web are far from the only Internet applications. If they both went away tomorrow, the Internet would survive.

I don’t think they’ll go away, myself. They might shrink and become less congested, but they are too useful to just die. Hell, FidoNet is still with us.

FidoNet still exists? Cool! I haven’t even checked on that for a loooong time!
Good to hear.

Properly configured, DNS was designed to die gracefully if all the root servers went down. The Internet wouldn’t die all of a sudden; instead, with no preventative measures taken, what you would find is that over a period of a few weeks, more and more sites would become inaccessible.
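That gradual failure comes from DNS caching: resolvers serve answers from their cache until each record’s time-to-live (TTL) runs out, so names only stop resolving as their TTLs expire. Here’s a minimal sketch of the idea (the hostname, address, and TTL below are all made up):

```python
import time

class CachingResolver:
    """Toy resolver: serves cached answers until each record's TTL expires."""

    def __init__(self):
        self.cache = {}  # name -> (address, expiry timestamp)

    def learn(self, name, address, ttl_seconds):
        # Normally populated by querying upstream/root servers.
        self.cache[name] = (address, time.time() + ttl_seconds)

    def resolve(self, name, now=None):
        now = time.time() if now is None else now
        entry = self.cache.get(name)
        if entry is None:
            return None          # never cached, and no root servers left to ask
        address, expiry = entry
        if now > expiry:
            return None          # TTL ran out: the name "goes dark"
        return address

resolver = CachingResolver()
resolver.learn("example.com", "", ttl_seconds=86400)  # 1-day TTL

print(resolver.resolve("example.com"))                            # ->
print(resolver.resolve("example.com", now=time.time() + 90000))   # -> None (expired)
```

Real TTLs range from minutes to days, which is why the lights would go out over weeks rather than all at once.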

Unless there’s a nuclear war on, it’s very hard to imagine the root servers going down for weeks at a time.

Also, interestingly enough, MIT happens to be one of those selfish class-A owners. We have the entire 18.x.x.x space.

Oh, and Anne Neville, that link is a joke. Voodoo is the “MIT Journal of Humor”.

So, if you theoretically kept a list of all the actual numerical IP addresses of the servers you need to contact, you could still contact them just fine? (Even though there would be no DNS server to convert a site’s domain name into its numerical address?)

Then again, given that the ARPANET (the original Internet) was largely developed by MIT in the first place, who are we to begrudge it them if they decide to keep 0.4% (1/256) of the address space to themselves, for whatever reason?

Yes. Also, you can keep a list of name-to-address mappings of your own, and use that to do the translation.
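A sketch of that idea: the “list of your own” is just a lookup table consulted in place of DNS. The hostnames and addresses here are illustrative (they use the reserved TEST-NET documentation ranges, not real mappings):

```python
# A hand-kept name-to-address table playing the role DNS normally plays.
# Entries are illustrative, not real mappings.
LOCAL_MAP = {
    "www.example.com": "",
    "mail.example.org": "",
}

def resolve(name):
    """Translate a hostname to an IP address using only the local table."""
    try:
        return LOCAL_MAP[name]
    except KeyError:
        raise LookupError(f"no local mapping for {name!r}") from None

print(resolve("www.example.com"))  # -> '' with zero DNS servers involved
```

Anything in the table stays reachable with every DNS server in the world down; anything not in it is simply unknown.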

In Windows, the place to put such mappings is in C:\Windows\system32\drivers\etc\hosts (replace C:\Windows with the location of your Windows system directory). Mappings placed there will be found and used by your computer even if every DNS server in the world is down. The obvious downside is that if the Straight Dope people decide to move their machine to another IP address in the future, your machine will still be trying to use the old address and you’ll be out of luck.

Some people use this file as a quick-and-dirty way to prevent their computer from accessing locations they don’t want it to access. For example, if you were to put a line in your hosts file mapping an ad server’s hostname to, any attempt by a website to display an ad from that server would be redirected to, which happens to be your own machine.
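Such an entry might look like this (the ad-server hostname is purely illustrative; the format is just IP address, whitespace, hostname):

```
# hosts file entry: send a (hypothetical) ad server to your own machine	ads.example-adserver.com
```

Since the loopback address points back at your own computer, the ad request goes nowhere.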

That’s already been thought about, and has been in the works for over ten years. With IPv6, we’ll have the technical ability to abandon all of the Class A, B, C, whatever subnets, as well as NAT (Network Address Translation), which enables however many hundreds or thousands of computers to access the internet from one IP.

I forget how many kazillion unique addresses IPv6 will allow, but it’ll be more than enough for every home PC, network server, VOIP phone, cellphone, PDA, Web-enabled refrigerator and whatever else to have its own address for quite a long time.

Just looked it up - our current IPv4 supports about four billion addresses - 4 x 10[sup]9[/sup]. IPv6 supports about 3.4 x 10[sup]38[/sup] addresses - a number so big, its name is silly - 340 undecillion.
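Those figures follow directly from the address widths: IPv4 addresses are 32 bits and IPv6 addresses are 128 bits, so the counts are just powers of two:

```python
ipv4_addresses = 2 ** 32     # 32-bit addresses
ipv6_addresses = 2 ** 128    # 128-bit addresses

print(ipv4_addresses)            # -> 4294967296 (about 4 x 10^9)
print(f"{ipv6_addresses:.1e}")   # -> 3.4e+38 (about 340 undecillion)
```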

Yup, IPv6 is the “drastic measures” I was thinking about. It’s been available for quite a while now (my provider, XS4ALL, has been offering an experimental IPv6 gateway for several years already), but mainstream adoption is going a lot slower than was originally hoped. I suspect that it won’t really happen until various parties start feeling some actual pain.

I seem to recall a bit of a scare about 3-4 years ago, when it was discovered that an entire IOS (Cisco’s router operating system) release train had been compiled and released with some developer’s backdoor SNMP R/W strings still in place :eek: - does anybody else remember that?

I have seen an enthusiastic engineer do some damage on Germany’s Commercial Internet Exchange (DECIX) by announcing routes with a very low cost to hundreds of BGP peers, some of whom did not (at the time) have filtering in place. Hilarity ensued. (In practice, this is like telling most of Germany’s ISPs that you will carry Internet traffic to anywhere for nothing. Some of the more trusting companies hadn’t configured their routers to turn down such an enticing offer, so they automatically redirected all their customer traffic to the unlucky company in question, obviously swamping the link. A good time was had by all - except possibly a few million Internet users and a couple of dozen engineers.)
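A toy illustration of why an overly attractive announcement is dangerous: a BGP speaker compares route attributes for the same prefix and installs whichever wins, so without import filters every unfiltered peer converges on the bad route. This is only a simplified slice of the real BGP decision process, and all the peers, preferences, and AS numbers below are invented:

```python
# Candidate routes for the same prefix: (advertising peer, local preference, AS path).
# Everything here is invented for illustration.
routes = [
    ("normal-transit", 100, [64500, 64501]),
    ("misconfigured-peer", 500, [64999]),   # "free transit to everywhere!"
]

def best_route(candidates):
    # Simplified BGP tie-breaking: higher local preference wins,
    # then shorter AS path.
    return max(candidates, key=lambda r: (r[1], -len(r[2])))

winner = best_route(routes)
print(winner[0])  # -> 'misconfigured-peer': all customer traffic heads there
```

Import filters exist precisely so that a router refuses such offers instead of happily accepting them.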

Made for a good story over beer and pretzels.

Did somebody try to take down the internet today? :confused: I heard something on CNN about somebody in Sweden doing something to DoD computers, but I can’t log on. Actually, a lot of websites are down (from my perspective), and I was wondering if it was just my ISP.

That Birdmonster sure is a fast learner! :smiley:

To give a bit of perspective: I’m at a university, and all of the internet traffic at the university goes out through one or two lines to the rest of the world. It happens occasionally, then, that the Internet connection to the rest of the world is severed. I can’t get to Google, or anything else anywhere in the world. But if I open my browser and point it to the university’s own webserver, or anything else on campus, I get through without a hitch, without my computer even noticing there’s anything wrong. I’m still using the Internet, just a very small portion of it which happens to be temporarily disconnected from the rest.

If some calamity were to wipe out every piece of electronic equipment outside of Bozeman, there would still be the Internet, in Bozeman. And so long as there exists any connection at all between two points, the Internet connects them. Most points are currently connected by a wide variety of connections, and if most of them were taken down somehow, network traffic would be severely congested, but it could in principle get through, if timeouts were set patiently enough. And there are many, many different ways to send Internet traffic: There’s even a protocol in place for sending Internet traffic via carrier pigeon! (admittedly, that one’s not used much)
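The “any connection at all” point is really a graph-reachability claim: traffic gets through as long as some path exists, however indirect. A small sketch (node names are invented):

```python
from collections import deque

# Toy network as an adjacency map. Node names are invented.
links = {
    "campus": {"isp-a", "isp-b"},
    "isp-a": {"campus", "backbone"},
    "isp-b": {"campus", "backbone"},
    "backbone": {"isp-a", "isp-b", "faraway-server"},
    "faraway-server": {"backbone"},
}

def reachable(src, dst, links):
    """Breadth-first search: is there ANY path from src to dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for neighbor in links.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

print(reachable("campus", "faraway-server", links))  # -> True

# Sever one of the two upstream links: still reachable via the other.
links["isp-a"].discard("campus"); links["campus"].discard("isp-a")
print(reachable("campus", "faraway-server", links))  # -> True
```

Only when every path is cut does the destination actually become unreachable - congestion just makes the surviving paths slow.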

And in my case I can access my college’s site, Straight Dope, the Yahoo frontpage, the BBC’s homepage, and very little else. Is there something about these particular webpages that lets me access them? It’s very strange.

That’s what I was thinking too. Caching and other techniques would tend to keep things from instantly crashing, and would let it fail gracefully.

The only thing that I could see putting a really serious dent in the Internet would be something like an EMP attack on the US, causing really colossal electronics failures and the like.

Still, even at that, the rest of the world’s networks would keep chugging along.

I’m guessing you access those pages often enough that your computer’s internal DNS cache knows the real (IP) address of the machine that serves each page. In that case, it can send information to that machine directly and not bother asking your ISP, which obviously doesn’t know.

If your ISP’s DNS cache has been invalidated, either intentionally or by accident, a very large number of sites would be unreachable until it’s fixed. I hope it gets fixed soon.