Will broadband have to be rationed in the future?

In most developing countries, electricity is rationed. Some call it load shedding: the electricity is switched off by the authorities for a few hours at a time.

Will broadband ever suffer the same fate? If not, why not?

I.e., the electricity is on but the broadband isn’t, due to rationing.

It is already rationed, by the amount you’re willing to pay.

Wireless communications are already rationed by the FCC (or by equivalent agencies in other countries), and will probably continue to be so forever. There’s a finite supply of radio bandwidth (used in the original sense of that word), and while we can use it more efficiently, there’s a limit to that, too.

With wired communications, though (copper, fiber-optic, or whatever), if you need more data, you just run more wires. It’s possible that there are some underdeveloped nations where this isn’t done, and which need to ration it, but those are probably about the same places that are rationing electricity. In the first world, though, it shouldn’t be necessary.

Given the current capacity of fiber, the advances in various multiplexing schemes, and the inevitable advances still to come, I’d say we’re going to be fine for a while.

Broadband (aka Internet access) is closer in concept to the wires that bring you the power than to the power itself. So to a first approximation, no.

But it is more complex than that. As the famous quote has it, the Internet does in many ways function like a lot of tubes. (The Intertubes :smiley: ) In this way it is a lot more like the water supply. Your ability to get water out of the water supply is restricted by the diameter of the pipe coming from the water main, but only up to a point. If you have a pipe to the main that is as big as the main, or even bigger, you won’t get any additional water flow; you will instead be restricted by the diameter of the pipes feeding your part of the supply network.
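
To put a toy number on that “narrowest pipe” idea (the link speeds here are made up, purely to illustrate):

```python
# Toy illustration of the point above: end-to-end throughput is limited by
# the narrowest "pipe" in the path, not by your own connection.
# All link speeds here are invented for the example.

def bottleneck(path_capacities_mbps):
    """The usable throughput of a path is its slowest link."""
    return min(path_capacities_mbps)

# Your drop, the street cabinet uplink, the ISP core, the far server's link:
path = [1000, 400, 10000, 200]
print(bottleneck(path))  # -> 200: a fatter pipe to the main doesn't help here
```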

Similarly, your broadband access is already restricted. You share access to the Internet with all the other people in the street. Just as everyone in the street turning on their taps at once would leave the water pressure low and the flow poor, if everyone in your street tried to stream movies at once, they would discover that the bandwidth was not as good. This comes about partly because the connections in the street interfere with one another, and partly because the upstream feed does not have the capacity to serve every single downstream connection at full speed. Telcos and ISPs try to balance things so that (just like the water supply) they don’t have to provision the network for the full sum of all possible simultaneous use.
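
Rough, invented numbers, but the contention arithmetic works out something like this:

```python
# Rough sketch of the oversubscription arithmetic. The figures (500 homes,
# 100 Mbit/s plans, a 10 Gbit/s shared uplink) are invented for illustration.

homes = 500
plan_mbps = 100          # what each household is sold
uplink_mbps = 10_000     # shared capacity feeding the street/node

contention_ratio = (homes * plan_mbps) / uplink_mbps
print(f"Oversubscription: {contention_ratio:.0f}:1")         # 5:1

# If every home streamed flat-out at the same instant, each would get:
worst_case_mbps = uplink_mbps / homes
print(f"Worst case per home: {worst_case_mbps:.0f} Mbit/s")  # 20 Mbit/s
```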

Then of course, the big difference between the Internet, water, and electricity is that data does not come from a small number of very large suppliers. It isn’t like a few reservoirs or a few power plants; there are millions of sources. Some are very large and have entire data centres devoted to providing capacity; some are tiny and may be served by something as small as a mini-PC in someone’s home. In the new-fangled Internet of Things, you might be accessing a multitude of tiny devices no more complex than a light bulb. If lots of people try to access the same supplier of content at once, that supplier can become overwhelmed (e.g. the Slashdot effect).

In the middle of this, your ISP and their upstream suppliers (the major long-haul data carriers) all try to balance the ability to supply data against the cost of building bandwidth. One thing that has allowed them to keep pace with the exponential growth of the Internet is that fibre-optic cables are not the limiting factor on bandwidth. They are like having infinitely wide pipes in the ground; the only thing you have to do is work out how to pump water down them. So as electro-optical technology improves, all the carriers need to do is swap out the equipment at each end of a fibre backbone and get ever more capacity out of the same length of optical fibre. The fibre cost a lot of money to lay, especially all those trans-oceanic links, so they get a massive boost in capacity for relatively little money (relatively: a termination unit can still cost six to seven figures).
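
A back-of-the-envelope sketch of why that end-equipment swap is such a win (the generations and rates below are ballpark figures, not any particular carrier’s history):

```python
# Capacity of a fibre pair is (wavelength channels) x (rate per channel),
# and both have grown with each generation of end equipment. These are
# ballpark figures, not any particular vendor's product line.

generations = [
    ("early single-channel",  1, 2.5),   # 1 wavelength at 2.5 Gbit/s
    ("early DWDM",           40, 10),    # 40 wavelengths at 10 Gbit/s
    ("coherent DWDM",        80, 400),   # 80 wavelengths at 400 Gbit/s
]

for name, channels, gbps_per_channel in generations:
    total = channels * gbps_per_channel
    print(f"{name:>22}: {total:>9,.1f} Gbit/s over the same fibre pair")
```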

So, rationing in the form of actual loss of connectivity isn’t what you will (or do) see. Now and into the future, what you will see is a lowering of performance as different parts of the network become saturated. Saturation can occur almost anywhere in the end-to-end link. It is much more like the water pressure going bad than the electricity going off. (The electricity goes off because it isn’t possible to simply reduce the pressure, aka the voltage: whilst your lights would just go dim, most other equipment can’t function unless it receives its designed voltage and power. So the power is shut off in rotation to ration it. Electricity supply is unusual in this respect.)

Not exactly. Hence the large contingent of terabyte/month users who pitch a giant fit every time Comcast or AT&T talks about capping data use at something near the median usage.

This. There was a similar non-crisis in the early 1960s, when projections showed that telephone line assignments and traffic were accelerating at an exponential rate and we were going to run out of copper well before demand was met - by some calculations, there wasn’t enough mineable copper on earth to handle the number of trunks and lines that would be demanded by 1970 or so.

Then they learned how to multiplex, going from something like four conversations per wiring pair to 16, and then larger multiples, until by 1970 the headroom for more expansion was something like 1000:1. Without melting down all the pennies for wire.

Actually, they are, in the same ways and for the same reason that copper or radio are. But the bandwidth limit for fiber is much, much greater than for copper. And even if you do somehow manage to reach it, you can just put down another fiber next to the first one.

Or, in many cases, start using the unused fiber laid down at the same time as the first fiber but never used up to now. There’s a lot of excess fiber capacity in most trunk routes. Because while you’ve got the trench open, you may as well install 2 or 3 or a dozen fibers, because the overwhelming majority of the cost is the ditch.

My small town just replaced an unbelievably antiquated IT infrastructure (copper + Comcast drops to every building - and that’s the GOOD part) with a fiber ring and runs. They needed one line. For convenience and separation of town and school traffic (separately funded by jealous entities), they used four to create a redundant fail-back network. They laid 96-strand on the main runs and 64-strand on the secondaries… because it cost no more to do so. I think that’s typical of most (sensible) installations of the last ten years. Even the comm industry can learn. :slight_smile:

True, but we are still a long way from reaching the sides of the pipe. Of course Shannon provides an absolute maximum, but between the optical bandwidth and the very good signal-to-noise ratio, we are doing ridiculously well. We are only just beginning to properly exploit multiple wavelength modes.
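
For anyone who wants to play with it, Shannon’s formula is simple enough to plug numbers into; the bandwidth and SNR below are illustrative guesses, not measurements:

```python
import math

# Shannon's limit, just to show the scale involved. C = B * log2(1 + S/N).
# The bandwidth and SNR below are illustrative guesses for a single fibre
# band (roughly the C band), not measured values.

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

optical_band_hz = 5e12           # ~5 THz of usable optical bandwidth
snr_linear = 10 ** (20 / 10)     # 20 dB signal-to-noise ratio

capacity = shannon_capacity_bps(optical_band_hz, snr_linear)
print(f"~{capacity / 1e12:.0f} Tbit/s per mode/polarisation")
```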

The bigger question is why do they do that in developing countries, and not in the US?

The whole thing in the US is predicated on the idea that paying customers will go elsewhere if their power or internet is shut off. Or, in the case of regulated monopolies, will raise holy hell with their elected officials until the monopolists fall into line, lest they be replaced by a different company.

Broadband internet is similar, except typically even less regulated than electricity.

My ISP just sent me a notice that they will soon be offering 1 gigabit internet speed! (They currently offer up to 25 megabit speed.)

In the US, bandwidth limits have nothing to do with technology and everything to do with politics. The amount of bandwidth available to you is inversely related to the amount of lobbying money your local cable/telecom companies are willing to pay to lock you in at slower speeds. That’s about it.

While there are certainly theoretical limitations to the physical technology, we are very, very far from them. Connections are slow and throttled because Comcast pays good money to lobby to ensure they stay that way, and to limit potential competitors, thereby maximizing shareholder returns and minimizing infrastructure expansion costs.

It’s not really like electricity or water, in that there isn’t a “reservoir” of internet data that can be depleted. It’s more like the highway system: ISPs and telecoms just build the “roads” of the internet, but the actual data, like traffic, is supplied by other people using that already-built infrastructure. There are some bottlenecks in the system used for routing and traffic control, much like onramps and merges, but at the end of the day it typically doesn’t cost them any more to operate at 90% capacity vs 5%. If the average capacity gets near 100%, they do have to build more infrastructure (or at least open up more of the pre-built stuff), but that isn’t a severe challenge for them, judging by how easily they can match Google Fiber’s prices in the areas Google actually operates. Most of the cost is legislative and anti-competitive on purpose, not construction-related (having to do with our nation’s history with telecom and regulated monopolies).

TL;DR: Too much congestion eventually requires additional bandwidth-handling infrastructure, but long before that happens, cable and internet companies will spend money lobbying to make sure no competitors arise.

If you want to see how societies can have much faster internet, look at Japan or any of several successful municipal fiber projects in the USA when local populations overthrow the telecom tyrants.

Which countries, that you know of, purposely turn off electricity?
Here in Peru, and in the six Latin American countries I’ve been to, we either get 24-hour electricity, or, in small towns where the power comes from diesel generators, the electricity is switched off at times simply because it doesn’t make sense to keep the generators running when actual use is tiny.

One construction company I’m aware of would run an extra 768-fiber cable in any conduits they built. Yup - 768. Not a dozen. One of the reasons I know about this is that there are a couple of un-terminated 768s of theirs in the next cage over from mine at a facility, and they’re constantly oozing icky-pic (think of bathtub caulk that never hardens) onto the floor and making a mess.

But it depends on what you consider a “trunk route” - I have dark fiber (from companies other than the one I mention above) between various locations, and even between buildings in 2 nearby major cities there is a reasonably long lead time (normally a couple of months) as they go out and fusion splice (a fancy way of saying “melt the ends and stick 'em together in a precise manner”) fibers to build the path. In other words, just because company X has fiber in city A and city B, those fibers are not necessarily connected together where you want them to be. And even then the path may not be direct - one path I have between NY and NJ makes a pointless (to me) U-turn out through a manhole, about 10 miles south and the same ten miles back north into the same manhole, because they didn’t want to put a new splice closure in that manhole.

The same 2 fibers that I had carrying 100Mbit/sec a decade ago are now capable of carrying 400Gbit/sec today. I’m not using all of that because I don’t need it. This picture shows one such system in test at my house. On the far right, 2 fibers come out of one of the black boxes (“DWDM muxes”), go up to the top where intentional signal loss (“attenuators”) is added to simulate a 60km distance, and then into the second black box. The 40 pairs of connectors on each black box are 40 “channels” that all get combined onto the single pair between the boxes. As long as it is hooked up to the correct channel, any sort of data goes in one end and comes out the other. The extra pair of connectors on the right side of each box is for diagnostic monitoring of the composite signal - I can plug an analyzer in there without affecting traffic and it will tell me signal levels on each of the 40 channels. Shortly after this testing was completed, the equipment was installed as an upgrade on one of our NY / NJ links.
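
For anyone curious about the arithmetic behind that test (0.2 dB/km is a typical figure for single-mode fibre, not necessarily the exact pads used):

```python
# What mostly matters to the receiver is total optical loss, and standard
# single-mode fibre loses roughly 0.2 dB per km, so a fixed attenuator can
# stand in for the distance on the bench. Typical figures, not exact values.

fibre_loss_db_per_km = 0.2
distance_km = 60
print(f"Attenuation to simulate {distance_km} km: "
      f"{fibre_loss_db_per_km * distance_km:.0f} dB")         # ~12 dB

# And the headline capacity: 40 DWDM channels at 10 Gbit/s each
channels = 40
per_channel_gbps = 10
print(f"Mux capacity: {channels * per_channel_gbps} Gbit/s")  # 400 Gbit/s
```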

Different ISPs have different policies on overcommitting service. As a company primarily serving business customers at 1GbE and 10GbE speeds, I guarantee each customer will get the capacity they’re paying for, regardless of what my other customers are doing. This does require careful monitoring tied to automated alarm systems to let us know when we need to either provision more capacity to that node, or tell the customer “Do you know 85% of your bandwidth is Netflix traffic?” Here is an anonymized sample 1GbE customer’s weekly traffic. As you can see, they have a good bit of headroom before anything needs to be done (which would normally be the customer ordering either a 10GbE upgrade to replace their 1GbE link, or one or more additional 1GbE links if that was more cost-effective for them).
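
The monitoring logic itself is nothing exotic; a stripped-down sketch (with invented names and thresholds) looks roughly like this:

```python
# Stripped-down sketch of the kind of automated check described above:
# compare a customer port's utilisation against its provisioned rate and
# flag it when it stays high. Names and thresholds here are invented.

ALARM_THRESHOLD = 0.85   # alert when sustained usage exceeds 85% of the link

def check_port(samples_mbps, provisioned_mbps):
    """samples_mbps: recent utilisation samples, e.g. 5-minute averages."""
    avg = sum(samples_mbps) / len(samples_mbps)
    peak = max(samples_mbps)
    if avg / provisioned_mbps > ALARM_THRESHOLD:
        return f"ALARM: averaging {avg:.0f} Mbit/s on a {provisioned_mbps} Mbit/s port"
    return f"OK: avg {avg:.0f} / peak {peak:.0f} Mbit/s of {provisioned_mbps} Mbit/s"

print(check_port([120, 240, 310, 180, 90], provisioned_mbps=1000))
```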

Venezuela is one, per this New Yorker story.