Can someone explain bandwidth to me?

I can watch Late Night on my TV, with the signal coming through cable. I go to the Late Night website and try to watch a video clip, and even though that signal is also coming through cable, between “buffering” and “net congestion” it takes a good five minutes to watch a 30 second clip. My admittedly ignorant feeling is that a signal is a signal. If a television network can send full motion video to millions of viewers simultaneously, why can’t the same signal move at the same speed over the internet? The answer usually mentions bandwidth, but what exactly does that mean? Thanks.

Bandwidth, in the simplest terms, is how big a pipe you have to push stuff through.

The more bandwidth, the more data that can be sent in a shorter period of time. A 56k modem connection has relatively small bandwidth, whereas a cable connection has approximately 10x - 30x the bandwidth.

“bandwidth”, as you are hearing it used, generically refers to how much information can be crammed onto a transmission medium per unit time. It tends to be a rather loosely applied term these days.

Key point - the television network is sending the SAME THING to millions of viewers simultaneously, i.e. there’s only one copy of the program on the cable, being shared by everybody, i.e. fixed “bandwidth”. Whereas the video you are downloading is your personal copy, which has to share the pipe with the personal copy of something else I may be downloading at the same time, i.e. they share the available “bandwidth”. You are undoubtedly going to get a lot of, IMO, overly technical answers to this concerning exactly how the internet and TV signals are carried on the same cable. I’m trying to keep it conceptually simple deliberately.

Yabob, I have a different question.
To my simple mind, in cmburns’s case, the signal is already in the cable. It comes to his house and, through the splitter, goes to the TV set. Why can’t it go to the computer? I can receive TV signals (via a separate cable, though) into my PC (I can watch TV on the screen). Why not via the Internet? Are the signals physically different? (I thought they were all digital now.)

It could be sent over the internet as streaming video or a file, but it sounds like cmburns’s limiting factor is his internet access method. Video uses a lot of bandwidth. Your cable operator is probably sending the show at 6 Mbps. If you are trying to download that stream at 56 kbps, it could take a while. To improve the download time, he would need a faster connection: cable modem, xDSL…

The TV transmission is probably analog, but you would still need a TV tuner or a set capable of receiving whatever type of transmission is being sent. There are monetary reasons that keep TV channels from putting their stations on the web. There are legal and regulatory reasons that make it hard to put other people’s TV stations on the web. As stated above, there are technical issues, too. Most people don’t have high-speed access to the web, so what good does it do to put a TV channel out there right now when few people can get it? (There are a lot of people experimenting with individual shows, though.)

I’ll see yabob’s answer, and raise him this:

A digital video signal contains lots of information. 640x480x24bpp at 30 frames per second is simply a huge amount of data. Even assuming 100:1 compression, that’s still about 2.2 megabits per second, and your cable modem is probably only capable of 500 kilobits per second (1/2 megabit). The solution is to use a smaller picture, lower quality, or fewer frames per second.
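If you want to sanity-check those figures, the arithmetic is only a few lines of Python (the resolution, bit depth, frame rate, and 100:1 compression ratio are just the numbers quoted above):

```python
# Back-of-the-envelope check of the video bitrate figures quoted above.
width, height = 640, 480        # pixels
bits_per_pixel = 24             # 24bpp color
frames_per_second = 30

raw_bps = width * height * bits_per_pixel * frames_per_second
compressed_bps = raw_bps / 100  # assume 100:1 compression

print(f"raw:        {raw_bps / 1e6:.1f} megabits/sec")         # ~221.2
print(f"compressed: {compressed_bps / 1e6:.1f} megabits/sec")  # ~2.2
```

Even after 100:1 compression, ~2.2 Mbps is still several times the ~500 kbps a cable modem can deliver, which is why the picture has to shrink.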

Encoding this digital signal in a form that can be sent over the analog cable also increases the amount of data that needs to be sent, and so does breaking the signal into packets that can be sent across the Internet.

One way to visualize bandwidth on a network is to think of a factory with an assembly line. Raw materials go in one end; finished products come out the other. The “bandwidth” of the factory is how many products it can make in a certain amount of time.

Notice that you can have more than one product being made at a time - each step on the line can be working on one partially-finished product. Let’s say you have six steps and each one takes ten minutes. Once you get the line up and running, you’ll have one product coming out every ten minutes, so the bandwidth is 6 products per hour.

Another measure of network speed is latency, which is the amount of time it takes for data to get from one place to another. The latency of this factory is one hour, since when a set of raw materials goes in one end of the line, it won’t come out as a finished product for 6 * 10 minutes.

Or, perhaps a simpler analogy: Latency is the length of the pipe from your water heater to your faucet (how long it takes for hot water to start coming out); bandwidth is the size of the pipe (how much water you can get).
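If it helps, the factory numbers above work out like this (a trivial sketch, using the same six stages at ten minutes each):

```python
# Bandwidth vs. latency, using the factory analogy from the post above:
# latency is one item's full trip through the line; throughput (bandwidth)
# is how often finished items come out once the pipeline is full.
stages = 6
minutes_per_stage = 10

latency_minutes = stages * minutes_per_stage  # one item, end to end
throughput_per_hour = 60 / minutes_per_stage  # steady-state output rate

print(f"latency:    {latency_minutes} minutes")                # 60 minutes
print(f"throughput: {throughput_per_hour:.0f} products/hour")  # 6 per hour
```

Note that the two numbers are independent: doubling the stages doubles the latency but leaves the throughput alone, just as a longer hot-water pipe delays the water without changing how much flows.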

Sounds like cmburns already has a cable modem, along with his cable TV.

I think the confusion lies in the fact that the cable box decodes a signal (analog I believe) while RealAudio Player, or whatever, has to receive and decode a bitstream.

Two totally independent and unique “signals” which both happen to be delivered via the same pipe…er, cable.

I’ll give this a try from a much more concrete perspective, without analogies…

Let’s say you have a wire which you want to use to send data over. Presume your system is a very rudimentary one, and you have two choices of what data to send over the wire: zero or one. So you choose this protocol: once per second, you will either put 5 volts on the wire, indicating “one”, or you will put -5 volts on the wire, indicating “zero”. Once per second, the person on the other end of the wire will measure the voltage and determine what “bit” of information you sent. If they measure 4.79 volts, they read that as a 1. If they measure -4.3 volts, they read that as a 0.

Using this method, you can send one bit of information every second, and send messages across. This is, however, very slow. So you get some faster hardware and increase the sampling rate to 1000 times per second. This gets you 1000 bits per second, written as 1kbps (1 kilobit per second). By doing this, you’ve increased your “bandwidth” 1000 times.
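For the curious, the decoding rule described above (a reading near +5V is a 1, near -5V is a 0) boils down to a simple threshold test. A toy sketch in Python, with made-up voltage readings:

```python
# Toy version of the +5V / -5V signaling scheme: sample the (noisy) voltage
# once per bit interval and read anything above 0V as a 1, below as a 0.
# The voltage values here are invented for illustration.
samples = [4.79, -4.3, 5.0, -5.0, 3.9, -2.1]  # one reading per bit time

bits = [1 if v > 0 else 0 for v in samples]
print(bits)  # [1, 0, 1, 0, 1, 0]
```

The threshold is what makes the scheme robust: 4.79V and 3.9V both decode to the same clean 1, so small amounts of noise on the wire don't corrupt the data.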

Note that you’re sending digital data (ones and zeros) across this wire, but you’re using an analog signal (a voltage which can vary continuously from -5v to 5v) to do it. The analog signal is simply interpreted as digital, based on its value.

You can also send an analog signal such as your TV program on this wire. The only difference is that instead of the analog signal being a representation of digital data, it’s a representation of the varying color levels (and sounds) on the screen, which is analog data.

In fact, through the wonders of modulation (which I won’t go into), you can send many such streams of data, digital or analog, on the same cable, each on a different frequency band.

A frequency band will be described as a range of frequencies, such as 343.25 MHz - 349.25 MHz. US TV channels are allocated in 6 MHz bands. We say that the “width” of the band is 6MHz. Given a frequency band of a certain width, a specific type of signaling technology has a maximum amount of data it can send. Current technology dictates that a standard TV channel can carry about 45 megabits per second of data. As a result, people equate “bandwidth” with “data-carrying capacity”.

When a show is on TV, it’s on everyone’s TV (in that area). It takes up one TV-show-sized chunk of bandwidth, which everyone can look at. When your computer talks to the internet, the “bandwidth” you use is allocated just to you. Sending it to your neighbor doesn’t help, because he’s looking at a different website right now. So if you wanted to download a TV-show-sized chunk of data from a web site, you’d need the equivalent of a TV-channel’s worth of bandwidth allocated just to you. If everyone in your neighborhood did this instead of tuning into the broadcast, the network would get clogged up in no time.
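The broadcast-versus-personal-copy math is easy to see with rough numbers (the 2.2 Mbps stream figure is borrowed from the earlier post; the viewer count is made up):

```python
# Broadcast vs. per-viewer ("unicast") delivery of the same program.
# A broadcast occupies one channel's worth of bandwidth no matter how many
# people tune in; unicast cost scales with the size of the audience.
stream_mbps = 2.2    # one compressed video stream (figure from earlier post)
viewers = 1000       # hypothetical neighborhood audience

broadcast_mbps = stream_mbps           # everyone shares the same copy
unicast_mbps = stream_mbps * viewers   # each viewer gets a private copy

print(f"broadcast: {broadcast_mbps} Mbps total")
print(f"unicast:   {unicast_mbps:.0f} Mbps total")
```

One copy of the show versus a thousand copies of the show: that thousand-fold difference is exactly why the network clogs when everyone downloads instead of tuning in.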

To throw my own dumb analogy into the ring, it’s kind of like the difference between holding a meeting to tell a bunch of people the same thing at once, versus telling them individually and wasting a bunch of your time. If one of them comes up and asks you what you have to say, you’ll respond to them individually, just like the web site would, but you’d prefer to just broadcast it.

Computer networks have the concept of multicast transmissions, which allow bandwidth to be shared in much the same way that TV broadcasts work, but it turns out that it’s not as useful as you’d think, since the on-demand nature of browsing the web means that if you click on the video clip to start watching, and your neighbor clicks on it 30 seconds later, you’re not watching it at the same time anyway, and the bandwidth can’t be saved.

As an aside, digital TV is simply the same 6MHz bands of data, carried on the same channel slots as analog cable, but if you tried to tune it on your TV, you’d just see snow. This is because it’s really digital MPEG data encoded on that analog waveform, and you need a special receiver to interpret it. In addition, the MPEG streams can be compressed at whatever quality level your cable operator deems appropriate, allowing the cable company to smash several programs into one 6MHz band by dropping the quality levels. They’d have you believe that “digital TV = higher quality”, but the true story is “digital TV = more channels = more ad money for Mr. Cable Operator”.

peace - if I understand your question, yes, the signals are different. For one thing, the job you are doing to deliver cable TV is simply to provide a constant input on several channels, digital or not. Once you figure out how you are going to divvy up the available bandwidth on the cable to carry the channels, your job is essentially done - to watch TV, you just separate out that channel and play it. You can play it on your TV or on your computer with the appropriate card.

Whereas the internet on cable is a two way street - you ask for something, and you get it. And YOU get it, not your neighbor, who’s connected to the same pipe. This means you have to figure out ways for the data to flow both ways, and to tag the information as being headed for your house, not your neighbor’s. This is a lot more complex. It’s accomplished with things called protocols, which define how all this is determined, with the data sent in blocks called “packets”. To send out requests and obtain the data, you have to use the right protocols.
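To make the “tagged for your house” idea concrete, here’s a rough sketch of what one packet conceptually carries. The field names are simplified for illustration - this is not the real IP header layout:

```python
# Conceptual sketch of a packet: a chunk of data tagged with where it came
# from, where it's going, and where it fits in the overall stream.
from dataclasses import dataclass

@dataclass
class Packet:
    source: str        # who sent it
    destination: str   # whose house it's headed for
    sequence: int      # where this chunk fits in the stream
    payload: bytes     # a piece of the clip itself

p = Packet(source="video-server", destination="your-cable-modem",
           sequence=0, payload=b"first chunk of the clip")
print(p.destination)  # the network uses this tag to route the packet to YOU
```

Your neighbor’s packets carry a different destination tag, which is how two private downloads can share one physical cable without getting mixed up.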

The “information highway” is actually a good analogy: think of it as a busy highway with a whole gang of cars (the packets) on it driving back and forth to and from different destinations along the road. When you ask to download something, a car shoots out your driveway containing the data which says “I want to download this clip of …”, and eventually drives to where the clip is. A whole parade of cars containing the clip then leave, destination your driveway, trunks stuffed full of the clip. The protocol can be thought of as the rules of the road so the cars don’t run into each other, and the rules for telling the drivers where to go. If the highway is particularly busy, the parade of cars containing your clip gets separated as other cars merge onto the highway between them, and some of them may be a bit delayed en route to your living room.

Again, I leave technical details out of this deliberately. For one thing, I don’t know enough to answer detailed questions about transmission protocols. I’m a software guy who doesn’t really think about anything below message protocols, and usually not that far down.

Thank you, guys. Now I understand why the TV windows received over the Net are small and people move like cartoons in a 16mm movie.

Thanks for the explanations, they were a lot more clear than any others I’ve read.

Something else to keep in mind.
TCP/IP was never designed to carry streamed audio & video files.

It is a great way of moving text and still graphics reliably, but the concept is not at all efficient for streamed media. In fact, the properties that make the Internet so great and reliable for static data are precisely what make it bad for streamed content: anything sent is broken up into packets, which travel through the networks, often via completely different routes, and are reassembled at the destination. The packets often don’t arrive in the right order - no problem at all with static data, but a major problem with streamed content. So waiting for retransmissions, out-of-order packets, etc., can cause your slowdowns.
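The reassembly trick is simple in principle: each packet carries a sequence number, so the receiver can sort late or out-of-order arrivals back into the original data. A toy sketch with hypothetical packets:

```python
# Toy packet reassembly: packets arrive out of order, but each carries a
# sequence number, so sorting by it recovers the original message.
# (Invented data, just to show the idea.)
arrived = [(2, b"lo, "), (0, b"hel"), (3, b"world"), (1, b"")]  # (seq, chunk)

message = b"".join(chunk for seq, chunk in sorted(arrived))
print(message)  # b'hello, world'
```

For static data this works fine - you just wait until everything shows up. For streamed media, playback stalls while the receiver waits for a missing or late packet, which is exactly the “buffering” the original question describes.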

Also, radio & TV Rx/Tx hardware is designed to handle one thing: a signal designed specifically for that hardware. TCP/IP is designed to be much more flexible and to handle a very wide array of data types. To accomplish this, there is a lot of overhead data (source and destination addressing, error checking, transmission order, etc.) which is unnecessary with a broadcast medium such as TV, as well as data compression (additional processing time at both source and destination), both of which can increase the total latency.