Why are digital TV signals more efficient than analog ones?

One of the major advantages of switching from analog TV broadcasting to digital seems to be that digital signals require less bandwidth, freeing up space for additional broadcast channels.

Can anyone explain why digital signals require less bandwidth than analog ones? More importantly, can this explanation be dumbed down considerably so that someone (i.e., me) with only a tenuous grasp of how broadcasting works can understand it – or at least get the general idea?

Thanks!

One reason is compression. Digital signals can be compressed using various techniques, which effectively shrink the size of the signal (see Digital Compression - How Digital Television Works | HowStuffWorks). The same can’t be done for analog signals (from what I gather).

So I take it the analog signal cannot be compressed – does this mean that the analog signal contains more information, and thus could potentially lead to a sharper picture (providing the receiver could interpret all that information), or does it not work that way?

Not really. The digital signal has error correction and some redundancy, which eliminates static and ghosting, though you can get random blockiness instead. The analog signal is just inefficient, and the digital channel is hopefully wide enough to allow for enough redundancy to really improve the picture.

The analog signal would have all of the information originally - but feeding this into a radio signal would take up massive amounts of bandwidth. Remember that the original TV signals were themselves compressed - they would only transmit a few hundred lines of resolution even in analog form.

The problem with analog signals is that they are WIDE. They are shaped like waves, similar to the waves at the ocean. They move back and forth all the way from here to all the way over there. But despite their big size, they don’t really carry much information. The only real information in a wave is its height (from top to bottom) and its width (from one wave to the next).

In a digital system, the broadcasting equipment takes each wave, calculates the height and width, and instead of broadcasting the wave itself, it just broadcasts the numbers as a stream of ones and zeros. This takes almost no width at all, which allows us to crowd many more stations into the same section of the radio spectrum. Then the receiving equipment translates those numbers, figures out what size waves they represent, and recreates the original sound and/or picture.

The above is WAY over-simplified, but I hope it gives some sense of why digital is more efficient than analog.
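If it helps to see the idea in code, here’s a toy Python sketch of that “measure the wave, send the numbers” step. The tone frequency, sample rate, and 8-bit depth are all made-up values for illustration; real broadcast gear is far more sophisticated:

```python
# Toy sketch of digitizing a wave: sample its height at regular intervals
# and send the heights as numbers instead of the wave itself.
import math

SAMPLE_RATE = 8000   # measurements per second (illustrative value)
TONE_HZ = 1000       # frequency of the example wave (illustrative value)

def digitize(duration_s):
    """Sample a sine wave and quantize each height to an 8-bit integer."""
    samples = []
    for n in range(int(SAMPLE_RATE * duration_s)):
        height = math.sin(2 * math.pi * TONE_HZ * n / SAMPLE_RATE)  # -1..1
        samples.append(int(round((height + 1) / 2 * 255)))          # 0..255
    return samples

def reconstruct(samples):
    """The receiver's job: turn the numbers back into wave heights."""
    return [s / 255 * 2 - 1 for s in samples]

stream = digitize(0.001)          # one millisecond of tone
print(stream)                     # the numbers we'd actually transmit
print(reconstruct(stream)[:4])    # the recreated wave heights
```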

Analog is a direct, point-for-point representation of the original. Digital is an abstraction of the original - a description, if you will. Because of this, digital can take some shortcuts.

Consider a still image that stays on screen for several seconds. Analog is obliged to reproduce this scene over and over, because that is what the original source (camera, still-store analog output, whatever) is doing. Digital, on the other hand, can get away with describing the first frame of this sequence, say “repeat this image 'til I tell you otherwise”, then go take a coffee break. While this is an extreme example, this is how compression works - repetitive elements are encoded in a sort of shorthand.
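A minimal Python sketch of that “repeat this until I tell you otherwise” shorthand is run-length encoding. Real codecs like MPEG-2 are far more elaborate, but the principle of encoding repetition compactly is the same:

```python
# Run-length encoding: collapse consecutive identical frames into
# (frame, count) pairs instead of sending each frame over and over.
def rle_encode(frames):
    encoded = []
    for frame in frames:
        if encoded and encoded[-1][0] == frame:
            encoded[-1] = (frame, encoded[-1][1] + 1)
        else:
            encoded.append((frame, 1))
    return encoded

def rle_decode(encoded):
    """Expand (frame, count) pairs back into the full sequence."""
    return [frame for frame, count in encoded for _ in range(count)]

frames = ["A", "A", "A", "A", "B", "B", "A"]   # a still image, a cut, etc.
packed = rle_encode(frames)                    # [('A', 4), ('B', 2), ('A', 1)]
assert rle_decode(packed) == frames
print(packed)
```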

A follow-on question: cable companies use digital compression on standard definition TV to fit more channels in than they could with analog. How are they going to deal with 90 channels of HDTV, which takes up as much space as analog?

Well, there’s data *compression* and data *reduction*. Compression is OK, but you tend to get the most benefit from data reduction. In data reduction, you are literally throwing away information, but you hope the end result looks almost as good as the original. It’s a matter of tradeoffs: the more information you throw away, the less bandwidth you use, but the worse the picture becomes.
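To make that tradeoff concrete, here’s a toy Python example that throws away bits of precision. The sample values and bit depths are invented purely for illustration:

```python
# Data reduction sketch: requantize 8-bit samples (0..255) down to fewer
# bits. Fewer bits means less bandwidth, but a bigger worst-case error.
def reduce_precision(samples, bits):
    levels = 2 ** bits
    step = 256 / levels
    return [int(s // step * step + step / 2) for s in samples]

original = [12, 57, 130, 200, 255, 90]
for bits in (8, 4, 2):
    coarse = reduce_precision(original, bits)
    worst = max(abs(a - b) for a, b in zip(original, coarse))
    print(f"{bits}-bit: {coarse}  worst error = {worst}")
```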

Analog signals ARE compressed! The compression is just much more rudimentary. Digital video compression does some incredible stuff, like taking a frame and figuring out how pieces of it will move by the time the next frame arrives.
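Here’s a rough Python sketch of that “figure out how pieces move” trick: block-matching motion estimation. The frames are tiny one-dimensional strips for brevity; real encoders search two-dimensional blocks with many refinements:

```python
# Block-matching motion estimation: for a block of the current frame,
# search the previous frame for the shift that best explains it, so the
# encoder can send a motion vector instead of the block's pixels.
def best_motion(prev, curr, block_start, block_len, search=3):
    block = curr[block_start:block_start + block_len]
    best_shift, best_cost = 0, float("inf")
    for shift in range(-search, search + 1):
        src = block_start + shift
        if src < 0 or src + block_len > len(prev):
            continue
        candidate = prev[src:src + block_len]
        cost = sum(abs(a - b) for a, b in zip(block, candidate))  # SAD metric
        if cost < best_cost:
            best_shift, best_cost = shift, cost
    return best_shift, best_cost

prev_frame = [0, 0, 9, 9, 9, 0, 0, 0]
curr_frame = [0, 0, 0, 9, 9, 9, 0, 0]   # the bright patch moved right by one
# -> (-1, 0): the block "came from" one position to the left, perfect match
print(best_motion(prev_frame, curr_frame, block_start=3, block_len=3))
```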

It’s not true, though, that digital signals are easier to send. Basically, by the time a high-speed digital signal gets into the air, it is analog, both because its encoding is far more complex than simple on-off, and because distortion makes it fuzzy. Many people mention error correction and digital’s property of “if there are few enough errors, it’s as if there are no errors at all,” but that’s not much of a strength. It’s more a limitation: digital breaks without error correction, while analog deals with signal degradation gracefully.

It’s really about the gee-whiz compression.

None of the above. Both an NTSC (analog) signal and an ATSC (digital) signal use 6 MHz of bandwidth. The reason that ATSC is more spectrum efficient than NTSC is that the active channels can be packed more tightly in frequency and space without producing unacceptable levels of interference. The geographic distance between two stations on the same frequency can be smaller and the frequency separation between two stations in the same area can be less. This allowed the FCC to put the same number of TV stations in a smaller number of channels. The spectrum savings are due to the reduction in unused channels.

Is this inherent in a digital versus analog channel, or is it just that antenna/receiver technology has improved enough since NTSC was defined that we can pack channels closer in frequency? (Or, some of both)?

Which is pretty much the definition of compression. The digital signal’s “size” is effectively shrunk so that more of it can fit in the same amount of space.

TV receivers have improved over the years, but I think it is primarily the fact that digital signals are much more resistant to interference, especially when they are protected by sophisticated error correcting codes. With an analog signal, you get snow, ghosts, and all sorts of visible and audible effects from interference. Most of the problems that people have with digital TV are due to weak signals and multipath, not interference from other television stations.
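For a sense of how error correcting codes buy that resistance, here’s a deliberately crude Python sketch: a 3x repetition code with majority voting. Broadcast standards use far stronger codes (Reed-Solomon, trellis coding), but the principle of using redundancy to undo a limited number of errors is the same:

```python
# Forward error correction sketch: send every bit three times, and let the
# receiver majority-vote each group of three back into one bit.
def encode(bits):
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received):
    out = []
    for i in range(0, len(received), 3):
        group = received[i:i + 3]
        out.append(1 if sum(group) >= 2 else 0)
    return out

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                     # interference flips one bit in transit
assert decode(sent) == message   # the error is corrected at the receiver
```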

With ATSC, you’re not packing channels closer in frequency, you’re multiplexing more channels into the given bandwidth. NTSC required 6 MHz because it used about 4.5 MHz of bandwidth to send the video & audio analog signals, and having some guard band distance between frequencies was a good thing. With ATSC, they (wisely) decided to keep the same frequency spacing, but found that if you compress the signal enough, you can get more than one video/audio stream into the 6 MHz. So when you’re sending digital data, each packet has a header that says something like “this packet is video for program 1”, “this packet is video for program 2” etc. So if you use one of the new ATSC tuner boxes, and set it to find all the channels it can, you’ll find channels like 4.1, 4.2, 4.3, etc. That means Channel 4 is currently sending 3 different programs on the same frequency.
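Here’s a toy Python sketch of that multiplexing: one “pipe” carrying several tagged programs. Real ATSC uses MPEG transport streams with 188-byte packets and numeric PIDs; the field names below are made up for illustration:

```python
# Multiplexing sketch: packets from several programs share one frequency,
# and the receiver picks out the ones tagged for the subchannel you chose.
def demux(stream, program):
    """Pick out the payloads belonging to one subchannel (e.g. 4.2)."""
    return [pkt["payload"] for pkt in stream if pkt["program"] == program]

# One frequency, three interleaved programs (think 4.1, 4.2, 4.3):
stream = [
    {"program": 1, "payload": "news frame 1"},
    {"program": 2, "payload": "weather frame 1"},
    {"program": 3, "payload": "sports frame 1"},
    {"program": 1, "payload": "news frame 2"},
]
print(demux(stream, 1))   # ['news frame 1', 'news frame 2']
```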

Spectrum analyser display of an ATSC signal:

http://www.8vsb.com/

Cable companies use rate shaping, statistical multiplexing, and switched digital video to manage bandwidth. Rate shaping is processing the input signal in the digital domain to produce a new stream with a lower bitrate. Statistical multiplexing essentially borrows bandwidth from other channels when it’s needed to encode a high-complexity video stream on another channel. Both of these techniques can have an impact on the quality of the picture the cable customer sees. Switched digital video relies on an intelligent network, and drops entire channels from the cable loop entirely when no one is tuned to them, freeing the bandwidth for other uses.
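A rough Python sketch of the statistical multiplexing idea: a fixed total bit budget gets split among channels in proportion to how complex their current video is. The pipe size is roughly right for a 256-QAM cable channel, but the complexity numbers are invented:

```python
# Statistical multiplexing sketch: channels with busy, fast-moving video
# "borrow" bits from channels showing static or simple scenes.
TOTAL_BITRATE = 38_800_000   # approx. one 6 MHz 256-QAM cable channel, bits/s

def allocate(complexities):
    """Give each channel a share of the pipe proportional to its complexity."""
    total = sum(complexities.values())
    return {name: int(TOTAL_BITRATE * c / total)
            for name, c in complexities.items()}

# A sports channel with fast motion borrows bandwidth from static channels:
demand = {"sports": 8.0, "news_desk": 2.0, "weather_loop": 1.0}
for name, bits in allocate(demand).items():
    print(f"{name}: {bits / 1e6:.1f} Mbit/s")
```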

More here.

Digital signals are not more resistant to interference. Basically, you get it or you don’t: if the signal isn’t good enough for a picture, you get nothing at all.

With analog, you will get less of a signal and more snow, but the picture may still be watchable. Digital suffers many setbacks compared to analog. For instance, where I live in Chicago, I get 16 analog channels with rabbit ears and NO digital channels. Digital signals have a difficult time getting through all the buildings in a high-density area; you don’t get enough signal without a large antenna. Since 85% of people nationwide have cable or a dish, this isn’t a huge concern for the FCC.

Analog (called NTSC) signals are old technology from the 40s. Digital, for example, allows you to put channels next to each other; NTSC does not. (Channels 4 and 5, channels 6 and 7, and channels 13 and 14 do appear in some cities, but they are not actually adjacent in frequency. For instance, the entire FM band lies between channels 6 and 7.)

Of course, with today’s technology and tuners, you could put analog channels next to each other too. Digital channels can be adjacent provided they are broadcast from the same antenna farm.

Originally the FCC plan was to require HDTV, not just digital. Each station would then use its bandwidth to broadcast HDTV. Then the FCC allowed stations to CHOOSE whether to broadcast high def or not. So if they don’t broadcast high def, they can broadcast up to 6 channels in the same bandwidth as one high-def channel. Some stations try to broadcast high def as well as other channels, but right now that doesn’t work well.

Another problem is that our digital TV uses MPEG-2 while MPEG-4 already exists, so even our use of digital TV today is outdated, and in an estimated 10 to 20 years we will have to switch again.

Another problem is that digital channel allocation is concurrent with analog, meaning the stations weren’t assigned for best efficiency. For instance, many stations will have to use directional antennas to avoid problems, so digital coverage won’t replicate analog coverage.

No one really knows till after Feb 17th. Some stations are still not broadcasting digital at full power.

My local NBC station does this and it seems to work well. 30-1 is the HDTV NBC network feed, 30-2 is an SD NBC weather loop, and 30-3 is an SD NBC Universal sports channel. They didn’t use the third one during the Olympics though, so maybe that was a case where they needed the extra bandwidth for the main channel.