When a TV show “freezes up” or pixelates, it seems the cable channel can fix the problem in an instant, in real time, whereas I can only wish I could fix a bad DVD on my DVD player! So, do they run multiple copies of the show being aired so that they can flip from a bad copy to a clean copy virtually on the fly? What keeps a cable channel “clicking along”?
No, they’re not playing from a physical medium like a DVD. They’re streaming video from computer memory* using a compression scheme such as MPEG-4, in which bandwidth is reduced by only transmitting the portions of the picture that have changed. When the transmission is interrupted, your TV keeps displaying the portions that haven’t changed, or the last complete frame, until it can assemble a new image.
*Though the full show may be stored on a spinning hard drive, it’s being spooled into memory a bit ahead of what’s going out over the cable.
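Here’s a toy sketch of that “transmit only what changed” idea, with invented names and frames reduced to lists of blocks (real codecs work on motion-compensated macroblocks, nothing like this): the sender ships only the blocks that differ, and a dropped update simply leaves the old block on screen, which is exactly the frozen/smeared look you see during a glitch.

```python
# Toy model of delta-based transmission; purely illustrative.

def encode_delta(prev_frame, new_frame):
    """Send only the (index, block) pairs that changed since prev_frame."""
    return [(i, b) for i, (a, b) in enumerate(zip(prev_frame, new_frame)) if a != b]

def apply_delta(screen, delta, lost_indices=()):
    """Receiver: patch in changed blocks; a lost update leaves the stale block."""
    for i, block in delta:
        if i not in lost_indices:
            screen[i] = block
    return screen

prev = ["sky", "sky", "grass", "grass"]
new  = ["sky", "bird", "grass", "dog"]
delta = encode_delta(prev, new)          # only blocks 1 and 3 are transmitted
screen = apply_delta(prev[:], delta, lost_indices={3})
print(screen)   # ['sky', 'bird', 'grass', 'grass'] -- block 3 is stale
```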
On the other hand, the broadcaster may well be using a “playout” server with redundant hard drives, memory, power supply and everything else. Compared to what they are selling, the equipment is cheap.
Pixelation on a cable channel is almost always due to downstream transmission issues, not to anything at the provider’s location.
In a compressed video stream there are periodic “key” frames, called I-frames, each of which is a whole picture. Subsequent frames (P-frames), as Mr Downtown points out, just encode the difference between that frame and the previous one.
When there’s a glitch in the signal, parts of the frame stay messed up until the next I-frame comes along. Usually there’s a new I-frame at every cut in a film, and several more during long shots. So the picture often gets fixed when the shot changes or after a certain number of frames.
But that’s the condensed version of things. Newer compression methods use other tricks to take things even further.
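A hedged toy model of that I-frame/P-frame structure (the region names and the 4-frame keyframe interval are invented) shows why damage to a moving region heals quickly while damage to a static one persists until the next full picture:

```python
# Toy GOP: an I-frame (full picture) every 4 frames, P-frames (diffs
# against the previous frame) in between. Illustrative only.

KEYFRAME_INTERVAL = 4

def broadcast(frames, corrupt_frame):
    """frames: the true source pictures, as dicts of region -> content.
    Returns what the viewer's screen shows for each frame."""
    screen, prev, shown = {}, {}, []
    for n, frame in enumerate(frames):
        if n % KEYFRAME_INTERVAL == 0:
            screen = dict(frame)                   # I-frame: full refresh
        else:
            delta = {r: v for r, v in frame.items() if prev.get(r) != v}
            if n == corrupt_frame:                 # a glitch mangles this delta
                delta = {r: "▒▒▒" for r in delta}
            screen.update(delta)                   # P-frame: patch the screen
        prev = frame
        shown.append(dict(screen))
    return shown

# "actor" moves every frame; "banner" pops up at frame 2 and then holds still.
frames = [{"actor": f"pose{n}", "banner": "LIVE" if n >= 2 else "---"}
          for n in range(8)]
for n, pic in enumerate(broadcast(frames, corrupt_frame=2)):
    print(n, pic)
# The moving "actor" heals on the very next P-frame (it changes again, so it
# gets re-sent), but the static "banner" stays garbled until the I-frame at
# frame 4 refreshes the whole picture -- just as described above.
```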
There are several places where the stream can get mangled. For a satellite channel it can be on the uplink to the bird or on the downlink to the cable company or the user’s dish. For OTA, it can be between the network and the local affiliate (almost certainly a satellite link), or between the local station and the cable company, and then finally between the cable company and the user.
So, is the “update only the parts of the picture that are moving” compression strategy the reason that confetti totally messes up the picture? Because when confetti is blowing around, the entire picture is changing, constantly.
This deserves to be repeated. Jinx, you’re not seeing the TV channel play bad DVDs; you’re observing temporary signal disturbances. In the old days it would be someone bumping the rabbit ears on the TV; today it’s some disturbance along the lengthy and many-branched signal path from the provider to you.
As it seems is more and more often the case, there’s a Tom Scott video for that.
Yes. Also, I’ve found that with H.265 at 720p, it slows my phone down below realtime when there are high-contrast horizontal stripes in the video, such as tall forests in snow. This is what led me to notice how damn many TV series lately have scenes that take place in tall forests in snow. (Orphan Black, for example.)
The better the compression method, the more demanding it is on the processor.
H.264 and H.265 are very good compression methods and therefore very demanding.
If the scene is simple (a white cow in the fog, say), the compression is easy and the processor doesn’t sweat it. But if there is some complexity to it, the processor has to work hard and may not keep up.
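A rough way to see this for yourself, using lossless zlib as a stand-in for a video codec (not how H.264/H.265 actually work, but the effect on compressed size is the same in spirit):

```python
import os, zlib

fog_frame   = bytes([200]) * 100_000   # "white cow in the fog": near-uniform pixels
noise_frame = os.urandom(100_000)      # static/confetti: no redundancy at all

print(len(zlib.compress(fog_frame)))   # on the order of a hundred bytes
print(len(zlib.compress(noise_frame))) # ~100,000 bytes; may even grow slightly
```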
Flashes on screen can also show compression artifacts. E.g., explosions, camera flashes, etc.
My favorite compression artifact: watching a scene in a movie where there are pillars in the background (e.g., Cleopatra). The camera slowly moves but the pillars stay put and then suddenly jerk to new positions a few pixels over.
At last! A question I know something about.
I’m going to be very non-technical in this answer. Really.
All compression methods are geared to the type of data they’re intended to compress. No single method can compress ALL material without loss or degradation. In other words, I can design a compression method that losslessly shrinks some files from (X) bytes to (X-1) bytes, but it can’t do that for ALL files: if you map the set of original data to the set of compressed data, there will always be SOME inputs that don’t compress without loss.
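The counting argument behind that claim fits in a few lines: there are more distinct N-bit files than there are distinct files shorter than N bits, so any lossless compressor must leave some input the same size or bigger.

```python
# Pigeonhole check: a lossless compressor must map distinct inputs to
# distinct outputs, and there simply aren't enough shorter files to go round.
for n in range(1, 9):
    n_bit_files   = 2 ** n        # distinct files of exactly n bits
    shorter_files = 2 ** n - 1    # distinct files of 0..n-1 bits combined
    assert n_bit_files > shorter_files   # so some n-bit file can't shrink
    print(f"{n}-bit inputs: {n_bit_files}, shorter outputs available: {shorter_files}")
```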
The “confetti” effect (or fog, clouds, explosions, etc.) is a result of this. Rapidly changing scenes with significant detail do not compress well. You get macroblocking and other compression artifacts. It is certainly POSSIBLE to construct a compression method that will compress these scenes efficiently, but there’s very little benefit in doing so.
TL;DR: Explosions, fog and confetti are difficult to compress. These scenes are excellent ways to evaluate how your compression algorithm works.
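A sketch of why, in a delta-based scheme (hypothetical 16-byte blocks, zlib standing in for the real entropy coder): a talking-head shot changes a few blocks per frame, while confetti changes all of them.

```python
import os, zlib

def delta_cost(prev, new, block=16):
    """Compressed size of a crude diff: only blocks that changed get sent."""
    changed = b"".join(new[i:i + block]
                       for i in range(0, len(new), block)
                       if new[i:i + block] != prev[i:i + block])
    return len(zlib.compress(changed))

still        = os.urandom(4096)                            # previous frame
talking_head = still[:64] + os.urandom(64) + still[128:]   # only the mouth moves
confetti     = os.urandom(4096)                            # every block changes

print(delta_cost(still, talking_head))   # tiny: four blocks' worth of data
print(delta_cost(still, confetti))       # ~4 KB: essentially the whole frame
```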
When did they stop using (professional) Beta tapes?
Most broadcasters dumped tape when they transitioned to HD.
Wouldn’t the glitches/artifacts people have mentioned still be due to an interruption of the video stream? As a user, if I compress a video using x265 and set the “quality” (CRF) to, say, 28, then I expect the entire video to be encoded at that quality, obviously using more bandwidth for complex scenes, and certainly with no pixellation or blocking as if I were trying to stuff a feature-length film onto a single CD as MPEG-2.
The problem is that bandwidth on broadcast/satellite/cable is a finite resource. Companies would rather cram in more channels than offer higher resolution. So they don’t use the highest-quality, highest-bitrate settings, and may even recompress streams on the way out.
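A toy rate-control comparison (all numbers invented) of constant quality, like x265’s CRF, versus the fixed per-channel bit budget a broadcaster works under: at constant quality the bitrate floats with the scene, while a fixed budget forces quality down on complex scenes.

```python
CHANNEL_BUDGET = 50_000   # bits available per frame in the channel slot

def bits_needed(complexity, quality):
    """Pretend cost model: more scene complexity or higher quality costs more bits."""
    return complexity * quality * 100

for name, complexity in [("talking head", 10), ("confetti drop", 120)]:
    crf_bits = bits_needed(complexity, quality=50)               # constant quality
    cbr_quality = min(50, CHANNEL_BUDGET // (complexity * 100))  # fixed bitrate
    print(f"{name}: constant quality spends {crf_bits:,} bits; "
          f"fixed bitrate caps quality at {cbr_quality}/50")
```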