Is there any reason to use interlaced video nowadays?

Interlaced video was the norm back in the analog broadcasting days (roughly 60 half-frames, i.e. fields, per second, giving about 30 full frames per second in NTSC). But is there any point in creating interlaced video now, for broadcast or Internet purposes?

Our local public access cable channel can only broadcast at 720x480 pixels (480i or 480p), 29.97 fps, 4:3 aspect ratio, and the video server that feeds the channel will only accept that format. It will, however, accept bitrates of up to 15 Mb/s (8 is a reasonable maximum for most purposes) and either interlaced or progressive video.
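For reference, here is roughly the kind of encode I run for that server, wrapped in a small Python helper. The MPEG-2 codec, the bitrate, and the interlacing flags are my own assumptions rather than anything the server documents, so treat it as a sketch:

```python
# Sketch: encode a source clip to the channel's constraints
# (720x480, 29.97 fps, 4:3, well under the 15 Mb/s ceiling).
# The codec choice and interlacing flags are assumptions; check
# what your own video server actually requires.
import subprocess

def encode_for_channel(src, dst, interlaced=False):
    cmd = [
        "ffmpeg", "-i", src,
        "-s", "720x480",         # SD frame size
        "-r", "30000/1001",      # 29.97 fps
        "-aspect", "4:3",
        "-c:v", "mpeg2video",
        "-b:v", "8M",            # 8 Mb/s target
    ]
    if interlaced:
        # Interlaced motion estimation / DCT, top field first
        # (the field order here is an assumption).
        cmd += ["-flags", "+ilme+ildct", "-top", "1"]
    cmd += [dst]
    subprocess.run(cmd, check=True)

encode_for_channel("master.mov", "broadcast.mpg", interlaced=False)
```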

Interlaced video always bothered me. With the slightest bit of motion, the “comb” effect is quite obvious. So I have been using progressive settings (both camera and post-processing) exclusively for a while, whether the resulting video is going to be broadcast or posted on YouTube. Is there any reason why I should use interlaced instead?

The advantage of interlaced video is the same as it has always been: most people can’t tell the difference between interlaced and non-interlaced, and the interlaced version has half the data/bandwidth.
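To put rough numbers on that, here is a quick back-of-the-envelope comparison in Python. It assumes uncompressed 4:2:0 video and compares 480i29.97 against progressive at the full field rate (480p59.94), which is where the halving shows up; against 480p29.97 the raw pixel rate would be the same:

```python
# Rough raw-data comparison: 480i (59.94 half-height fields per second)
# versus 480p at the same temporal rate (59.94 full frames per second).
# Assumes 4:2:0 chroma subsampling, i.e. 1.5 bytes per pixel.
BYTES_PER_PIXEL = 1.5

def raw_rate(width, height, rate):
    """Uncompressed video data rate in megabits per second."""
    return width * height * BYTES_PER_PIXEL * rate * 8 / 1e6

interlaced  = raw_rate(720, 240, 59.94)   # half-height fields
progressive = raw_rate(720, 480, 59.94)   # full frames at the field rate

print(f"480i59.94 : {interlaced:7.1f} Mb/s raw")
print(f"480p59.94 : {progressive:7.1f} Mb/s raw")
print(f"ratio     : {interlaced / progressive:.2f}")   # -> 0.50
```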

If you are on some crappy ISP (like, say, those of us who are forced to endure the pain that is Comcast…), then when your internet connection starts to get choked, either because some point on the network is overloaded or because your ISP decides to throttle you for actually using the bandwidth they advertise, you can end up in situations where the interlaced version plays fine but the non-interlaced version stutters a lot, because your computer has to download twice as much data per second to keep up.

The downside to interlaced video is that some folks (like you, for example) can see the difference and find the interlacing annoying.

If ISPs like Comcast get their way and are allowed to throttle internet traffic that doesn’t pay them a premium (they like to call this creating internet “fast lanes”, when what they are really creating are the slow lanes), then video stuttering and reduced bandwidth may become a much bigger issue than they are now. That gets into the whole “net neutrality” issue.

engineer_computer_geek, are you sure that interlaced video is transmitted as such for Internet purposes? Since YouTube processes everything uploaded quite extensively, I’m not positive they don’t alter this parameter.

Sending twice as many half-frames as full frames in the same amount of time isn’t going to change the overall bandwidth much anyway.

And Internet considerations don’t apply to cable broadcasting, so that’s really a separate question.

You will find that the TV signal was interlaced due to the physical properties of the phosphor AND the properties of the human eye.

With the phosphor being triggered by the electron beam to glow 25 times a second, the top-to-bottom fade was too obvious, so they interlaced to keep the brightness stable without actually making anything faster. The fade was reduced as if it were a 50 fps signal, but from a 25 fps signal (the bandwidth was that of 25 fps, and the motion quality is like 25 fps).

With the dots on today’s solid-state (LCD/plasma) screens permanently energized (no fading…), interlacing is totally redundant, and you cannot tell the difference between interlaced and non-interlaced video of the same resolution and frame rate.

The comb effect is usually the result of the interlaced fields being shown (or rendered) out of proper sequence due to different/conflicting video standards - so rather than 1a,1b,2a,2b,3a,3b,4a etc. it ends up being 1a,2a,1b,2b,3a,4a,3b.

(where the numbers are the 25fps frames and the letters are the two interlaced fields they are all split into)
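You can see this for yourself with a few lines of NumPy: weave both fields from the same toy frame and it reconstructs cleanly, weave fields from two different frames (i.e. out of sequence) and the comb appears on anything that moved. The toy frames below are my own illustration, not anyone’s actual footage:

```python
# Toy demonstration: combing appears when the woven fields come from
# two different moments in time (e.g. fields paired across frames).
import numpy as np

def make_frame(bar_x, width=32, height=16):
    """Black frame with a vertical white bar at column bar_x (toy motion)."""
    frame = np.zeros((height, width), dtype=np.uint8)
    frame[:, bar_x:bar_x + 4] = 255
    return frame

def weave(top_source, bottom_source):
    """Full frame from the even rows of one source and odd rows of another."""
    frame = np.empty_like(top_source)
    frame[0::2] = top_source[0::2]      # top field
    frame[1::2] = bottom_source[1::2]   # bottom field
    return frame

frame1 = make_frame(bar_x=8)
frame2 = make_frame(bar_x=16)           # the bar has moved

clean  = weave(frame1, frame1)          # both fields from the same frame: no comb
combed = weave(frame1, frame2)          # fields from different frames: comb on the bar

print("same as the source frame:", np.array_equal(clean, frame1))
print("even/odd rows disagree (combing):",
      np.any(combed[0::2, 8:12] != combed[1::2, 8:12]))
```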

I don’t think so. I observe the comb effect only when there is rapid motion – which is to be expected – and only with signals processed by others. Ones processed by me, where I know they are NOT interlaced, do not exhibit that phenomenon.

Why would interlaced fields be shown out of sequence? Again, I observe this only with signals processed by others (where I assume they are interlaced), not by ones I process when I know they are not interlaced.

Nevertheless, it seems like there is no reason to deliberately process video in an interlaced fashion, and at least one reason to use progressive (smoother). I’m not seeing a downside to using progressive with today’s display, broadcast or distribution technology.

There are different standards for the ordering of the interlaced fields - if you apply the wrong standard when rendering the stream to another format, the interlaced fields are combined inappropriately and the comb effect (called ‘tearing’) is very pronounced.

For interlaced video where the frame rate is the same as the interlace refresh rate, there’s no way to simply combine the fields without tearing. But for, say, 25 fps footage on a 50 fps interlace, all you need to do is work out which two interlaced halves of the same frame to recombine: get it right and you get clean 25 fps progressive, get it wrong and you get tearing.
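In code terms, that “work out which two halves belong together” step is just a pairing decision over the stream of fields. A minimal NumPy sketch, assuming top field = even rows (real footage may be bottom-field-first):

```python
# Sketch: rebuild clean 25p frames from a 50-fields/s stream that came
# from 25 fps footage. Toy frames only; top field = even rows is assumed.
import numpy as np

def split_fields(frame):
    return frame[0::2], frame[1::2]                     # (top, bottom)

def weave_fields(top, bottom):
    frame = np.empty((top.shape[0] * 2, top.shape[1]), dtype=top.dtype)
    frame[0::2], frame[1::2] = top, bottom
    return frame

# 25 fps toy source: a diagonal stripe that shifts one column per frame.
source = [np.roll(np.eye(8, 16, dtype=np.uint8) * 255, k, axis=1) for k in range(4)]
fields = [f for frame in source for f in split_fields(frame)]   # 50 fields/s

# Correct pairing: field 2i and field 2i+1 belong to the same source frame.
recovered = [weave_fields(fields[2 * i], fields[2 * i + 1]) for i in range(len(source))]
print(all(np.array_equal(r, s) for r, s in zip(recovered, source)))   # True

# Pair field 2i+1 with field 2i+2 instead and every rebuilt frame mixes
# two source frames -> the tearing described above.
```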

For 50 fps video with 50 fps interlacing, you have to double the fields or otherwise reconstruct the full frames.
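The simplest version of “double the fields” is bob deinterlacing: stretch each field back to full height so 50 fields/s become 50 full (if slightly soft) frames per second. A minimal sketch using plain line repetition; real deinterlacers such as ffmpeg’s yadif interpolate and motion-adapt instead:

```python
# Minimal "bob" deinterlace: each field becomes its own full-height frame
# by repeating its lines, so 50 fields/s -> 50 frames/s.
import numpy as np

def bob(field):
    """Line-double a half-height field into a full-height frame."""
    return np.repeat(field, 2, axis=0)

field = np.arange(4 * 16, dtype=np.uint8).reshape(4, 16)   # a 4-line toy field
frame = bob(field)                                         # 8-line frame
print(frame.shape)   # (8, 16)
```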

Discussed in some detail here: