why interlaced scanning in TVs?

and not progressive scanning?

Mainly because interlaced scanning is cheaper. The guns don’t need to be as precise, and with TV the resolution is so low that it makes no difference whether the lines are refreshed in one pass or two.


How do you like that! And without so much as a “Kiss my foot” or “Have an apple”!

NTSC has a low vertical scan rate. Interlace reduces the visible flicker for typical TV-type content (although it’s still quite visible on certain things, like thin horizontal lines, especially with high contrast).

PAL’s scan rate is even lower.


peas on earth


Boy, that takes me back. I never asked the question, but I remember the instructor offering the answer anyway: the scanning method was changed to interlace to help prevent flicker such as you used to see in old-timey movies. Apparently, when the lines were scanned one after another in consecutive order, there was a subtle but distinct flicker. That’s been the standard ever since. I have not heard of progressive scanning, but I must confess that I have not kept up with advances in video broadcast technology ever since I decided to become strictly an audio person.

The rate at which the signal comes into the TV means you get one full screen of data every thirtieth of a second. If you just started at the top and worked your way down, by the time the guns were painting the bottom of the screen the phosphor at the top would have faded significantly. The picture would therefore flicker, as other posters have stated.

I don’t know WHY you get one full image every thirtieth of a second. It may have something to do with bandwidth, or the technology available when TV was first developed may not have supported anything higher, or maybe somebody pulled a number out of the air. But given that constraint, the reason for interlacing is pretty straightforward: if you do it that way, the fading isn’t as obvious.

You could also use a longer-persistence phosphor instead of interlacing, but then you’d get residue of the previous image mixed with the current image, which is not necessarily a huge stride forward compared to flicker. You get that with interlacing too, but the images differ by only 1/60th of a second instead of 1/30th of a second, so the blurring is considerably less.
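
Just to make that timing concrete, here’s a toy Python sketch of the two scan orders (the 525-line count with blanking ignored, and the “watch two neighbouring lines” idea, are purely for illustration):

    # Toy model of 525-line, 30 frames/s scanning; vertical blanking is ignored.
    LINES = 525
    FRAME_TIME = 1.0 / 30          # one full picture arrives this often
    FIELD_TIME = FRAME_TIME / 2    # one interlaced field (half the lines)

    def progressive_times():
        """Time at which each line is painted, scanning 1..525 straight down."""
        return {n: (n - 1) * FRAME_TIME / LINES for n in range(1, LINES + 1)}

    def interlaced_times():
        """Odd lines in the first field, even lines in the second field."""
        times = {}
        odd = [n for n in range(1, LINES + 1) if n % 2 == 1]
        even = [n for n in range(1, LINES + 1) if n % 2 == 0]
        for i, n in enumerate(odd):
            times[n] = i * FIELD_TIME / len(odd)
        for i, n in enumerate(even):
            times[n] = FIELD_TIME + i * FIELD_TIME / len(even)
        return times

    # Watch two neighbouring lines near the top of the screen.
    for name, times in (("progressive", progressive_times()),
                        ("interlaced ", interlaced_times())):
        t100, t101 = times[100], times[101]
        print(f"{name}: line 100 at {t100 * 1000:6.2f} ms, "
              f"line 101 at {t101 * 1000:6.2f} ms, "
              f"gap {abs(t101 - t100) * 1000:5.2f} ms")

    # Progressive: the neighbours light up back-to-back, then that region of
    # phosphor gets nothing for the rest of the ~33 ms frame.  Interlaced: the
    # same small region is repainted about every 16.7 ms (one line per field),
    # so the large-area flicker runs at 60 Hz instead of 30 Hz, even though any
    # individual line is still only refreshed every 1/30th of a second.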

Why a new frame every 1/30 of a second (and a new field every 1/60)? Frequency!

AC power systems in the US operate at 60 cycles per second. By tying the television field rate to that same frequency (the frame rate being half of it), the opportunity for interference is greatly reduced. In Europe the power frequency is 50 cycles and the television frame rate is 25 frames per second.
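
The arithmetic, for what it’s worth (these are the original black-and-white rates; color NTSC later slipped to 59.94 fields per second):

    # Field rate tied to the local mains frequency; two interlaced fields = one frame.
    for region, mains_hz in (("US (NTSC)", 60), ("Europe (PAL/SECAM)", 50)):
        field_rate = mains_hz        # one field per power-line cycle
        frame_rate = field_rate / 2  # two fields make a complete picture
        print(f"{region}: {mains_hz} Hz mains -> "
              f"{field_rate} fields/s -> {frame_rate:.0f} frames/s")

    # US (NTSC):          60 Hz mains -> 60 fields/s -> 30 frames/s
    # Europe (PAL/SECAM): 50 Hz mains -> 50 fields/s -> 25 frames/s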

And as far as interlaced scanning, it’s the phosphor thing. At 1/30th of a second, the glowing phosphors at the top of the screen would fade by the time the scan gun reached the bottom. Scan only half the lines in half the time, though, and suddenly you have a real image.

Of course, today that’s no longer necessarily true, but we have to keep our 525-line, 30 fps legacy system for compatibility’s sake. At least we did until this DTV business began.
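
And that 525-line, 30 fps choice pins down the horizontal line rate too. A quick back-of-the-envelope (the 1.001 divisor is the later color-NTSC tweak that gave 29.97 fps):

    # Horizontal (line) rate implied by 525 lines per frame.
    lines_per_frame = 525
    mono_frame_rate = 30.0             # original black-and-white NTSC
    color_frame_rate = 30.0 / 1.001    # ~29.97 fps after the color tweak

    print(f"B&W NTSC:   {lines_per_frame * mono_frame_rate:8.2f} lines/s")   # 15750.00
    print(f"Color NTSC: {lines_per_frame * color_frame_rate:8.2f} lines/s")  # ~15734.27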

why 30 fps?
I think that’s because the eye has a memory of about 1/24th of a second (I remember this from my 5th grade!!). Fewer than 24 images per second would cause flicker, and more than that, say 40, would have caused confusion (the projector funda).
So the final(?) verdict: when TV screens were first being manufactured, the phosphor used needed to be refreshed sooner than could be achieved with progressive scanning (= normal scanning).


  • Message NOT scanned for typos…

Well, you’ve caused confusion here. Why was 40 bad?


It is too clear, and so it is hard to see.

well, the eye has one image in memory. Showing a different image, before it loses the image, will confuse it, right? Or will the new image simply be discarded?
I was just making a guess…

  • Message NOT scanned for typos…

My monitor is refreshing 75 times per second and I’m confused. In the real world, images are changing constantly and the eye and the brain sort them out. And, yes, I’m confused in the real world too. But anytime you refresh images at near the retentivity of the retina or the brain’s ability to process them, you are likely to notice flicker.
IIRC, a 16mm projector showed the same frame twice before advancing, because the cameras couldn’t produce frames fast enough; to avoid flicker they just showed the same one twice. It’s an easy technique to use less information and still fool the brain into thinking it is seeing normal motion.
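
For what it’s worth, the arithmetic behind that trick (two- and three-bladed shutters are both standard projector designs; I won’t claim which a particular 16 mm machine used):

    # Film advances at 24 frames/s, but the shutter flashes each frame more than
    # once to push the flicker rate up above what the eye easily notices.
    film_fps = 24
    for blades in (1, 2, 3):
        flashes = film_fps * blades
        print(f"{blades}-bladed shutter: 24 fps x {blades} = {flashes} flashes/s")

    # 24 flashes/s flickers badly; 48 or 72 flashes/s looks steady, even though
    # only 24 distinct pictures per second are ever shown.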

Oh well, looks like I was hasty in saying what happens with a refresh rate of greater than 24/sec.
Sorry.

  • Message NOT scanned for typos…

dwtno is correct in that the frame rate is related to the powerline frequency. The main reason for this is so the frame rate of studio cameras can be matched to the frame rate of your TV. Since (virtually) everyone’s AC power is phase-locked to everyone else’s (because they’re all on the same power grid), everyone has a common 60 Hz (in the US) timebase. If studio cameras and home TVs both use the common 60 Hz to time the frame rate, then everybody stays in sync. The problem otherwise is that even a very, very small mismatch is visible: the picture would slowly crawl up or down the screen. The powerline frequency actually wanders around a little, but as long as everyone is locked to it, it doesn’t matter. I’m not sure it is as much of an issue today, but in the early days of TV it was a very handy way for everyone to stay in sync.

Arjuna34

Arjuna34: I don’t think that’s quite right, or I misunderstood you. IIRC, the display on the screen is stabilized by the horizontal and vertical sync pulses that are part of the broadcast signal, generated by the camera. The use of 60 Hz is to stabilize distortions and interference. Poorly filtered power supplies have a 60 Hz ripple on them, and noise is more noticeable at the peaks. So, basing the display on 60 Hz keeps the effects at the same place on the screen and a lot less noticeable. Who cares if the top 10 scan lines are a little closer together? You are right that it was a much bigger problem when the internal circuitry was analog instead of digital.
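
Whichever of these is closer to the full story, the arithmetic both posts are circling is the same beat-frequency effect; a rough sketch with made-up numbers:

    # If a 60 Hz source (power-supply ripple, or an unlocked timebase) differs
    # from the picture's vertical rate by a small amount, the artifact drifts
    # through the frame at the difference (beat) frequency.
    vertical_rate_hz = 60.00   # fields per second painted on the screen
    interference_hz = 59.95    # hypothetical slightly-off 60 Hz source

    beat_hz = abs(vertical_rate_hz - interference_hz)
    print(f"Beat: {beat_hz:.2f} Hz -> the artifact drifts through the picture "
          f"once every {1 / beat_hz:.0f} seconds")

    # Locked to the same 60 Hz, the beat is zero: hum bars sit still (and are
    # far less noticeable) and nothing crawls up or down the screen.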