What resolution is TV filmstock?

Subject says it all really, but to expand: could you take old TV film stock (not a VCR recording, but the film that was used in the camera), run it through a high-resolution scanner, broadcast it in HD (or write it to BD), and have it appear as HD?

Motion Picture Film Scanner. Single-camera television shows are typically shot on 35mm film.

OP’s asking for the resolution of the film. Your link doesn’t make it clear. I take it, though, that it’s higher than 4096*3072. Is that, in turn, comparable with HD resolutions?

-FrL-

ETA: Yes it is.

It depends. How’s that for a factual answer?
HBO still shoots some of their stuff on 35mm stock. That makes the transition to HD a smooth one.

OTOH, shows like Dr. Quinn, Medicine Woman would probably NOT make the cut to HD. Because of all the outdoor shooting, film was the most convenient format, but to save money the studio used 16mm stock.

HD vs. 35mm film resolution (sorry for just posting links, but I’m getting ready to leave for the weekend–dealing with stuff like this, actually–and don’t have time to elaborate).

The resolution of either 35mm or Super 16mm film will surpass even 1080p HD video.

1080 HD (it makes no difference whether it’s 1080i or 1080p) has a vertical resolution of 1080 lines. The overall image is 1920 x 1080 pixels.

720p HD is a 1280x720 pixel image.

Super 16’s resolution is approximately 1400 lines, and 35mm’s resolution is well beyond that, at about 4,000 lines.
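
If it helps to see the arithmetic, here’s a rough back-of-the-envelope comparison. It treats the quoted line counts for Super 16 and 35mm as vertical resolution and assumes a 16:9-ish frame for everything, so the film numbers are loose equivalents rather than real pixel grids:

```python
# Back-of-the-envelope pixel-count comparison. The film entries treat the
# quoted line counts as vertical resolution and assume a 16:9-ish frame,
# so they are loose equivalents, not real pixel grids.

formats = {
    "720p HD":      (1280, 720),
    "1080 HD":      (1920, 1080),
    "Super 16 (~)": (round(1400 * 16 / 9), 1400),
    "35mm (~)":     (round(4000 * 16 / 9), 4000),
}

for name, (w, h) in formats.items():
    print(f"{name:14s} {w:5d} x {h:4d}  ~{w * h / 1e6:5.1f} megapixels")
```

Even on those generous assumptions, 35mm works out to several times the pixel count of 1080 HD, while Super 16 lands a bit above it.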

There is a lot of misleading information spread about by the film companies. As expected, they stack the deck, and the “resolution” figures they toss around have virtually nothing to do with how film is used in the real world. Shooting a chart in a lab, using a camera that has never left that lab, and showing it via a very well-maintained projector in a demo theater is a lot different from the real world. There are so many steps in the 35mm chain that degrade the image that HD shot, edited and projected digitally frequently has more detail.

That said, 35mm used as input to the digital chain, with the developed negative digitally scanned and the rest of the chain kept digital, can look very nice indeed. But film for motion picture shooting is dying, as thoroughly as it is dying for still photography. It’s just taking longer.

So they could take all those reels of Yes, Minister or The West Wing or Doctor Who and rescan them as HD and we’d get a very high quality picture?

Nope, because they’ll generally have been shot on something other than 35mm (which is pretty expensive). Yes, Minister, for example, was shot primarily on one inch video, with occasional 16mm segments. The DVD masters are almost certainly about as high quality as you’ll ever see. Doctor Who will be a similar story (the old ones, at least - I don’t know how they’re shooting the new ones, but I suspect digital video).

This article mentions in passing that The West Wing was indeed shot on 35mm, so yeah, you’ll get more detail when they get around to transferring that to Blu-Ray.

Obviously you mean the exterior shots in Yes, Minister and old Doctor Who (new Who is shot entirely on SD video), which were 16mm. But it’s not just about resolution: that old 16mm footage was also quite grainy and noisy. It’s never going to look great.

Another part of the problem is the aspect ratio of old TV shows - they’re mostly 4:3 - so you have the exact opposite of the problem that used to exist with transferring movies to home entertainment video.
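
For anyone who wants the numbers, here’s a quick sketch of the pillarbox arithmetic, assuming a 1920 x 1080 display showing 4:3 material scaled to the full picture height:

```python
# Pillarbox arithmetic for 4:3 material on a 16:9 HD screen, assuming the
# 4:3 image is scaled to the full 1080-line height.

screen_w, screen_h = 1920, 1080

image_w = round(screen_h * 4 / 3)      # 1440 pixels wide
bar_w = (screen_w - image_w) // 2      # 240-pixel black bar on each side

print(f"4:3 image: {image_w} x {screen_h}, bars: {bar_w} px left and right")
```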

Forgive my ignorance, but I thought “resolution” just means “number of dots.” How can you apply the word to analog images?

Film emulsion consists of grains of silver halide or another photosensitive chemical - these grains have a finite size, and that size limits the maximum resolution of the film. It’s not a grid of rectangular pixels like you see in a digital image, but it’s resolution nonetheless.

ETA: Smaller grain sizes in film emulsions result in greater effective resolution (subject to factors such as lens quality, focus, depth of field, exposure, etc), but smaller grains make the film less light-sensitive. So there’s always a compromise between grain size and film speed, with photochemical films - and this is especially pertinent when you are committed to exposing a fixed number of frames per second.
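
To make the grain point concrete, here’s a purely illustrative sketch that treats the average grain spacing like a sampling pitch. The grain sizes and frame heights are round-number assumptions for the sake of the example, not measured film data:

```python
# Purely illustrative: treat the average grain spacing as a sampling pitch
# and estimate how many distinct horizontal lines fit in the frame height.
# Grain sizes and frame heights here are round-number assumptions, not
# measured film data.

def approx_lines(frame_height_mm: float, grain_spacing_um: float) -> int:
    """Very rough: one resolvable line per grain spacing across the frame height."""
    return int(frame_height_mm * 1000 / grain_spacing_um)

for name, height_mm in [("Super 16 (~7.4 mm)", 7.4), ("35mm Academy (~16 mm)", 16.0)]:
    for grain_um in (5, 10):
        print(f"{name:22s} grain ~{grain_um:2d} um -> ~{approx_lines(height_mm, grain_um)} lines")
```

Numbers in that ballpark are roughly consistent with the line counts quoted earlier in the thread, but the real answer depends heavily on the particular stock and how it’s exposed.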

As a rule of thumb for scanning 35mm still camera slides, I have always used 2400 DPI as the maximum practical resolution, which gives a standard frame roughly 2400 x 3600 pixels. Higher numbers aren’t likely to provide much significant data improvement and have a high cost in data storage requirements.
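
As a sanity check on that rule of thumb, here’s the DPI-to-pixels arithmetic. The 2400 x 3600 figure treats a 35mm still frame as a nominal 1 in x 1.5 in; the exact 24 mm x 36 mm frame comes out slightly smaller at the same 2400 DPI:

```python
# DPI-to-pixels arithmetic for the rule of thumb above. The 2400 x 3600
# figure treats a 35mm still frame as a nominal 1 in x 1.5 in; the exact
# 24 mm x 36 mm frame comes out a little smaller at the same 2400 DPI.

MM_PER_INCH = 25.4

def scan_pixels(width_mm: float, height_mm: float, dpi: int) -> tuple[int, int]:
    """Pixel dimensions of a scan at the given DPI."""
    return (round(width_mm / MM_PER_INCH * dpi),
            round(height_mm / MM_PER_INCH * dpi))

print(scan_pixels(36.0, 24.0, 2400))   # ~(3402, 2268) for an exact 24 x 36 mm frame
print(scan_pixels(38.1, 25.4, 2400))   # (3600, 2400) for a nominal 1.5 x 1 in frame
```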

Of course, if detecting grain is important, you could go higher, but some film stock may not support even those numbers, especially color stock.

When converting from analog to digital, one needs to know how much resolution is enough to capture the desired detail. More than that is wasted, less will result in loss of some image information. Depending on the application, somewhere in the middle is a compromise.

The remastered Star Trek series that’s been released sporadically the last year or so is going to be available in 16x9 HD because of the ability to rescan and reframe the original film stock. It can then also be digitally cleaned and a new negative made from it.

I see no reason why other similarly filmed shows, such as perhaps Mission Impossible, or Get Smart (not sure if those were filmed the same way or not) might not also get the same treatment.

That’s certainly what it’s most commonly used for in these days of ubiquitous computing, but more generally it’s a measure of how well an optical system can resolve details. So for example if I drew two dots on a piece of paper, then moved it slowly away from you, the distance at which you stopped being able to distinguish the two dots would provide a crude measure of your eyes’ resolution (more precisely, the angle subtended by the dots to your eye would be your eye’s resolution).
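
Here’s that two-dots example in rough numbers, with a made-up 2 mm dot separation. Normal visual acuity is usually quoted as roughly one arcminute, so on these figures the dots should stop being distinguishable somewhere short of the 10 m mark:

```python
# The two-dots example in numbers, with a made-up 2 mm dot separation.
# Normal visual acuity is usually quoted as roughly one arcminute, so the
# dots should stop being distinguishable somewhere short of 10 m here.

import math

separation_m = 0.002   # 2 mm between the dots (illustrative)

for distance_m in (0.5, 2.0, 10.0):
    angle_rad = 2 * math.atan(separation_m / (2 * distance_m))
    print(f"{distance_m:5.1f} m -> {math.degrees(angle_rad) * 60:.2f} arcminutes")
```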

This angle is affected by a number of different things. Firstly, the density of receptors in your eye (this corresponds to the grain density on film, or the CCD resolution of a digital camera). This is fairly obvious; if you’ve only got four working receptors, you’re going to have much more trouble distinguishing things than if you’ve got several million. Other things have an effect as well, however.

Even if you had an infinitely dense retina, there would still be a limit on the objects you could resolve because the lens of your eye has a resolution limit, too. This in turn depends on a number of things - your lens might not be perfectly shaped, for example. But even with a perfect lens there’s still a limit, related to the width of the aperture you’re looking through (in the eye’s case, your iris). Light diffraction from the edges of the aperture causes incoming images to spread out somewhat like ripples. If this effect is pronounced enough then objects’ images start to overlap, meaning that no matter how good your lens and receptors, you’ll never be able to distinguish them. The smaller the aperture, the worse the effect, which is one of the reasons why tiny lenses such as you find on camera phones will always produce terrible quality images no matter how much “resolution” is claimed in the spec.
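
To put rough numbers on the aperture effect, here’s the standard Rayleigh criterion (minimum resolvable angle ≈ 1.22 λ / aperture diameter) evaluated for a few illustrative aperture sizes; the diameters are guesses for the sake of the example, not specs of any real device:

```python
# Rayleigh criterion for the diffraction limit described above:
# minimum resolvable angle ~ 1.22 * wavelength / aperture diameter.
# The aperture diameters are illustrative guesses, not specs of real devices.

import math

wavelength_m = 550e-9   # green light

for name, aperture_mm in [("tiny phone-camera lens (~2 mm)", 2.0),
                          ("eye pupil (~4 mm)", 4.0),
                          ("35mm camera lens (~25 mm)", 25.0)]:
    theta_rad = 1.22 * wavelength_m / (aperture_mm / 1000)
    print(f"{name:32s} ~{math.degrees(theta_rad) * 3600:6.1f} arcseconds")
```

The smaller the aperture, the larger the minimum resolvable angle, which is the point being made about tiny phone-camera lenses.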

So yeah, like gaffa says, it’s all a lot more complicated than just quoting the nominal resolution of the capture medium, be it film or CCD or what have you. That said, all of the lens resolution limitations apply just as much to digital capture as they do to film, so it’s not entirely unreasonable to compare idealised film resolutions with nominal CCD resolutions since neither is the full picture (no pun intended, but I’ll take the credit :)).

AKA “tilt and scan”: tilting up and down to recompose a 4:3 picture for wide-screen, as opposed to panning left and right to recompose a wide-screen picture for 4:3. Not sure one is any better than the other.

For sitcoms and other programs with fixed sets, they will use a variation of the virtual set technique. The areas to either side of the 4:3 image will generally have been seen at some earlier or later point in the program, so those images can be used to build digital “set extensions” to tack onto the left and right of the frame.

Wha…?

Why don’t they just have the program showing in a 4:3 rectangle in the middle of the HD screen, with black stripes to either side? (Or maybe, for this infovorous age, pop up trivia or something?) Why the need to extend the picture? Adding in bits of the set on either side seems like a mistake. The program was made with a particular aspect ratio in mind. Why change that?

-FrL-

For the same reason that films are colorized, monophonic sound is remixed into stereo, laugh tracks are added, and extras, outtakes, bloopers and commentaries are added to DVDs. It enhances the experience. Or so some people think - don’t get me started.