Can older movies be "high definition?"

This seems to contradict most other posts here. Actually, your description seems to contradict the fact that you answered “yes” to the OP. Are you saying that watching Casablanca on Blu-Ray is better quality than seeing it on film? On a pristine print?

Doesn’t hi-def have to be filmed with a hi-def camera? Or am I thinking that’s only for digital?

Yes, and for all intents and purposes, film movie cameras are hi-def. The resolution of the film is greater than current digital HD.

HD video is hi-def for video, but in the general scheme of things it’s remarkably low-def. 1080 means 1920x1080, or 2.1 megapixels. An iPad with Retina display is 2048x1536 (3.1 MP). Most DSLRs have upwards of 10 MP; my Canon 20D is 3504x2336 (8.2 MP).
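If you want to do the back-of-the-envelope yourself, the arithmetic is trivial; here’s a quick sketch in Python using the same figures quoted above:

```python
# Rough megapixel comparison of the resolutions mentioned above.
formats = {
    "1080p HD video":      (1920, 1080),
    "iPad Retina display": (2048, 1536),
    "Canon 20D DSLR":      (3504, 2336),
}

for name, (w, h) in formats.items():
    print(f"{name:20s} {w}x{h} = {w * h / 1e6:.1f} MP")

# 1080p HD video       1920x1080 = 2.1 MP
# iPad Retina display  2048x1536 = 3.1 MP
# Canon 20D DSLR       3504x2336 = 8.2 MP
```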

Although film itself has a resolution greater than HD, any print of older films that you see projected will generally be a third generation (or later) transfer, a copy of a copy of a copy. Each copy slightly degrades the picture, but this is necessary for film distribution.

A Blu-ray release of a marquee title like Casablanca will take its transfer from original sources, using 4k scans, matching contrast (or color) shot by shot, and removing imperfections frame by frame. There will be no generational losses and any print problems will be fixed. They invest a lot of money in cleaning up the print as they will see the return in Blu-ray sales.

Many years ago I saw Casablanca screened in a small theater for a college film class, and I have also seen the current Blu-ray version. I have no trouble believing that the Blu-ray is superior to the distribution prints.

I don’t know how relevant this still is, but I remember 20 or so years ago, when digital cameras started to really take off, reading that for digital to equal 35mm film quality it needed to be at least 15 megapixels (at least for still cameras). We have since achieved this, but other factors like compression make a direct comparison more complex. Suffice it to say that anything shot on 35mm film (which encompasses almost all filmed TV) can equal HD video quality.

This I do know: Anything made from the 70s on that was shot on SD (standard definition) videotape, like *I, Claudius* or sitcoms like All in the Family, WKRP, The Jeffersons, Good Times, etc., cannot ever be transferred to HD because it was all shot on SD video. The source material is 30fps NTSC videotape at 240 lines resolution interlaced (fake 480 lines). That’s it. There is absolutely no higher resolution source material available.

TV shows that were filmed were shot on 35mm film stock. Shows like Mary Tyler Moore, Bob Newhart, Cheers, The Odd Couple, M*A*S*H, etc., and most all hour-long dramas can be transferred to HD & Blu-ray because a 35mm film negative is inherently at least 720p resolution (HD). Note that I said the film negative is. Current DVD releases of these shows were rarely scanned from the camera negatives because it wasn’t necessary; even a decent print will exceed DVD resolution. For one of these shows to be released in true Blu-ray HD it must be rescanned from a low-generation, high-quality film original (preferably the negative). Films routinely get this treatment, but old TV shows are another matter. Rescanning the negative and rendering a true HD Blu-ray copy is expensive, so it has to be a TV show that is expected to sell extremely well in order to make a profit.

In terms of old shows videotaped in SD, like I, Claudius (which was a great show, BTW), there is one factor that can be ‘tweaked’, but with very mixed results. The show can be converted from 30fps NTSC analog video to 24fps digital video. The change to 24fps gives it what can best be called a ‘film look’ effect. In fact, there was a company formed in the 90s that did just this to standard-def video, converting it to 24fps, and it was/is actually called Filmlook™. The problem is that more often than not it made the result look less like actual film and more like oddly strobing videotape. Anything shot on SD video that Netflix streams suffers from this effect (MST3K, for example)…
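To make the cadence concrete, here’s a naive sketch in Python of the bare minimum a 30fps-to-24fps conversion has to do. It’s purely illustrative; whatever Filmlook and similar processes actually do is presumably far more sophisticated than this.

```python
def naive_30_to_24(frames):
    """Drop every 5th frame to turn 30fps material into 24fps.

    This is the crudest possible cadence change (a 5:4 drop pattern).
    Real 'film look' processing presumably does much more, but dropping
    frames outright is part of why naive conversions can end up looking
    like oddly strobing videotape rather than film.
    """
    return [f for i, f in enumerate(frames) if i % 5 != 4]

one_second_at_30 = list(range(30))            # frames 0..29
print(len(naive_30_to_24(one_second_at_30)))  # 24
```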

Not true, just to keep the facts straight. NTSC has 480 visible scan lines. Technically it’s 525 scan lines, of which 40 or so are used for the vertical blanking interval and are not visible, broken up into an alternating odd/even interlaced display to prevent CRT flicker. So NTSC is technically 480i60: 480-line resolution at an effective rate of 30 (actually 29.97) full frames per second. VHS home recorders reduced the scan lines to about half of that because that was the limit of analog tape technology of the day, but studio videotape recorders and DVD transfers certainly did not.
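Putting those numbers in one place (standard NTSC figures, nothing exotic):

```python
# The standard NTSC numbers behind the "480i60" shorthand.
total_scan_lines = 525             # per frame, including blanking
blanking_lines   = 45              # "40 or so" aren't visible; ~45 leaves the usual 480
visible_lines    = total_scan_lines - blanking_lines
field_rate       = 60 / 1.001      # 59.94 interlaced fields per second
frame_rate       = field_rate / 2  # two fields (odd + even lines) = one full frame

print(visible_lines)               # 480
print(round(frame_rate, 2))        # 29.97
```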

Do yourself a favor and check out Lawrence of Arabia on Blu-Ray.

It is worth pointing out that the 35mm still camera format is not the same as the 35mm movie format. Although the actual film stock is the same, the frame orientation and size are different. A still camera frame is 36 x 24mm. A movie (Academy format) frame is 21.95 x 16mm. For useful intents, the area of film available is less than half that of a still image.

What makes things even messier is that in order to get the various widescreen formats, anamorphic lenses are used to squeeze a wider picture onto the frame (it is stretched back out on projection). So the horizontal resolution is less than the vertical, in fact about half.

Being a film enthusiast from days gone by, I used to shoot Fuji Velvia in 645 format. I could reasonably scan it at about 24M pixels and get down to the grain. You could scan at higher resolution, but you would be kidding yourself that there was much additional information to be had. That equates to a useful 3M pixels on the same film stock in the 35mm movie format, which is better than 1080p (aka 2k) but not by a huge margin. A pristine negative will have better intrinsic resolution, so scanning at 4k (8M pixels) is probably about the limit of what is reasonable.

The 70mm film format is a big step up - 52.5 x 23mm in camera. One could reasonably scan a pristine negative of this at 8k (about 32M pixels). IMAX is bigger still, at 70 x 48.5mm, and gets silly.
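For anyone who wants to sanity-check those figures, here’s a rough area-scaling of the ~24M pixel grain-limited estimate for 645 Velvia. It assumes useful detail per square millimetre is constant for a given stock, which is strictly back-of-the-envelope; the “pristine negative” factor mentioned above is what pushes the sensible scan limits higher.

```python
# Rough area-scaling of the ~24M pixel grain-limited figure for 645 Velvia.
# Assumes useful detail per square mm of a given film stock is constant.
frames_mm = {
    "645 (nominal 6x4.5cm)": (56.0, 41.5),
    "35mm still":            (36.0, 24.0),
    "35mm movie (Academy)":  (21.95, 16.0),
    "70mm (65mm camera)":    (52.5, 23.0),
    "IMAX (15-perf 65mm)":   (70.0, 48.5),
}

ref_w, ref_h = frames_mm["645 (nominal 6x4.5cm)"]
ref_mp = 24.0   # the useful grain-limited scan quoted above for 645 Velvia

for name, (w, h) in frames_mm.items():
    mp = ref_mp * (w * h) / (ref_w * ref_h)
    print(f"{name:23s} ~{mp:5.1f} MP")

# 35mm movie comes out around 3-4 MP and 70mm around 12-13 MP on this basis;
# the pristine-negative factor is what lifts the reasonable scan limits to
# roughly 4k and 8k respectively.
```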

Scanning at 8k is slow and expensive. It takes the best part of a week to scan a movie. But for high-grade Blu-Ray releases it is done, and only in the final authoring is the resolution reduced. It makes for a quite stunning result. Doing it this way avoids a lot of annoying processing artefacts creeping in, and results in a velvet-smooth image.

The part about not touching 35mm simply isn’t true. Digital sensors of the same size easily outperform any 35mm film in terms of dynamic range, sharpness, and signal-to-noise ratio (an ISO 12800 image on a modern FF DSLR will look significantly less grainy than ISO 800 film, and that’s ignoring advanced noise-reduction techniques). It’s odd that you should pick Velvia in your example, since Velvia was not known for sharpness and it has a horrible dynamic range. It’s primarily known for its color saturation, an effect that any digital processor can control completely. There are even software packages that almost perfectly simulate the rendering of dozens of types of film if you want a certain look.

Even digital APS-C (roughly 24x16mm) exceeds the quality of 35mm film.

As for comparing it to 4x5 slide film, I really don’t know, but I suspect some of the digital medium-format cameras can do it. I ran into a guy with a 60MP Phase One technical camera recently, and the detail was pretty amazing.

True, but in terms of analog NTSC: scanning lines are not the same as lines of resolution. In fact its clarity was usually measured by its *vertical* resolution. In old test patterns you’ll see a graph of vertical lines converging upwards. The point where they’re no longer distinct measures that display’s vertical resolution. Because it’s analog, even though there are exactly 525 horizontal lines, there is no set number of pixels per line to definitively determine vertical resolution. It varied by screen size, broadcast source, VHS vs Beta, etc. In fact, with old analog video they are more like ‘blobs’ than actual pixels…

You’re not making sense. Scan lines are the vertical resolution, and since they’re discrete, they have a close parallel to digital pixel resolution. Again, there are 480 of them in an NTSC frame, not 240 as you stated, and that’s what I was correcting. There’s a damn good reason that the MPEG-2 DVD standard uses 480 pixels of vertical resolution for NTSC, and 576 for PAL.

The horizontal resolution in an analog system is a bit more murky, but if one assumes it’s roughly the same as the vertical, that yields a pixel resolution for an NTSC image of 640x480. The MPEG-2 DVD standard was extended to 720 (horizontal) x 480 to accommodate the higher horizontal resolution demands of anamorphic widescreen. Likewise PAL DVD is 720x576.
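To make the 640x480 estimate and the anamorphic point concrete, here’s a quick sketch (Python, illustrative only) of how the 720-pixel stored width maps onto 4:3 and 16:9 display shapes through non-square pixels:

```python
# DVD stores 720x480 (NTSC) or 720x576 (PAL) whether the picture is 4:3 or
# anamorphic 16:9; the pixels are simply non-square. Below is the equivalent
# width in square pixels for each case.
def square_pixel_width(stored_height, display_aspect):
    return round(stored_height * display_aspect)

for std, (w, h) in {"NTSC DVD": (720, 480), "PAL DVD": (720, 576)}.items():
    for label, dar in {"4:3": 4 / 3, "16:9 anamorphic": 16 / 9}.items():
        print(f"{std} {w}x{h} at {label}: ~{square_pixel_width(h, dar)} square pixels wide")

# NTSC: ~640 (4:3) or ~853 (16:9); PAL: ~768 (4:3) or ~1024 (16:9).
```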

Ok, you’re right, it is the horizontal resolution. But it is completely separate from the fixed number of horizontal scan lines. Like I said, it refers to the resolution (or dots per inch) of each horizontal line, which varied. Look at this test pattern. The horizontal resolution is measured by the ability to distinctly render converging vertical and semi-vertical lines. But calling analog NTSC 640x480 is only an estimate. Although DVDs are digital, unless you have an up-converting player connected to an HD TV with an HDMI cable, you’re not going to get a definitive horizontal resolution. Coax, S-Video and RGB component cables are all still analog.

The horizontal resolution of a scanning analog picture is a bit messy to define. Ultimately it is governed by the bandwidth of the signal. The more bandwidth, the faster the signal can vary, and thus the tighter the transitions in brightness it can render.

This is, for instance, where composite video differs from S-Video. Composite video crams both luminance and chroma into one signal and compromises both, but in particular the chroma signal has less bandwidth than the luminance. (This is an important point - it turns out that visually you can get away with this: so long as the luminance defines the edges well, the colour can be a bit blurry, rather like a pen-and-wash painting.) S-Video separates the two and gets a bit more bandwidth, especially in the chroma. Component video separates things further, into three signals, and has much higher intrinsic bandwidth, being able to handle HD resolutions.

The master video recordings will have much better bandwidth than domestic VCRs could manage, and much higher than broadcast signals, so they can reasonably fill the full DVD width (720 pixels). However, there is no simple direct translation from one to the other. Analog signals are often described by their ability to separate two lines (hence the “lines per xxx” metric), which is a function of bandwidth. Naively you would expect to need 2 pixels per line, but the equivalence works out to about 1.6 pixels per line for a picture of about the same apparent resolution. It gets further complicated because the two systems are prone to different sorts of artefacts, so it is hard to compare where one is having problems and the other isn’t.
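The usual back-of-the-envelope relation between luma bandwidth and horizontal resolution looks like this; the bandwidth figures below are the standard ballpark values for VHS and broadcast NTSC, so treat the results as rough:

```python
# Back-of-the-envelope: analog luma bandwidth -> horizontal resolution.
# "TV lines" are conventionally counted per picture height, hence the
# division by the 4:3 aspect ratio. Figures are ballpark, not gospel.
ACTIVE_LINE_US = 52.6   # approx. visible portion of one NTSC scan line, in microseconds
ASPECT = 4 / 3

def tv_lines(bandwidth_hz):
    # One signal cycle resolves one light/dark line pair across the width.
    lines_across_width = 2 * bandwidth_hz * ACTIVE_LINE_US * 1e-6
    return lines_across_width / ASPECT

for name, bw in {"VHS luma (~3 MHz)": 3.0e6,
                 "Broadcast NTSC (4.2 MHz)": 4.2e6}.items():
    print(f"{name}: ~{tv_lines(bw):.0f} TV lines")

# Roughly 240 and 330 lines - the familiar ballpark figures for VHS and
# broadcast NTSC respectively.
```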