What is the difference between videotape and film?

It’s not that the physical tape itself has deteriorated but that video recording technology was much worse in the past.

I’ve often wondered why they did that. I was thinking of posting it as a GQ.

I can’t verify the technical accuracy of this, but I can definitely say that whatever they did worked. You could always tell in an instant when the “scene” shifted from “on air” to “off air.”

God, I loved that show.

Hail Ants…Help me fight (my) ignorance.
I was told that the shifts from video to film on Monty Python were less noticeable when viewed as they were meant to be viewed, on British TV.
Reason?
European TV runs at 25 fps (something to do with the electricity there being 50 Hz vs. North America’s 60 Hz), and sound film in Europe was also shot at 25 fps.
Is there any truth to this assertion?

I vaguely recall hearing that Hill Street Blues used to “grind” the film between plates of glass to give it an even “grainier” or grittier look.

Anybody know if this is true or not?

The gap in quality is quickly shrinking. A lot of the difference in feel between film and video is that the people who chose video chose it for exactly what other people didn’t like about it: they liked the harsh sharpness of the picture. Other people saw that and assumed that was simply the nature of video (and up until recently it probably was). But when Lucas shot a scene for Episode I in HD and decided it was what he wanted to shoot all of Episode II in, people started to take another look at HD. 24p eliminates much of the harshness and naturally gives a feel much closer to film. As cinematographers learn to master the medium with mist and FX filters and lighting in general to perfect the feel they want, and as the technology continues to advance, I think we are in the last stages of film as the dominant medium for commercial moviemaking. Especially as corporate beancounters start to realize that a $90,000 HD camera is comparable to a $600,000 35mm camera. Not to mention the production cost differences others have mentioned.

Doug Bowe

Our mains frequency is 50 Hz, so our TV frame rate is related to it at 25 Hz.

One day we may get HDTV with its much higher refresh rates (it has to be digital to get the bandwidth, similar to a computer monitor), but I ain’t holding my breath.

KneadToKnow:

To be honest I can’t verify it either. I’m not in the industry or anything, I’m just an enthusiast who watched the show. But I would bet my life on it.

It doesn’t seem that easy to most people, but for me, distinguishing between video and film is almost as easy as distinguishing between B&W and color. I look at the footage and it’s just blatantly obvious (barring any technical fiddling like FilmLook).

Another example of a ‘dual-shot’ show is Bob Newhart’s Newhart, the 80s series set in a bed & breakfast. The first season of the show was shot on video and looks amazingly cheap compared to the filmed version.

Also the UK show Dr. Who (the Tom Baker one anyway) used the video indoors/film outdoors method.

So did Hitchhiker’s Guide to the Galaxy, though they didn’t do much outdoor work.

Isn’t the demarcation between film and video extremely blurred these days? Especially with all the computerised special effects that are inserted into many movies now. These effects require film shots to be digitised, manipulated, edited and streamed back onto film as analogue data.

In a way, it’s like the debate that occurred amongst audiophiles when CDs were first launched onto the popular market. There were many “golden ears” who claimed that they could hear the difference, but what they forgot was that many of their analogue sound sources, such as LPs, were in fact recorded and produced in digital form before being converted to analogue for publication.

What I am suggesting is a similar argument re the OP. How would you know just what bits of a blockbuster like “Pearl Harbor” were shot on film, compared to what has been digitally enhanced along the way?

Well, when people ask “Film or videotape?” they want to know what the original medium was, not the final product.

As for not being able to tell if you’re looking at original film or digital when at the theater, that’s kind of the whole point. They don’t want you to notice. It’s only when you see something that simply could not have been done live that you can confidently say “Hey, that was CGI!” like the blowing up of the Arizona.

One of the best examples of the subtle use of CGI is in Erin Brockovich. The film opens with Erin’s car wreck. They show Julia Roberts get into her car and drive off and hit another car in a single, uncut take. Unless Julia’s learned stunt driving on her days off, it must’ve been some kind of CGI effect, wherein they substitute her car for a digital replica just before the crash. Or they digitally splice together two strips of film, one with Julia driving the car and one with stunt drivers doing the wreck. And there is a credit for Visual Effects at the end of the film.

BTW: A brand-new series that uses both video and film is the ABC series The Beast, a drama about an MSNBC-like news channel. (The conceit is that the behind-the-camera scenes are on live 24/7, either on cable or on the Web.) All behind-the-camera scenes are shot on film, all on-camera stuff is done on video.

Many of the things that have been said already about “texture” and “depth” and “dynamic range” and the general look and feel of videotape vs. film are true because of fundamental differences in the way that the two recording media respond to photons.

Light interacts much differently with film emulsion than with the silicon chips inside video cameras. I will attempt to give the details in everyday language without corrupting the truth… but I’m sure if I screw up someone will correct me.

I work with these silicon chips every day. They’re called Charge Coupled Devices, or CCDs for short. Professional video cameras all use three of these, one each for Red, Green and Blue. Each one is an array of electron buckets that collect electrons generated by the light from the scene as it is absorbed in the silicon CCD. After waiting to collect enough electrons (typically 1/60th of a second for a video field) each bucket is read out serially to form a string of numbers that can be arranged in order again to reconstruct the scene, one image for each color. These monochrome images are then combined to make a full-color image.
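
Here’s a toy Python sketch of that readout-and-recombine idea, just to make it concrete. Everything in it (the array size, the bit depth, the random numbers) is made up for illustration and isn’t meant to match any real CCD.

```python
import numpy as np

# Toy stand-ins for the three CCDs: each is a small grid of "electron
# buckets" holding a charge count accumulated over one field (~1/60 s).
H, W = 4, 6
rng = np.random.default_rng(0)
red_ccd, green_ccd, blue_ccd = (rng.integers(0, 256, (H, W)) for _ in range(3))

def read_out(ccd):
    """Serial readout: the 2-D bucket array becomes a 1-D string of numbers."""
    return ccd.flatten()

def reconstruct(stream, height, width):
    """Arrange the numbers in order again to rebuild the monochrome image."""
    return stream.reshape(height, width)

# One monochrome image per color...
channels = [reconstruct(read_out(c), H, W) for c in (red_ccd, green_ccd, blue_ccd)]

# ...then combine them into a single full-color (H, W, 3) image.
color_image = np.stack(channels, axis=-1)
print(color_image.shape)   # (4, 6, 3)
```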

In film, a photosensitive chemical that has been deposited in microscopic “grains” on a plastic substrate is exposed to the scene. As the light strikes these grains, the chemical properties of each grain are changed to reflect the intensity of the color of light to which it is sensitive at the place in the scene where the grain sits. Color film uses three different chemicals, each sensitive to a different color of light. Once exposed, the film is developed by bathing it in a solution that causes the grains that have been altered to change color, and then “fixed” in another solution that renders the emulsion insensitive.

These are fundamentally different processes at the point where the information from the scene stops being light and starts being something else… a recording. This results in differences in their responses to detail, color and intensity.

Intensity first; it’s the easiest. Silicon CCDs respond linearly to light intensity. This means that in a properly lit scene, each photon that comes in creates the same amount of charge, and so is counted equally. But emulsions don’t work that way. Their response is logarithmic - the first few photons that hit a grain cause much more of a change in the emulsion than the last few.

As a result, the average effect of a single photon on a darkly lit grain is much greater than the average effect of a photon on a well-lit grain, which also makes it a bit harder to saturate a grain. Thus, film possesses a much, much better dynamic range – the ability to capture details in shadows and bright lights. A part of a scene that would look completely dark or washed out on video would still contain detail on film. This also gives directors a lot more artistic options and control over how a scene looks, and makes feasible a lot more sources of natural lighting. Using this dynamic range allows a director to give an impression of depth to a scene, or communicate mood.
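
If it helps to see the linear-vs-logarithmic point in numbers, here’s a small Python sketch. The scene values, the “full well” limit and the log curve are all invented for the example; they’re not real sensor or film-stock data.

```python
import numpy as np

# Scene luminance from deep shadow to bright highlight (arbitrary units,
# roughly a 10,000:1 range).
scene = np.array([0.5, 1.0, 2.0, 100.0, 2000.0, 4000.0, 5000.0])

# Toy linear sensor: charge proportional to light, clipped at an arbitrary
# "full well" level, then quantized to 8 bits.
full_well = 1000.0
linear_8bit = np.round(np.clip(scene / full_well, 0, 1) * 255)

# Toy logarithmic curve standing in for film: the first photons count for
# more, so shadows are stretched and highlights compressed instead of clipped.
log_8bit = np.round(np.log10(1 + scene) / np.log10(1 + scene.max()) * 255)

print(linear_8bit)  # [0, 0, 1, 26, 255, 255, 255]: shadows crush, highlights clip
print(log_8bit)     # [12, 21, 33, 138, 228, 248, 255]: every step stays distinct
```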

Cartooniverse touched on this when he described how difficult video shoots are to light. Since details in bright areas and dark areas are lost, scenes must be more uniformly lit when shot on video. And this frequently makes a scene look “flat,” or “cold.” Mood lighting on video is tough, and effects like backlighting and lens flares are difficult, if not impossible, to pull off aesthetically.

Color: In emulsions, the color you see is a property of the emulsion. The emulsion is sensitive to a range of colors, but those are not necessarily the exact same colors the emulsion will take on when developed. A good example of this is the old specialty cinema stocks of the Sixties and Seventies (the names elude me now… Technicolor, etc.?). These had some not-so-subtle departures in color registration that made for interesting aesthetic effects, like enhancing a starlet’s blue eyes or red lipstick. And these emulsion tricks are still used today, although much more subtly. Film stocks are chosen for the kind of lighting and the mood the director wishes to evoke. Go take a look at Three Kings and notice how different the hues and saturation are from something like Saving Private Ryan, for instance.

Now in video cameras, the light is filtered before it reaches the CCD. There’s one CCD with an appropriate filter for blue light, and one each for red and green as well. But once the light is past the filter, the CCD really doesn’t care what color it is. And once it becomes digitized, that information is just a number - it has no inherent color. The editor can make that blue channel represent any color he wants on screen. (Of course, an editor could do that with film after it’s digitized, too, but the emulsion has already done its job “interpreting” color. Digitized film still carries the influence of the emulsion.)
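
A trivial sketch of that “a channel is just numbers” idea, with an invented frame and invented tint weights:

```python
import numpy as np

# A digitized frame is just height x width x 3 numbers per pixel.
frame = np.random.default_rng(1).integers(0, 256, (2, 3, 3))

# Nothing about the third plane is inherently "blue"; it is only a number
# per pixel. In the edit, those numbers can be sent to any output color:
swapped = frame[..., ::-1]                # treat the blue plane as red and vice versa

tint = np.array([1.0, 0.6, 0.2])          # arbitrary warm tint weights
from_blue_only = frame[..., 2:3] * tint   # drive all three outputs from the "blue" numbers

print(swapped.shape, from_blue_only.shape)   # (2, 3, 3) (2, 3, 3)
```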

Another important difference in color is that videotape only uses three numbers to represent color. This limits the number of different colors that can be represented. On a CIE color map, which is sort of a rounded triangle shape, RGB or any other three-number color scheme cannot cover the entire map. Some of the extreme colors, like deep reddish and purplish blacks, just cannot be represented. They cannot even be represented with digitized film. The only way to reproduce these colors is to use an analog medium from recording to screening. (Or to change the entire video industry to a four-color format!)
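
Here’s a rough numerical sketch of the “three numbers can’t cover the whole map” claim. It uses the modern sRGB primaries as a stand-in for whatever primaries a given video format actually has, and a saturated spectral green as the test color; the chromaticity figures are approximate.

```python
import numpy as np

# Monochromatic ~520 nm green, roughly (x, y) = (0.074, 0.834) on the CIE
# 1931 diagram (approximate values, close enough for the demonstration).
x, y, Y = 0.074, 0.834, 1.0
X, Z = (x / y) * Y, ((1 - x - y) / y) * Y

# Standard XYZ -> linear sRGB matrix; the sRGB primaries stand in for
# whichever three primaries a video format actually uses.
M = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

rgb = M @ np.array([X, Y, Z])
print(rgb)
# The red (and blue) components come out negative: this color lies outside
# the RGB triangle, so no set of three non-negative numbers can reach it.
```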

Finally: detail. The standard NTSC broadcast format is equivalent to 460x480 pixels. VHS, DVD, and other formats have slightly different resolutions, but essentially the same. This makes for a total of about 0.25 million pixels. On film, where the number of grains on a frame varies according to their size (which determines the speed of the film, i.e. its sensitivity to light), there can be as many as 10 million individual grains per frame, or more. (This assumes a 10 micron grain size on a 32 mm square frame. I couldn’t find any manufacturer numbers on grain counts.)
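
Spelling out the arithmetic behind those two figures (same assumptions as above: a 10 micron grain and a 32 mm frame, neither of which is a manufacturer’s number):

```python
# Video: the NTSC-equivalent pixel grid quoted above.
video_pixels = 460 * 480                 # = 220,800

# Film: grains of ~10 microns packed across a ~32 mm frame.
grains_per_side = 32_000 / 10            # 32 mm expressed in microns, / grain size
film_grains = grains_per_side ** 2       # = 10,240,000, i.e. ~10 million

print(video_pixels, int(film_grains), round(film_grains / video_pixels))
# 220800 10240000 46  -> in the ballpark of the "40 times" figure below
```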

So, there’s 40 times as much information recorded on film? No, not exactly. Film is a lot noisier, and grains are randomly shaped and randomly sized (within a range) and randomly arranged. This creates a lot of random noise that requires oversampling to eliminate. But then, sometimes this random noise is desired, and is used for effect. It’s called “graininess.” And so a much faster film with larger grains is selected.

But video isn’t immune from noise, no indeed. In fact, video is susceptible to worse noise problems than film. Film’s noise is random, and therefore easier to hide and more tolerable. Video noise is usually filled with lots of contrast and jagged edges that seldom have any aesthetic appeal, and can be impossible to hide.

Have you ever seen someone wearing a small-print plaid jacket on the news? Notice how the pattern goes wild on your screen? And gyrates crazily when the person moves or the camera pans or zooms? That’s called aliasing. It happens because the pixels on a CCD camera are arranged in a perfectly rectangular array. If you attempt to use such an array to image another rectangular array, the recording you get will not necessarily be what you expected. The mathematics are a bit beyond the scope of my article here… just take it on faith. I can say that the same math is responsible for making wagon wheels look like they’re spinning backwards, except in that case, it’s because you’re sampling regular intervals of time, not space.
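
If you’d rather see it in numbers than take it on faith, here’s a one-dimensional toy version in Python. The stripe and pixel counts are arbitrary; the point is that a stripe pattern too fine for the pixel grid comes back as a coarse pattern that was never in the scene.

```python
import numpy as np

# A fine stripe pattern: 9 stripe cycles per unit of distance on the jacket.
stripe_freq = 9.0

def pattern(pos):
    """The fine stripe pattern as a function of position."""
    return np.sin(2 * np.pi * stripe_freq * pos)

# A toy sensor row that samples it at only 10 pixels per unit, well under
# the 18+ samples per unit the Nyquist criterion would demand.
pixels = np.arange(0, 1, 1 / 10)
sampled = pattern(pixels)

# Those samples are exactly what a 1-cycle pattern would have produced
# (with the phase flipped): the recorded "plaid" is a false, coarse pattern.
alias = -np.sin(2 * np.pi * (10 - stripe_freq) * pixels)
print(np.allclose(sampled, alias))   # True
```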

This same aliasing is also responsible for “video crawl” - ever notice how sometimes the vertical edges of things on screen have a sort of crawling marquee appearance? Same deal. And you don’t see it on film. In fact, when film is sampled for digital editing and videotape, special processing is included to prevent aliasing from ruining the advantage of using film in the first place.

The only time I can remember aliasing being used on purpose is on the original Star Trek. Lt. Uhura had a bizarre, spidery-looking screen at her console that rotated and produced hypnotic patterns. These were Moiré patterns created by two rotating filters with different pitch screens. The potential uses of aliasing as a cinematography tool are very few. Usually they’re totally ugly.

I’m not certain, but I suspect that some video cameras intentionally oversample to help minimize the instances of aliasing.
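
Continuing the stripe toy from above (and this is just the general principle, not a claim about how any real camera is designed): sample four times finer, average back down to the pixel grid, and the false pattern loses most of its punch.

```python
import numpy as np

stripe_freq = 9.0

def pattern(pos):
    """The same fine stripe pattern as in the earlier sketch."""
    return np.sin(2 * np.pi * stripe_freq * pos)

# Direct sampling at 10 pixels per unit (as before): full-strength alias.
direct = pattern(np.arange(0, 1, 1 / 10))

# Oversample 4x (40 samples per unit, comfortably above Nyquist), then
# average each group of 4 back down to the 10-pixel grid.  The averaging
# acts as a crude low-pass filter applied before the final sampling.
oversampled = pattern(np.arange(0, 1, 1 / 40))
averaged = oversampled.reshape(10, 4).mean(axis=1)

print(np.ptp(direct))    # ~1.9 : the false pattern swings nearly full scale
print(np.ptp(averaged))  # ~0.24: most of the aliased energy is filtered out
```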

One more interesting note: a colleague of mine worked at a place where she was designing a cinematic CCD chip for 16/9 format cameras that the parent company wanted to sell to the studios exploring digital cinema. They were well along in the design, and were testing the readout speeds when they found out their chip was TOO good. They had assumed that a higher resolution was better, so they made their chip something like 4000 x 2250 pixels: 9 million pixels! But the studios they tried to sell this camera to weren’t interested. The studios’ market research had determined that the average movie audience couldn’t distinguish anything much better than 1024 x 768 pixels, and so they didn’t want to burden themselves with all the extra memory and processing power necessary to handle 9 megapixels, when 1 megapixel would do. So my friend’s cinema video chip was sidelined.

Ugh! When I heard this, I stopped looking forward to digital cinema, and began dreading it.

And don’t even get me started on compression. This ain’t the Pit – I might get in trouble.

Ugh - I just previewed this… Longest. Post. Ever!