Difference in Film Types: Live vs Still

Before I begin I just want to say that in my 5+ years of surfing the net, I have never come across a greater collection of minds and opinions on a myriad of topics. I discovered this site 3 days ago and have spent the better part of those 3 days absorbing as much of this forum as I can.
So, my first question is for any resident media students/film buffs/production crews or whoever can tell me the difference between the two types of film used in shooting television shows. Obviously I don’t know the terminology, so I will do my best to explain what I mean by the two types.

One type (from here on out referred to as “A”) appears to be much more “live” than the other (let’s call it “B”). By that I mean A looks sharper, clearer and more fluid than B. Examples of shows shot with type A film are Home Improvement, Roseanne, Fresh Prince of Bel-Air, Saved by the Bell, Golden Girls, Alf, Growing Pains, Who’s the Boss and basically any sporting event or live television. I’m sorry that I can’t list any current shows, mainly because I don’t watch as much TV as I used to and also because it seems that more shows nowadays are shot with type B film.

Examples of type B shows are Friends, Boston Public, 7th Heaven, Everybody Loves Raymond, Cheers, That 70’s Show, Coach, and Frasier.

I hope that is a large enough sample size to give someone who can answer me an idea of what I am talking about. I honestly don’t know if it is indeed a different type of film that accounts for the difference or if it is something that happens in post production.

Not an expert by any means, but some shows are filmed and some are videotaped.

Maybe that’s the difference.

Essentially, yes, that’s the difference. Look at the cameras on David Letterman or Monday Night Football–those aren’t film cameras, they’re video cameras. Same thing with current situation comedies (except for those like Malcolm in the Middle with no studio audience). The Academy of Television Arts & Sciences distinguishes these shows as single-camera (film) and multi-camera (video).

It is similar to comparing a photo taken with a digital camera to a scanned print: scanned photos look very different.

That makes sense (filmed vs. videotaped), but then why does the end of The Cosby Show (which looks like type A camera work) say, “The Cosby Show is filmed before a live studio audience”?

Also, it would seem that filming a show would cost more than videotaping it, so why do sitcoms and such film them?

The primary difference between the two is the frame-refresh rate.

Film, as seen in theaters, uses, as I recall, 48 frames per second (actually only 24, but two shots of each) while videotape refreshes at closer to 60 frames per second without the duplication.

Even at 60/sec, your mind can tell the difference: both images appear to move fluidly, but the video seems more “real”.

Note there are exceptions: in the movie Natural Born Killers, Tarantino altered the frame rate of certain sections to give it a “video tape” look. Obviously the actual refresh rate of the projectors wasn’t altered, so he most likely took out the redundant images from the usual 24/sec. (Making for 48 ‘new’ images, with no duplication, per second.)

Plus video can have more or less whatever artificial refresh rate you want, so video can be shot to ‘look’ like film if so desired.
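To put a rough number on that, here is a toy Python sketch (entirely my own illustration; the function and frame counts are assumptions, not how any real camera or editing package works) of resampling 60 fps video down to a 24 fps cadence, which is the crudest part of faking a “film” look:

```python
# Toy sketch: pick the nearest 60 fps source frame for each 24 fps output step.
# Frames are just placeholder strings; a real tool would also adjust shutter,
# gamma and grain, none of which is modeled here.

def resample(frames, src_fps=60, dst_fps=24):
    """Nearest-frame resampling from src_fps to dst_fps."""
    duration = len(frames) / src_fps          # running time in seconds
    n_out = int(duration * dst_fps)           # number of output frames
    out = []
    for i in range(n_out):
        t = i / dst_fps                       # timestamp of output frame i
        src_index = min(int(round(t * src_fps)), len(frames) - 1)
        out.append(frames[src_index])
    return out

video = [f"frame{n}" for n in range(120)]     # 2 seconds of 60 fps "video"
film_look = resample(video)                   # 48 frames at a 24 fps cadence
print(len(film_look), film_look[:5])
```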

>> Film, as seen in theaters, uses- as I recall- 48 frames per second (actually only 24, but two shots of each)

I have never heard this. Can you show any evidence?

>> while videotape refreshes at closer to 60 frames per second without the duplication.

Video is interlaced and refreshes half frame every 1/60th second or one full frame every 1/30th second.
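If it helps to see that timing laid out, here is a small sketch (my own toy illustration, with a six-line “frame” standing in for real video) of two fields captured 1/60 s apart being woven into one frame, and why motion produces the comb-tooth edges characteristic of interlaced video:

```python
# Toy interlacing demo: the first field holds the even-numbered lines, the
# second field (captured 1/60 s later) holds the odd-numbered lines.

def capture_field(scene_at, t, lines=6, parity=0):
    """Grab only the lines of the given parity from the scene at time t."""
    full = scene_at(t)
    return {row: full[row] for row in range(parity, lines, 2)}

def weave(field_a, field_b, lines=6):
    """Interleave two fields back into one full frame."""
    frame = {}
    frame.update(field_a)
    frame.update(field_b)
    return [frame[row] for row in range(lines)]

# A "scene": an object whose horizontal position changes with time.
scene = lambda t: [f"x={round(100 * t)}" for _ in range(6)]

first  = capture_field(scene, t=0.0,      parity=0)   # first field
second = capture_field(scene, t=1 / 60.0, parity=1)   # second field, 1/60 s later
print(weave(first, second))   # alternating positions on alternate lines = combing
```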

As I pointed out, the same effect can be noticed between digital and scanned photos so I think refresh rate has nothing to do with it.

Easy, Sailor, just Google “film frame refresh rate” and take your pick.

One selection from the first page of results:
http://broadcastengineering.com/ar/broadcasting_format_conversion/

Not quite true. I’ve been to TV tapings where they use film in a three-camera setup. It was memorable as being film because they yell out “checking the gates” when they finally finish a scene (something to do with making sure there are no foreign objects in the area where the film passes). I believe almost all sitcoms now use film, in part because it will make the stuff re-runnable when everyone has HDTV.

It seems like most shows that are shot on videotape are edited in real time. That may be the deciding factor in which medium to use.

The difference is getting very slim to nonexistent. If one stays within resolution limits there are a host of consumer and semi-pro cameras that can make photos indistinguishable from film.

Video is headed in that direction but there are a host of other reasons why NTSC video often has a very different look than film, particularly with motion. One of the biggest reasons is the interlacing of consecutive fields in a frame but there is software available to give a “film” look to video that matches pricey progressive scan cameras.
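As an illustration of the simplest step such software might take (my own sketch, not any particular product’s algorithm), blending the two fields of each frame hides the 1/60 s offset between them, trading combing for a little motion smear:

```python
# Crude "blend" de-interlace: average vertically adjacent lines from the two
# fields into one progressive frame. Pixel values are invented for illustration.

def blend_fields(field_a, field_b):
    frame = []
    for line_a, line_b in zip(field_a, field_b):
        frame.append([(a + b) / 2 for a, b in zip(line_a, line_b)])
    return frame

# Two fields of a moving bright dot, captured 1/60 s apart.
field_a = [[0, 255, 0], [0, 255, 0]]   # dot in the middle in the first field
field_b = [[0, 0, 255], [0, 0, 255]]   # dot has moved right by the second field
print(blend_fields(field_a, field_b))  # motion is smeared instead of combed
```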

I think you are missing my point. It is not a matter of resolution. A scanned photo looks different from an original digital photo. They are easy to tell apart. Not better, not worse but different.

Doc Nickel, I think you might have meant something different than what I interpreted. I took “actually only 24, but two shots of each” to mean the same frame was shot twice. In other words, every couple of frames on the film were identical. I guess that’s not what you meant.

Some odds and ends about film vs. video.

The first sitcom to be shot with multiple film cameras was I Love Lucy (1951). Videotape did not exist at the time.
The first successful videotape system for broadcast television was introduced by Ampex in 1956. Color videotape was introduced in 1958.
The first sitcom to be shot on videotape was All in the Family (1971).
The first few episodes of the sitcom Newhart (1982) were shot on three-camera videotape, then the producers switched to three-camera film.
Many cinematographers believe that film is more flattering to actors’ faces than videotape.
Many British television shows of the 1960s and 1970s (e.g. Monty Python’s Flying Circus) used videotape for their studio work, and film for their location work, intercutting between the two.
Although all of the series Sports Night (1998) was shot on film, scenes that took place in the television studio were shot with three cameras, while scenes outside the studio used one camera.
Actors on U.S. shows that are primarily videotaped belong to the union AFTRA (American Federation of Television and Radio Artists), while actors on U.S. shows that are primarily filmed belong to the union SAG (Screen Actors Guild). Most actors belong to both unions.

Do you mean Oliver Stone? I was under the impression that Tarantino’s only contribution to NBK was the rough outline of a story, and he was in fact quite displeased with what Stone did to it.

Former projectionists checking in to confirm what is probably no longer in doubt:
The shutter on the projector has two blades (at least ours did); as it spins through its cycle, the audience gets two peeks at each frame. Hence the 48 flashes of light per second from an actual film rate of 24 fps.

It’s kind of neat to turn the motor by hand and watch all of the sprockets and so forth working in concert, complete with two flashes of light per frame. We hand-rotate it a bit every time we thread it to make sure the mechanism hasn’t stopped mid-frame.
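For anyone who wants the arithmetic spelled out, a trivial sketch (my own, just restating the numbers above):

```python
# Two-bladed shutter arithmetic: the film advances 24 frames per second, but
# each frame is flashed on screen twice before the next one is pulled down.
frames_per_second = 24
shutter_blades = 2
print(frames_per_second * shutter_blades, "flashes per second")        # 48
print(round(1000 / frames_per_second, 1), "ms per frame in the gate")  # ~41.7
```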

(Leaving area of professional knowledge now)

On to another source of film vs video difference: I think the last aspect of film that will be conquered by digital or video is color. People look just fine in my digital camera, but if they have on an unusual shade of purple, my camera can’t reproduce it faithfully. Plenty of other colors turn out odd as well.

The mechanism is easy to observe: Point an infrared remote control at any video/digital camera and press a button. You can see the flashing of the IR remote on the LCD as white light. The CCD’s sensitivity doesn’t match our eyes.
I can only imagine that there are lots of parts of the spectrum that get handled way differently between CCD and film and therefore produce the “Videotape” look.

Sorry, only one here.

Video lacks contrast. It just doesn’t have the depth of film.

To answer the OP, the shows look different because some are shot on film and some are shot on video.

To echo many previous posts, yes, the difference you are noticing is the difference between using video or using film.

Don’t pay too much attention to brief voice-overs, when they say the show ‘is taped’ or ‘is filmed’ before a live studio audience. They might use these terms correctly, but just as often they don’t bother to make any literal distinction between ‘taped’ and ‘filmed’. They’re only trying to tell you that the audience laughter is real as opposed to faked.

As to why the two media look different, there is a whole host of technical factors involved. Many have to do with chroma and luminance, or how either system preserves colour and brightness levels. For example, one major difference is that of ‘contrast ratio’. The contrast ratio is the ratio between the brightest and darkest parts of an image that a particular medium, such as film or tape, can accommodate. Video has a much higher contrast ratio than film. So if you have a shot of a bright sun and a tree in silhouette, the video camera can preserve the brightness of the sun and the solid black of the tree much more faithfully than the film camera, which will either have to darken the sun a little or render the ‘black’ silhouette of the tree as ‘not quite so black’, and you end up with maybe a slightly washed-out greyish colour instead of a true, deep black.
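As a purely illustrative sketch of what a limited contrast ratio does (the figures below are invented, not measurements of real film or video stock), anything darker than the brightest value divided by the medium’s ratio gets crushed up to a floor:

```python
# Invented numbers only: shows how a medium with a narrower contrast ratio
# turns a true black silhouette into "not quite so black".

def clip_to_ratio(scene_luminances, contrast_ratio):
    """Keep the brightest value; lift anything darker than brightest/ratio."""
    brightest = max(scene_luminances)
    floor = brightest / contrast_ratio
    return [max(level, floor) for level in scene_luminances]

scene = [10_000, 500, 1]            # bright sun, mid-tone, tree in silhouette
print(clip_to_ratio(scene, 1000))   # wide ratio keeps the silhouette nearly black
print(clip_to_ratio(scene, 100))    # narrow ratio: black becomes washed-out grey
```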

This has adverse consequences for movies featuring lots of special effects. As you know, lots of special effects shots involve ‘compositing’ or ‘matting’ several different elements together so that they all look like part of the same scene (say, a real actor and a background model of a monster). When they shoot this on film, they try to adjust all the colour values so that the different elements look like they were all in the same place at the same time and the illusion is convincing. Unfortunately, when the movie is transferred to videotape, the medium can distinguish many more fine levels of difference between, for example, different levels of black, and it becomes painfully obvious that the foreground elements are matted on top of the background elements.
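A toy example may make the matting problem clearer (my own sketch; the pixel values are invented): if the foreground element’s “black” sits even slightly above the background’s black, any medium that can resolve that small difference outlines the matte.

```python
# Toy compositing: paste the foreground over the background wherever the matte
# is opaque. A mismatch in black levels then traces the edge of the matte.

def composite(foreground, background, matte):
    """Per-pixel: take the foreground where the matte is 1, else the background."""
    return [f if m else b for f, b, m in zip(foreground, background, matte)]

background = [0, 0, 0, 0, 0, 0]     # true black night sky, shot separately
foreground = [5, 5, 200, 5, 5, 5]   # monster element with slightly lifted blacks
matte      = [0, 1, 1, 1, 1, 0]     # where the monster element gets pasted in

print(composite(foreground, background, matte))
# [0, 5, 200, 5, 5, 0] -> a medium that can resolve 0 vs 5 shows the matte edge
```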

I was always taught that in film, there are 24 fps, and between each frame was a blank, “black” frame. True? Not true?

No. Each frame appears exactly once, with no intervening blank frames.
Here’s an example of the shutter that shows each frame twice.

Here’s an example of the frames in a film. I searched high and low for a good image of 35mm frames with no success. This image of 16mm will have to do. Rest assured that 35mm does not have extra frames or double frames either.

Here’s a description of film formats. (side note: Check out the section on scope. It’s one of the most common formats and it’s kind of cool how the image is squished on the frame.)

I’m not talking about resolution either apart from not exceeding practical limits when making prints.

I’m sure you can come up with examples where that is true, but what are you comparing? A consumer flatbed scanner against a consumer digicam, or an oil-mounted drum scanner against a DSLR? It isn’t valid to compare a $400 consumer video camera to 35mm cine film, and the same applies to digicams.

Modern digicams are going in the direction of larger CCDs. Even with the same pixel count this makes for dramatically better images. The “size” of a pixel in a bitmap may be an abstraction, but the physically larger photosite on the CCD will gather more photons when compared to a smaller CCD with a lens having the same angle of view and relative aperture. This results in a better signal-to-noise ratio, more dynamic range and better control of color and contrast. We are reaching the point where it’s difficult for professionals to tell the difference between an image from a 35mm film camera and a 6-megapixel DSLR. The next generation of 10-14mp DSLRs might erase the difference altogether.
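A back-of-the-envelope sketch of that photon argument (the photon counts are hypothetical; only the square-root relationship matters): shot noise grows as the square root of the signal, so quadrupling the photosite area roughly doubles the signal-to-noise ratio.

```python
import math

# Hypothetical photon counts; the point is only that SNR scales as sqrt(signal)
# when photon shot noise dominates.

def shot_noise_snr(photons):
    """Signal-to-noise ratio under shot noise: N / sqrt(N) = sqrt(N)."""
    return photons / math.sqrt(photons)

small_site = 1_000            # photons collected by a small photosite
large_site = small_site * 4   # photosite with four times the area, same exposure

print(round(shot_noise_snr(small_site), 1))   # ~31.6
print(round(shot_noise_snr(large_site), 1))   # ~63.2 -> twice the SNR
```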