Questions About Videotaped v. Filmed Sitcoms

OK Arabella Flynn, I have read your posts, though not in the depth they deserve. I am very curious about you and am wondering if you would be willing to talk a bit about yourself. I am pretty sure you are British, as you do not have the North American bias toward film as a capture medium (a bias which has disappeared in the last 5 years). I also guess that you work in the entertainment industry in post-production, and have done so for some time (15+ years?). I am very respectful of privacy, but am wondering if you would be willing to describe the work you do and how long you have been doing it. If you would prefer not to discuss it, I completely respect that. Thank you for your very informative posts.

Is it just me, or do filmed sitcoms look a lot worse than filmed shows of other types? They just look less clear: compare an episode of Big Bang Theory or Mad About You with, say, an episode of Lost or 24. I preferred the videotaped sitcoms because they had sort of a live feel and you could see everything more clearly.

I feel I may be missing the point of this thread, because you seem to answer your own question - group A were shot on videotape, group B on film [edit - or processed to look like film]. The difference is well known - frame rate being the most important factor, IMO. In the case of Roseanne, the credit sequence is film, and the show itself is videotape.
It is slightly complicated by the fact that nowadays interlaced material is quite often “filmized” - processed to make it look like film. They have even been able to do this with live footage for some years now. Personally, I think it’s overdone, and hopefully is just a passing fad. There’s nothing wrong with the video look.
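
For anyone curious what “filmizing” amounts to, here’s a toy Python sketch (my own illustration of the general idea, not any actual broadcast algorithm): weave each interlaced frame’s two fields into one progressive image, then drop frames to get from 30 fps down to 24, halving the temporal sampling.

```python
# Toy "filmizing" sketch (illustrative only): collapse interlaced
# fields into progressive frames, then decimate 30 fps to 24 fps.

def weave(a_field, b_field):
    """Interleave odd-line (A) and even-line (B) fields into one frame."""
    frame = []
    for odd_line, even_line in zip(a_field, b_field):
        frame.extend([odd_line, even_line])
    return frame

def filmize(interlaced_frames):
    """interlaced_frames: list of (a_field, b_field) pairs at 30 fps."""
    progressive = [weave(a, b) for a, b in interlaced_frames]
    # Drop every fifth frame so 30 fps becomes 24 fps.
    return [f for i, f in enumerate(progressive) if i % 5 != 4]

# Five dummy frames, each field a short list of scanlines:
frames = [([f"odd{i}"], [f"even{i}"]) for i in range(5)]
print(len(filmize(frames)))  # 4 -- five frames in, four frames out
```

Real filmizing is far more sophisticated (motion compensation, grading, grain, and so on); this just shows the frame-rate half of the effect.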

I’m 30, American (though I read and watch a lot of British stuff), and work in IT despite having a degree in sociology. I read the Sherlock Holmes stories when I was about ten. Failing to realize that real people weren’t supposed to be able to learn to do the magic detective-y thing, I did. I do it to everything. The sociology of media and technology just happens to be something I have been cramming into my head for years now.

My policy is generally that I’ll take a stab at answering any question anyone pitches at me, with the caveat that I have absolutely no paper credentials in 99% of the things I spout off about. Personal questions usually included, barring the usual physical address and credit card number stuff. :)

Sampiro’s question is a good one, though – learning to tell how something was shot and what kinds of conversions it’s been through from looking at the picture is a handy skill to have in media archiving, and it’s sometimes really difficult to explain exactly what you’re seeing. I can do it pretty well, but most people reflexively edit out the characteristics of the medium and focus on the program content, and have no idea there are any differences at all.

The difference Sampiro is talking about is clearly the difference between frame rates: 30 fps interlaced video (60 images per second) versus 24 fps film with 3:2 pulldown. Pulldown is the process of spreading 24 film frames across 60 video fields. Each video frame has two fields: the odd-numbered scanlines are drawn first (the A field), then the even-numbered lines (the B field). The action continues between the two, so each frame is actually two different images blended together. So even though video runs at 30 fps, you are actually seeing 60 images per second, compared to film’s 24.
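
If it helps to see the timing, here’s a tiny Python sketch (purely illustrative numbers, not any real capture code) of why a 30 fps interlaced signal delivers 60 distinct images per second:

```python
# Sketch: each 30 fps interlaced frame is two fields captured
# 1/60 of a second apart, so every frame blends two distinct moments.

FIELD_RATE = 60.0  # fields per second

for frame in range(3):
    a_time = (2 * frame) / FIELD_RATE      # odd scanlines (A field) drawn first
    b_time = (2 * frame + 1) / FIELD_RATE  # even scanlines (B field), 1/60 s later
    print(f"frame {frame}: A field at {a_time:.4f}s, B field at {b_time:.4f}s")
```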

When film is transferred to video, the film frames are spread across the video fields in such a way that 24 film frames stretch to fill 30 video frames: every four film frames become five video frames. Where A B C D are four film frames, the video frames are constructed like this: AA BB BC CD DD. So when something shot on film is shown on TV (in standard def, at least), not only are you seeing 24 images per second instead of 60, but every other film frame stays on screen slightly longer (three fields instead of two).
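
And here’s a toy sketch of that cadence (again illustrative only, using hypothetical frame labels): four film frames spread over ten fields, which pair up into the five video frames above.

```python
# Sketch: 2:3 pulldown. Four film frames (A, B, C, D) are spread across
# ten video fields, i.e. five interlaced video frames, so 24 film frames
# per second fill 30 video frames per second.

film_frames = "ABCD"
cadence = [2, 3, 2, 3]  # how many fields each film frame occupies

fields = []
for frame, count in zip(film_frames, cadence):
    fields.extend(frame * count)

# Pair consecutive fields into video frames:
video_frames = ["".join(fields[i:i + 2]) for i in range(0, len(fields), 2)]
print(video_frames)  # ['AA', 'BB', 'BC', 'CD', 'DD']
```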

Shooting on film gave better image quality: higher resolution, better exposure latitude, the ability to use shallow depth of field, and so on. But it was also more expensive, so it was used for the bigger-budget shows. Film also holds up better than video shows of the ’80s and earlier, since the video cameras of that era were so much poorer, especially before the advent of digital sensors.

Since 30 fps video was used for soaps, news, and sitcoms, that 60-images-per-second feel became associated with cheapness. Peter Jackson shot The Hobbit at 48 fps and screened ten minutes of it for industry types, and the reaction of many was that it looked like video: what should have been epic fantasy came off a little cheesy. High frame rates feel more like real life, since that is what video is most often used to capture, but when it comes to make-believe, realism isn’t necessarily something you want.

For a sitcom, you might go with video if you want your viewers to feel like they are right there with the studio audience, watching a play. For a show like Roseanne, where viewers strongly identified with the characters, the producers might want that kind of immediacy even though they had the budget for film. Something like Seinfeld, which is less realistic and more absurd, is better served by the distance that film creates.

I sometimes wonder if it would be the other way round if the “senior” format of film had had the higher frame rate and live video or videotape ran at 24 fps. Would we expect dramatic or non-realistic material to have the higher frame rate, and associate the lower frame rate with “real life”, or is there some fundamental perceptual reason for it being the way it is?

Watching 70s Britcoms like Monty Python & Fawlty Towers as a kid is what first made me keenly aware of the difference between film and video. Those two shows used both in every episode: interior studio scenes were on videotape, but all exterior scenes were shot on film. Because Fawlty Towers was a traditional sitcom (not non-sequitur zaniness like Python) yet still often cut quickly back and forth between the two within a single scene, it was even more jarring. Another good show for spotting the two together was HBO’s great The Larry Sanders Show. When you were supposed to be seeing the real (i.e. ‘fake’) talk show, it was through a studio video camera, but all the behind-the-scenes shots were on film (and usually shot with a handheld camera).

In that latest Python documentary Terry Jones says that the network was going to erase all the original Python shows just to reuse the tapes. He bought them from the BBC for like £900 (a lot of money in the early 70s), stored them in his attic, and pretty much saved Python from obscurity by eventually allowing it to be sold to PBS and shown in America!

This was standard operating procedure for quite a while. Studio-bound video cameras were of sufficiently high quality to do the original capture from the early 1960s onwards, but it was not until the 1980s that portable video rigs were practical for location shooting. Hence the phrase “film at eleven” – prior to camcorders that were small enough to be braced on the shoulder (with or without the actual recorder part being in a luggable box with a shoulder strap), the only way to get location footage was to use a film camera. News would go out at 6pm, but the film wouldn’t be developed and ready for broadcast until the story was repeated later that night.

Even after that, film was extensively used for sequences that needed effects applied during shooting or in post-production. A lot of the background plates for chromakey inserts were shot on film and played back through a telecine (film chain), with the foreground (i.e., the stuff in front of the bluescreen) captured on video and the two signals combined in the studio with a mixing board. Film was also used for sequences that were meant to be re-framed or zoomed later: from about the mid-’70s forward, it was possible to use a vision mixer to crop, frame, or zoom video footage, but the drop in quality was very noticeable. Film has much higher resolution than vintage vidicon cameras did, and could be transferred or composited several times before the picture quality dropped below acceptable for broadcast.
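
The keying logic itself is simple enough to sketch in a few lines of Python (a toy digital version of the idea; the vintage hardware did this on the live analog signal): wherever the foreground pixel is “blue enough,” substitute the background-plate pixel.

```python
# Toy chromakey sketch (illustrative only, not the analog hardware):
# replace bluescreen pixels in the foreground with background-plate pixels.

def chroma_key(foreground, background, threshold=100):
    """foreground, background: equal-length lists of (r, g, b) pixels."""
    out = []
    for fg, bg in zip(foreground, background):
        r, g, b = fg
        is_bluescreen = b > threshold and b > r and b > g
        out.append(bg if is_bluescreen else fg)
    return out

# A 3-pixel scanline: one red subject pixel, two bluescreen pixels.
fg = [(200, 30, 40), (10, 20, 220), (15, 25, 230)]
bg = [(0, 128, 0), (0, 128, 0), (0, 128, 0)]
print(chroma_key(fg, bg))  # [(200, 30, 40), (0, 128, 0), (0, 128, 0)]
```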