Once, for the hell of it, I dug out an old videotape from 1987 of soap operas that my parents had recorded, and my friends and I watched it. My God, the people on those shows looked absolutely HAGGARD. All the men looked like raging alcoholics, with craggy, prematurely wrinkled faces and crow’s feet around their eyes, and they looked like they couldn’t wait for the cameras to stop rolling so they could grab their bottle of Scotch. And the women - they all looked like streetwalkers, with caked-on makeup, overdone hair, and the early stages of alcoholism, cocaine abuse and premature aging showing through the whole facade. All of them looked and sounded absolutely miserable, like being on the soap opera was some kind of degrading act. It was unbelievable to watch, the cheapness and tawdriness of it all - the sets, the lighting, the camera angles, etc.; it was like a bad high school play, but with tragically over-aged actors.
And, Jesus Christ, the commercials were so different back then too.
Yes, this is a big part of it; they all look like they’re lit with banks of fluorescent lights. This was also just kind of the lighting style of the 60s and 70s. Look at episodes of major network dramas from that era. They were filmed on film, not video, but they all have that same over-lighted look. Check out an episode of Mission: Impossible. Even Roots–check out the scenes supposedly filmed in slave cabins: everyone has 7 or 8 distinct shadows.
*Scrubs* is shot on film, but some of JD’s fantasies are purposely shot on videotape.
In the very first episode, he has a fantasy of himself and Elliott being married, complete with a laugh track and the different look.
In the episode *My Life in Four Cameras*, the show switches from its usual format to one where JD imagines what the hospital would be like as a sitcom, and the whole ‘look’ of the show changes. The difference is quite glaring.
One of the differences between video and film is the contrast ratio. In simple terms, this is the range that the medium can accommodate between the brightest part of the image and the darkest. It also pertains to how faithfully the medium can record information about those high and low points and all shades in between.
Film has a much higher contrast ratio than video. When film is well used, and you have a well-lit scene and a good cinematographer, the film stock can faithfully record the brightest and darkest parts of the image, and all the fine shades in between. Video has a lower contrast ratio. If you aim a video camera at a scene that contains both very bright and very dark elements, the tape will tend to favour one or the other - so you either get the brighter parts faithfully represented but the darker parts all crushed together, or vice-versa. The video tape will also tend to preserve fewer shades of light and dark in between these extremes, so that everything is represented by, say, a dozen intermediary shades or ‘bands’ of luminosity as opposed to fifty on film (simplified example).
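The banding described above can be put in toy-model terms: quantize the same smooth luminance ramp to fewer distinct levels and you get coarse visible “bands” instead of fine gradation. The level counts (12 vs. 50) are the simplified figures from the post, not real specifications of either medium.

```python
# Toy illustration of banding: mapping a smooth 0..1 luminance ramp
# onto a fixed number of discrete shades.

def quantize(levels, samples=1000):
    """Snap a smooth 0..1 luminance ramp to a fixed number of bands."""
    ramp = [i / (samples - 1) for i in range(samples)]
    return [round(v * (levels - 1)) / (levels - 1) for v in ramp]

film = quantize(50)   # finer gradation: neighbouring samples differ subtly
video = quantize(12)  # coarse gradation: visible bands of equal luminosity

print(len(set(film)), "distinct shades on 'film'")    # 50
print(len(set(video)), "distinct shades on 'video'")  # 12
```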
This does give rise to problems when some films are inexpertly transferred to video, for example when being prepared to be shown on TV. Many films of the Star Wars era used black composite mattes to layer one element (such as a spaceship) on top of another (such as a planet background). See the scene on film, at the movie theatre, and it looks great. See it on TV, and you can see the matte lines surrounding one or more of the elements. This is because some fine gradations of shade are lost in the process, and some ‘clipping’ occurs - the background element gets rendered at one level of luminosity and the foreground element gets rendered at a different level, so the difference is suddenly noticeable.
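The ‘clipping’ mentioned above can be sketched numerically: once several distinct shadow tones fall below what the medium can record, they all collapse to the same level and the detail separating them is gone. The 0.1/0.9 limits here are arbitrary assumptions for illustration.

```python
# Toy clipping model: the medium only records luminance between lo and hi;
# anything outside collapses to the limit, crushing the detail together.

def clip(v, lo=0.1, hi=0.9):
    return max(lo, min(hi, v))

shadows = [0.02, 0.05, 0.08]       # three distinct dark tones in the scene
crushed = [clip(v) for v in shadows]
print(crushed)                      # [0.1, 0.1, 0.1] - one flat black
```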
This isn’t as much of a problem now, because the technology has advanced all round and people in the industry have become smarter at eliminating the problem.
I seem to remember that Newhart was on video in its first season and on film starting from the second (or some subsequent) season, presumably as it became apparent that the show would survive and merited a bigger budget. Or maybe it was some different type of video format? It looked a lot less cheap, anyway.
And call me crazy, but haven’t there been some sitcoms that started on film and then transitioned to videotape? I’m thinking specifically of Happy Days–though I’ve never figured out if the show was even supposed to be a sitcom in its early days.
I think the sitcoms that are done in front of a studio audience are taped, and the ones that are not (Scrubs, My Name Is Earl, The Office, for example) are filmed. IIRC, the first season of *Happy Days* was not filmed in front of an audience. The later seasons have that videotape look, with the harsh lighting.
I am learning so much…thank you all. I know that sounds a little corny but I don’t really know what questions to ask. Merely reading what you guys have written is interesting.
Regarding the difference between 1970s British and US TV shows, part of the reason is that there were no affordable portable video cameras at the time, so film was the only option on location. The BBC used economical 16mm film cameras for that, but because it was relatively low quality, and because they already had the facilities, they would use higher quality video cameras in the studio.
American shows were often made by film studios, who were naturally set up for film in the studio as well as on location. And I believe that they often used higher quality 35mm too. So for them, there was no need for those jarring British-style transitions between film and video. US shows of the time such as Taxi, Mork & Mindy, Cheers could be shot entirely on film at decent quality.
It’s funny how it has come full circle - in the early '80s, portable video cameras became available, and suddenly many British productions were shot entirely on video, on set and on location - things like Auf Wiedersehen Pet and later episodes of Tales of the Unexpected. And now, they still shoot on videotape because it’s more economical, but go to great lengths in post-processing to make it look like film.
Personally I miss those all-video productions of the '80s. In the right hands there could be a sort of dreamy hyper-real feel to it that you don’t get from film.
The best show that used the mix of video and film was *The Larry Sanders Show*. It was shot mostly on film, but the scenes where they were taping the show-within-the-show switched between video and film. So when the show was being viewed “live” you would see it on video, and when they cut to commercial everything went back to film.
I think this is not the case. A friend and I went to a “taping” of “Two and a Half Men” a couple of years ago, and they shoot on film, with video assist – meaning (as I understand it) a smaller video camera is attached to the film camera and tracks and zooms with it so that the director, production crew, and audience can see it on the monitors, but the final product is film. (I’m happy to be corrected on any points where I’m wrong.)
As pointed out earlier, “Newhart” was on film for much of its run. I believe “Everybody Loves Raymond” was shot on film as well. In fact, according to this link, almost all network sitcoms are shot on film. It’s an interesting take on whether or not they should go back to video.
Depth of field is a lens issue and relates to how much of a given scene is in focus. The lens brings an image to a point inside the camera called the image plane, and it doesn’t care if there’s a CCD or a piece of film there. The main control over depth of field is the lens aperture - close the iris down to something small like f/22, and pretty much everything in front of the lens will be in focus. Open it up to f/4, and the depth of field goes down to a couple of inches.
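The aperture effect above can be put in rough numbers with the standard thin-lens depth-of-field approximations. The focal length, focus distance, and circle-of-confusion figure below are assumed example values, not anything from the thread; 0.03 mm is just a commonly quoted circle of confusion for a 35mm frame.

```python
# Rough depth-of-field sketch using standard thin-lens approximations.

def dof_limits(f_mm, N, s_mm, c_mm=0.03):
    """Near/far limits of acceptable focus (all distances in mm).
    f_mm: focal length, N: f-number, s_mm: focus distance,
    c_mm: circle of confusion (assumed 0.03 mm for a 35mm frame)."""
    H = f_mm**2 / (N * c_mm) + f_mm                       # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float('inf')
    return near, far

# Hypothetical 50 mm lens focused at 2 m: stopping down widens the zone.
for N in (4, 22):
    near, far = dof_limits(50, N, 2000)
    print(f"f/{N}: in focus from {near/1000:.2f} m to {far/1000:.2f} m")
```

Stopped down to f/22 the in-focus zone spans a couple of metres; wide open at f/4 it shrinks to a few tenths of a metre around the subject, which is the effect the post describes (though “a couple of inches” would need an even faster aperture or a closer subject).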
As for that adapter - interesting gizmo that almost looks like it’s in search of a problem to solve. It just lets you use old manual-focus 35mm SLR lenses on a motion picture camera. Apparently, its main goal in life is to let someone with a collection of old lenses use them instead of spending thousands and thousands of dollars on new lenses.
Not quite. Depth of field is affected by the lens aperture and the physical size of the subject projected on the imaging medium. Since a 35mm frame is larger than the CCD in most video cameras, the projected image is larger and the depth of field is less.
The adapters have a mount for a lens and a ground glass the same size as a 35mm frame. The lens projects onto the ground glass, allowing 35mm-like DoF effects, and the video camera records the image from the ground glass.
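The sensor-size point can be sketched with the same thin-lens approximations: matching the framing on a sensor one-fifth the size of a 35mm frame needs one-fifth the focal length, and the circle of confusion scales down by the same crop factor, so the in-focus zone gets much deeper. The crop factor of 5 and the other numbers are illustrative assumptions, not measurements of real cameras.

```python
# Why a small sensor gives deeper depth of field at the same framing:
# focal length and circle of confusion both shrink by the crop factor.

def dof_span(f_mm, N, s_mm, c_mm):
    """Total depth of field in mm (thin-lens approximation)."""
    H = f_mm**2 / (N * c_mm) + f_mm                       # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = s_mm * (H - f_mm) / (H - s_mm) if s_mm < H else float('inf')
    return far - near

crop = 5.0                                          # assumed crop factor
film = dof_span(50, 4, 2000, 0.03)                  # 35mm frame
video = dof_span(50 / crop, 4, 2000, 0.03 / crop)   # small CCD, same framing
print(f"35mm DoF: {film:.0f} mm, small-sensor DoF: {video:.0f} mm")
```

This is why the ground-glass adapter works: the lens forms its image at 35mm-frame size first, so you get the shallow 35mm depth of field, and the video camera merely photographs that image.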
I think framerate is a significant factor in the soap opera effect. Motion looks much smoother - an effect I see duplicated if I turn on motion enhancement on my HDTV. The TV adds interpolated frames to 24 fps source material, and, to me, it has the side effect of making film look way too smooth, like a soap opera.
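That “motion enhancement” can be sketched in miniature: synthesize an in-between frame from each pair of neighbouring source frames. Real TVs use motion-vector estimation rather than the naive 50/50 blend assumed here; this just shows where the extra frames come from.

```python
# Toy frame-rate interpolation: double the frame rate by inserting a
# blended frame between each pair of source frames. Frames are modelled
# as flat lists of pixel luminance values.

def interpolate(frames):
    """Return the sequence with a 50/50 blend inserted between frames."""
    out = []
    for a, b in zip(frames, frames[1:]):
        out.append(a)
        out.append([(pa + pb) / 2 for pa, pb in zip(a, b)])  # synthetic frame
    out.append(frames[-1])
    return out

source = [[0.0, 0.0], [1.0, 1.0]]   # two frames of a brightening image
print(interpolate(source))           # [[0.0, 0.0], [0.5, 0.5], [1.0, 1.0]]
```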
Soaps are shot with multiple cameras. For example, they will have cameras on both actors during a conversation, and in addition may have a third which shows both of them. This is done to avoid having multiple takes for each scene. Because of this they need to set up the lighting differently; it’s a compromise so that all camera positions have decent lighting. In contrast, a movie or TV show shot with a single camera can optimise the lighting for each camera position.
Thanks. I hadn’t looked too deeply into how they work yet, just that they do.
Last year I worked on two short movies, both shot on the same HD video camera, but one with and one without one of the adapters in question. The movie that used the adapter has a much more professional look than the one without.