Which special effects CAN'T be done?

Damn Chronos.

I bow to your greater knowledge. So is there a language for the subtle errors of CGI? I’m thinking of something like what Foley artists have, for example, in describing a sound as “too bright”, etc. No-one criticises their work any more (in the same way CGI is criticised, I mean), despite the inherent artificiality of dubbing in all non-voice noises. Yet criticisms of specific CGI failures (to my ears, at least) don’t seem to have that level of specificity. They seem more like unhelpful football commentary that makes vague demands on a team (“They gotta get more field position, Bob! They gotta play more as a team!”) without translating into anything useful that individual players on the field could actually act on.

You may not be aware of it, but the John Waters movie Polyester and the Rugrats movie Rugrats Go Wild!, both of which used scratch-and-sniff cards, were really low-tech latecomers to the world of cinematic smell. There was a far more sophisticated system tried out in 1960 called (I kid you not) Smell-O-Vision. It was introduced by Mike Todd and only one movie was released using it – Scent of Mystery. An entire cinema was fitted out to release scents on cue as the film played:

http://web.uflib.ufl.edu/spec/belknap/exhibit2002/smell.htm

Evidently the Japanese have been giving this a go recently:

http://www.contactmusic.com/new/xmlfeed.nsf/mndwebpages/new%20world%20of%20smellovision_13_04_2006

Another thing with non-holographic 3D: focus. Your eye focuses and defocuses depending on how deep into the scene you’re looking. With 3D movies, the focus is limited to whatever the camera was focusing on when the images were taken. Even with the best 3D technology available at the moment, conventional filming/CGI cannot get over this.

Along the same lines, Vonnegut’s Tralfamadorians are IIRC described as perceiving every moment of time simultaneously so they see objects in every relative position they ever did or will occupy. No way could this be represented on-screen.

I think we can agree that any sense not available to human beings away from the theater (higher dimensions, Tralfamadorian/godlike sensing of time, real perception of UV or IR (not shifted into the visible), lateral line sensation, telepathy, and damned near anything else you can think of) won’t be accurately put on screen. And if it was, who would know it?

Not that you can’t try to produce the experience. False color has been used for IR vision for years now. I think a creative filmmaker could do a helluva job with the telepathic parties described by Alfred Bester in The Demolished Man – heck, look how good a job Bester did with only the printed page to work with. Imagine being able to use animation and morphing and split-screen and maybe 3D to depict that party scene.

There are a few terms for specific kinds of failure; one that springs to mind is the “underwater” effect, when CGI objects move in a way that is unrealistically smooth or fluid. The trouble is that CGI is such an active field of development that as soon as any such general failure is pointed out, everybody is hard at work to resolve it – often succeeding before terminology to describe it becomes popular.

Except that Uncanny Valley thing, which is certainly alive and well.

This has been my opinion of a lot of CGI in films. CGI characters always seem to move in a way that suggests to me that they don’t have any mass. A lot of the CGI in the Lord of the Rings movies particularly struck me that way, even though most people seemed to think the CGI characters were amazing. The way Gollum moved, and even more so the way the cave troll in the first movie moved, simply looked cartoony, like they didn’t have any mass. I’ve never been sure exactly what causes that, but it’s something I notice a lot.

A few years back I dabbled around a bit with 3ds Max and other 3D programs. AFAIK there isn’t so much a language of “errors” as there is a language to describe the properties of light and how it behaves. Probably similar to the language used in professional photography.

For example, one thing that dramatically increased the realism of CGI is the ability to render “diffuse” or bounced lighting: basically, the light that bounces around off other objects in the room before it reaches the surface you’re looking at. Here’s an example. As you can see, it has the effect of adding subtle highlights and shadows that add to the realism. It takes a long time and a lot of processing power, though, so you sometimes don’t see it in low-budget movies. That’s why crappy CGI looks flat and unrealistic.
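
To make that concrete, here’s a toy Python sketch (nothing to do with any real renderer; every vector, color and the 0.5 “form factor” is a number I made up) comparing a surface shaded with direct light only against the same surface with one crude bounce off a nearby red wall:

```python
# Toy sketch of direct vs. "bounced" diffuse lighting -- plain Python, not any
# renderer's real code. All values below are invented purely for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def lambert(normal, to_light, light_rgb, albedo_rgb):
    """Direct diffuse term: incoming light scaled by surface color and angle."""
    n_dot_l = max(0.0, dot(normal, to_light))
    return tuple(l * a * n_dot_l for l, a in zip(light_rgb, albedo_rgb))

to_light = (0.6, 0.8, 0.0)            # unit vector toward a white key light
white    = (1.0, 1.0, 1.0)

# A grey surface point whose normal tilts slightly toward a red wall on its left.
normal = (-0.6, 0.8, 0.0)
grey   = (0.5, 0.5, 0.5)
direct = lambert(normal, to_light, white, grey)

# The red wall is lit by the same key light; some of that red light spills back
# onto our grey point. The 0.5 is a hand-waved stand-in for the geometric term.
wall_bounce = lambert((1.0, 0.0, 0.0), to_light, white, (0.9, 0.1, 0.1))
to_wall     = (-1.0, 0.0, 0.0)
indirect = tuple(0.5 * max(0.0, dot(normal, to_wall)) * b * a
                 for b, a in zip(wall_bounce, grey))

with_bounce = tuple(d + i for d, i in zip(direct, indirect))
print("direct only  :", tuple(round(c, 3) for c in direct))
print("with 1 bounce:", tuple(round(c, 3) for c in with_bounce))
# The bounced version picks up a faint red tint and a bit more light overall --
# the subtle highlights mentioned above. Skip it and everything reads "flat".
```

Real global-illumination renderers trace many such bounces for every pixel, which is where all that render time and processing power goes.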

Even good CGI often looks unrealistic up close because of subtle differences in lighting. If an object is rendered outside of a live scene, it may look a little off because color from what should be nearby objects isn’t casting light or color on the rendered object. You don’t know why it looks odd; it just does.

Movement is another giveaway. CGI looks great from a distance, but when you are up close and the fine details aren’t moving like they should, your brain registers it as something odd. That’s why soft, complex and irregular things like people and animals are very hard to render realistically. You can fill a scene with CGI cars and you won’t know the difference, but a CGI person still stands out as weird.

Human faces and expressions, as already noted. We’re getting there, but by degrees. It’s easier to make a human-like expression on a non-human face and make it believable, but we’re not quite there with real humans.

Voices. This is a limitation more about money than anything else — it’s invariably cheaper to pay someone to read a line with the correct emotional inflection than to design technology to accurately replicate a real actor’s voice.

Fire. I don’t care what advances have been made, it still doesn’t look like real flame.

Destructive transformation of objects. Melting, burning, disintegrating, eroding, shifting, and exploding objects are extremely limited at the moment to things which can be modeled by polygons and polyhedrons. But we still can’t easily do something as simple as water changing dirt to mud, or a fire slowly turning a log to charcoal. It’s much easier to film the real thing than to replicate it in a computer.

Amalgamating particulates into semi-solid structures. Sure, there’s the Mummy sand effect, but how about a guy digging a hole, where the dirt piles up in a heap?
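
For what it’s worth, here’s a toy Python model of the heap problem (purely illustrative, nothing like how an effects house would actually do it): track column heights and let grains topple sideways until no column is too much taller than its neighbour, a crude stand-in for an angle of repose. Even this cartoon version has to shuffle individual grains around, which hints at why a convincing pile of movie dirt is so expensive.

```python
# Toy "dirt piling up in a heap" model -- invented for illustration only.
# Grains are dropped onto one column and topple sideways whenever a column
# gets more than MAX_SLOPE grains taller than a neighbour.

MAX_SLOPE = 2          # allowed height difference between neighbouring columns
heights = [0] * 21     # flat ground, 21 columns wide

def relax(h):
    """Topple grains until every neighbouring pair respects MAX_SLOPE."""
    moved = True
    while moved:
        moved = False
        for i in range(len(h) - 1):
            if h[i] - h[i + 1] > MAX_SLOPE:
                h[i] -= 1; h[i + 1] += 1; moved = True
            elif h[i + 1] - h[i] > MAX_SLOPE:
                h[i + 1] -= 1; h[i] += 1; moved = True

# Dump 200 grains onto the middle column, as if shovelled out of a hole.
for _ in range(200):
    heights[10] += 1
    relax(heights)

# Crude side-on ASCII view of the resulting heap.
for level in range(max(heights), 0, -1):
    print("".join("#" if h >= level else " " for h in heights))
```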

So does it count as a ‘special effect’ if it’s pretty much done for real? The only completely realistic, up-close depictions of weightlessness I’ve seen were the scenes in Apollo 13 which were made by actually filming the actors during ‘free fall’ in a plane (the Vomit Comet). In all other movies where the weightlessness is simulated by green-screen effects and/or wires, a person’s hair and clothes still appear to be hanging as they would in a gravitational field.

Don’t know if this is a “special effects” problem, but I’ve never seen a movie with a realistic dream sequence – “realistic” meaning bearing any resemblance to actual human dreams. Movie dream sequences, even the most hallucinatory, take place in a world with a physical reality for the most part very similar to that of waking life, where laws of logic and causation still apply to a certain extent, and where things rarely just happen for no apparent reason.

A realistic recreation of a scene or environment that is both impossible to build convincingly on a backlot and unlikely, on its own, to be a successful pitch for a big-budget motion picture. Example: the 1939 New York World’s Fair.

CGI of turbulent fluid flows will continue to look unrealistic for a long time to come. They just can’t model enough interacting particles to make rapids, waves and whirlpools look right.
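
A rough back-of-envelope sketch of why (the numbers are mine, and real solvers use neighbour grids, SPH kernels and other tricks to avoid the worst of this): a naive solver that lets every particle interact with every other does N(N−1)/2 pair tests per simulation step, and the counts get silly fast.

```python
# Why particle counts bite: a naive solver that tests every particle pair does
# N*(N-1)/2 interaction checks per step. Illustrative numbers only; production
# fluid solvers prune this with spatial hashing, but convincing rapids still
# need enormous particle counts.

def pair_tests(n):
    return n * (n - 1) // 2

for n in (1_000, 100_000, 10_000_000):
    print(f"{n:>12,} particles -> {pair_tests(n):>26,} pair tests per step")
# Even at 10 million particles -- still coarse for a whitewater shot -- the
# naive count is roughly 5e13 tests for every single frame of film.
```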

Something that will probably never be possible: different members of the audience watching the screen and each seeing themselves as the protagonist. Anything where different members of the audience look at the screen and see something substantially different from one another. The Holodeck.

Daniel

Interesting point. But I guess it’s because when you go to the trouble of depicting a dream in a movie, the content of the dream has to be driven by the plot (the dream is a prophecy, or an acting-out of psychological conflict, as in the dancing dreams in Fred Astaire movies). The randomness and meaninglessness of real dreams kind of conflicts with that imperative.

But it would be fascinating to see someone attempt to capture the quality of a real dream. For mine, a good literary effort is Alice in Wonderland. Alice sees something normal (e.g., a white rabbit), notices a bizarre detail first (his pocket watch), and then backfills the rest of the rabbit’s appearance around that detail (the waistcoat, which is a logical consequence of the pocket watch, etc.). This trope is repeated for many of the characters, capturing (for me, at least) something of the way real dreams grow and develop from trivia, not from a sense of overarching narrative direction.

If the trailer for **Snakes on a Plane** is any indication, realistic-looking snakes are impossible to do with CGI.

I would think realistic eyes would be easier than they seem to be. All the CGI guys need to do is add a transparent, reflective layer over the eyeball (the tear layer) and they’d improve believability, but nobody seems to want to bother, and CGI characters go around with dead eyes.
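
In shading terms that “tear layer” is basically a glossy specular lobe layered over the matte iris colour. A toy Python sketch of the difference (made-up numbers, not any package’s actual eye shader):

```python
# "Dead eye" vs. "wet eye": the same diffuse iris color, with and without a
# tight Blinn-Phong highlight standing in for the tear film. All numbers are
# invented for illustration.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

normal   = (0.0, 0.0, 1.0)                 # bit of eyeball facing the camera
to_light = normalize((0.1, 0.2, 1.0))      # key light nearly behind the camera
to_eye   = (0.0, 0.0, 1.0)
iris     = (0.25, 0.15, 0.08)              # dull brown diffuse base

diffuse = tuple(c * max(0.0, dot(normal, to_light)) for c in iris)  # matte only

half    = normalize(tuple(l + e for l, e in zip(to_light, to_eye)))
spec    = max(0.0, dot(normal, half)) ** 80        # tight, glossy "wet" lobe
wet_eye = tuple(min(1.0, d + 0.6 * spec) for d in diffuse)

print("matte eye:", tuple(round(c, 3) for c in diffuse))
print("wet eye  :", tuple(round(c, 3) for c in wet_eye))
# That small, sharply focused highlight is most of what reads as a moist,
# living eye; without it the diffuse-only version looks painted on.
```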

I notice it a lot too. Actually, this is what tipped me off to Kakurenbo being full CGI. Cel-shading looks a lot like traditional cel animation, but there’s always that one scene that doesn’t quite look right.
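
For anyone curious what cel-shading actually does under the hood, it’s roughly this (a toy sketch, not Kakurenbo’s actual pipeline): take the smooth shading value a normal renderer would use and snap it to a few flat, paint-like bands.

```python
# Toy cel/toon shading: quantize a smooth Lambert shading value into a few
# flat bands, mimicking hand-painted cels. Band levels and thresholds are
# arbitrary numbers chosen for illustration.

def cel_shade(n_dot_l, bands=(0.2, 0.55, 1.0), thresholds=(0.3, 0.7)):
    """Snap a smooth 0..1 shading value to a shadow, mid or lit tone."""
    value = max(0.0, n_dot_l)          # plain Lambert term
    if value < thresholds[0]:
        return bands[0]                # shadow tone
    if value < thresholds[1]:
        return bands[1]                # mid tone
    return bands[2]                    # lit tone

for n_dot_l in (0.05, 0.25, 0.5, 0.75, 0.95):
    print(f"smooth {max(0.0, n_dot_l):.2f} -> cel {cel_shade(n_dot_l):.2f}")
# The flat bands look like ink and paint, but the way they slide smoothly over
# a rotating 3D model is often the "one scene that doesn't quite look right".
```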