You are absolutely right about the shaky camera. I too have just watched Hancock, and on a 52" TV, where the standard-format film has been blown up to fill the screen, it is just about unwatchable. It actually makes me close my eyes to keep from falling out of my chair and/or getting sick.
There is no need for this. It makes an otherwise good movie unwatchable.
I watch movies from a couple of years ago, when shaky cam was at its peak, and I wonder how we’ll be able to watch those 10 years from now when (God willing) shaky cam is once again used judiciously.
Mangetout, do you have a copy of the video pre-deshaking somewhere, for comparison? I was able to catch some of the artifacts, but there were others I’m not sure about - whether they were a result of the process or in there already.
If you wanted Matt Damon to beat the shit out of some guy in a long shot in a few takes, you’d need the scene choreographed and rehearsed for weeks.
But if you’re splicing together 120 half-second shots of ShakeVision, there’s no need for the actors to learn an entire fight sequence or, for many of the mini-takes, to exert themselves or do anything that requires much rehearsal to keep anyone from getting hurt. You can create the appearance of speed and ferocity with editing, by preventing the audience from ever getting a true view of what’s going on.
I’ll see if I can post it later - the cleaned-up footage is still far from perfect - some of the edge of frame artifacts are down to the reconstruction of missing parts of the picture (shows as a sort of watery distortion at the edges of some frames).
There’s also a sort of warping effect caused by the (virtual) rolling shutter in the camera - if the camera is mid-shake while the imaging chip is being read out row by row, different parts of the frame capture slightly different moments - and this is not fully corrected by the software.
But you should (and hopefully will) see the original footage - it was so shaky as to be completely unwatchable - it literally made me feel queasy to look at.
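For anyone curious, the row-by-row readout is easy to simulate. A toy NumPy sketch (just an illustration, not what any real camera firmware or Deshaker actually does):

```python
import numpy as np

def rolling_shutter_skew(frame, pan_px_per_row):
    # Each sensor row is read out slightly later than the one above, so
    # a horizontal pan during readout shifts lower rows further sideways.
    # (Toy model; correcting the warp means inverting this per-row shift.)
    out = np.empty_like(frame)
    for r in range(frame.shape[0]):
        out[r] = np.roll(frame[r], int(pan_px_per_row * r))
    return out

# A vertical bar becomes a diagonal smear - the classic "jello" skew:
img = np.zeros((8, 8), dtype=np.uint8)
img[:, 3] = 255                       # vertical white bar at column 3
skewed = rolling_shutter_skew(img, 0.5)
```

Run that and the bar drifts sideways one row at a time, which is exactly the wobble you see when a shake happens mid-readout.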
CyberLink PowerDirector will do it quite well. You lose a bit of the edges, but while both of my HD video editing packages (PowerDirector and Vegas Platinum) offer this capability, Vegas crops the snot out of it and PowerDirector does a really nice job steadying the footage without losing too much. I’m really impressed with it (though overall I prefer Vegas). I’m sure many of the other video editing programs (Premiere and others) have this capability as well. Not sure if you want to rip films, steady them and recreate discs, though. Sounds like a giant pain to me, but I don’t have too much of a problem with the unsteadycam in most situations.
Last year a colleague showed me footage of very, very long running shots along The Presidio, that lovely park up against the bay near the bridge in San Francisco - Crissy Field, etc. It was shot with a Steadicam.
The work was flawless. I don’t mean it was nicely operated Steadicam. I mean it was beyond any running shot I’ve ever seen. I was floored. Humbled. Seriously.
Then my colleague informed me that software had been used to take out any sway or shifting. At first I thought ah, what b.s. Then I thought about it and I figured, if this is the ultimate goal- to move through space emulating the human experience without any errant moves or swaying at all, then this kind of software package is a godsend not a crutch.
I’ve gotten used to it. It no longer jumps out at me. I just take the images as they come. If these two movies hadn’t been mentioned, I never would have remembered that they used this effect.
OK, I think you’ll like this - the sample video is here:
Original footage, top left
The video at the top left is the full frame of the original shaky footage - shot at high zoom (distance about 200 metres, across Lulworth Cove) from a handheld camera.
Processed footage - bottom right
The video at the bottom right is the result of processing with Deshaker. Note that this isn’t just cropping a stable frame out of the middle of the wobbly video - well, it sort of is, but it’s also reconstructing the edges, using parts of other frames in the video, so as to create a plausible full frame again.
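To give a rough idea of the core trick, here’s a toy NumPy sketch - not Deshaker’s actual algorithm (which does sub-pixel motion, rotation and zoom, and that edge reconstruction on top), just the basic idea of estimating each frame’s global shift and undoing it:

```python
import numpy as np

def estimate_shift(ref, frame):
    # Phase correlation: the peak of the normalised cross-power
    # spectrum gives the integer (dy, dx) translation between frames.
    f = np.fft.fft2(ref) * np.conj(np.fft.fft2(frame))
    corr = np.fft.ifft2(f / (np.abs(f) + 1e-9)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = ref.shape
    if dy > h // 2: dy -= h          # unwrap shifts past the halfway point
    if dx > w // 2: dx -= w
    return dy, dx

def stabilise(frames):
    # Shift every frame back onto the first one. A real stabiliser
    # smooths the camera path instead of locking it, and fills the
    # exposed edges from neighbouring frames.
    ref, out = frames[0], [frames[0]]
    for f in frames[1:]:
        dy, dx = estimate_shift(ref, f)
        out.append(np.roll(np.roll(f, dy, axis=0), dx, axis=1))
    return out
```

Feed it a sequence where each frame is a shifted copy of the first and it recovers the original exactly; on real footage you’d need the fancier machinery.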
Until I uploaded it to YouTube, I didn’t realise that there are already quite a lot of other comparison videos up there - and many of them are also quite surprising. It’s a very clever bit of software.
Wow, that is good. It’s still a little dodgy, but considering how much of an improvement there is overall that’s forgivable. And it will only get better at it as new versions come out.
As the resolution of image sensors keeps increasing and the cost of storage and processing power keeps decreasing, very soon it will be possible to shoot a scene hand-held at 8k with a huge amount of shake then crop/rescale it to extract a perfect 4k image from it. Same thing with HDR imagery allowing the scene to be lit after it is shot.
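The headroom arithmetic behind that idea is simple, assuming the usual UHD sizes (8K = 7680×4320, 4K = 3840×2160); the clamping helper here is hypothetical, just to show the crop tracking:

```python
# How much shake an 8K capture can absorb around a 4K delivery window.
SRC_W, SRC_H = 7680, 4320   # 8K UHD source
OUT_W, OUT_H = 3840, 2160   # 4K UHD output

margin_x = (SRC_W - OUT_W) // 2   # 1920 px of sideways shake absorbed
margin_y = (SRC_H - OUT_H) // 2   # 1080 px of vertical shake absorbed

def crop_window(cx, cy):
    # Clamp a 4K window, centred near (cx, cy), inside the 8K frame;
    # the stabiliser would move (cx, cy) to counter the camera shake.
    x = min(max(cx - OUT_W // 2, 0), SRC_W - OUT_W)
    y = min(max(cy - OUT_H // 2, 0), SRC_H - OUT_H)
    return x, y, OUT_W, OUT_H
```

So as long as the shake stays within roughly ±1920 px horizontally and ±1080 px vertically, the 4K crop never touches the frame edge and needs no reconstruction at all.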
I had a technological idea about filmmaking once, and this might be the place to share it.
When I do visual effects using CGI, I sometimes need to employ a depth map - that is, a greyscale image that encodes each pixel’s distance from the camera. For example, close things are white, distant things are black, and everything in between is varying degrees of grey.
This information can add dimension to a still image: it allows other layers to intersect with it in the Z-plane, so a new layer can be placed behind an object without any need for rotoscoping; it can drive digital depth-of-field blur; it can add fog to simulate atmospheric effects; and a whole lot more.
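The occlusion trick is literally a one-liner per pixel. A toy NumPy sketch (hypothetical helper, assuming the white-is-near convention above):

```python
import numpy as np

def insert_layer(plate, depth, layer, layer_depth):
    # Composite `layer` into the plate at a fixed distance, letting the
    # plate's depth map decide occlusion per pixel - no rotoscoping.
    # Convention as described above: brighter depth value = closer.
    nearer = depth > layer_depth   # plate pixels in front of the new layer
    return np.where(nearer, plate, layer)
```

Anything in the plate nearer than `layer_depth` keeps occluding the new element; everything further away is covered by it, exactly as if the layer had been standing in the scene.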
Unfortunately it’s almost exclusively for CGI graphics rendering, and we can’t create depth maps of any accuracy for live footage.
But what if we had two lasers set up in the same way two cameras are used for 3D movies, recording every frame in half-resolution as a depth map? If we had depth map information with every 2D shot, we could:
[ul]
[li]Post-produce a 2D film into a 3D film with relative ease[/li][li]Place layers of visual effects anywhere, at any depth, within the shot without need for rotoscoping or greenscreen[/li][li]Extend a camera track with accurate perspective shift[/li][li]Probably a ton more things I haven’t considered[/li][/ul]
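That first bullet is essentially what’s called depth-image-based rendering. A naive sketch of synthesising one eye’s view from image + depth (hypothetical, and real pipelines also fill the disocclusion holes this leaves):

```python
import numpy as np

def synthesise_eye(image, depth, max_disparity):
    # Shift each pixel horizontally by a disparity proportional to its
    # depth (white = near = big shift); a z-buffer keeps near pixels on
    # top when two source pixels land in the same place.
    h, w = image.shape
    out = np.zeros_like(image)
    zbuf = np.full((h, w), -1.0)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(depth[y, x] * max_disparity))
            if 0 <= nx < w and depth[y, x] > zbuf[y, nx]:
                out[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    return out
```

Pair the synthesised view with the original and you have a stereo pair; the gaps the shifted foreground leaves behind are the hard part, which is why real 2D-to-3D conversion is still labour-intensive.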
Would that be technologically possible, do you think?