Converting from one format to another (my experience is primarily converting to FLV format with Adobe Media Encoder CS4) takes a relatively gargantuan amount of time. I occasionally see a trivia note on IMDB that such-and-such a scene took x processor-years to do. What’s going on under the hood? What makes it so processor- and/or memory-intensive?
As for tweaks, is there anything I can do/set on a PC to optimize the process? If it makes a difference, I have an i7 920, 6GB RAM, and two ATI HD 5700s (Crossfire). More RAM? Some setting in the ATI control panel?
It’s a shitload of data. Converting from one format to another usually involves expanding a compressed file to individual frames, re-encoding and re-compressing.
As far as tweaks, there are many, but they are probably specific to the files, hardware and software you are using, so I doubt if I could help you. In general, disk writes are a bottleneck, so the more you can minimize those, the better. Maybe a RAM disk?
Is there a good non-specialist overview of the process out there? Something to satiate the need-to-know-how-something-works area of the brain without expecting me to roll up my sleeves and become fluent.
How do players differ from encoders (is that the right term)? If players can expand in real-time (or appear to expand in real-time), why does this factor into the overall time to change format? And if recorders (e.g. cameras) can encode in real-time (or appear to), how do these add up to such a dramatic effect?
Note that I’m not doubting the explanation; I’m asking these questions to expose more and more of my ignorance on the subject.
In terms of speeding things up, my boot and application drive is an SSD. I don’t like to put data on it (256GB isn’t exactly roomy), but if it will speed things up I can certainly allocate some temporary space. Or would a RAM disk be a better idea?
Oh, and RAM disk? I don’t think I’ve used a RAM disk since I was on a DOS machine. Funky.
One big reason encoding takes so long is so that decoding can occur in realtime. All the hard work is done at the encoding stage, when there is plenty of time, so the decoding engine doesn’t have to be very powerful or very fast. For example, one technique used to compress video substantially is to use motion estimation. This requires the encoder to look at sequential frames and determine which areas are moving and which are static, and then encode the motion changes from the key frame. Clearly, this is a lot more processor-intensive than the decoding, which just needs to move the pixels around.
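To make the motion-estimation idea concrete, here is a toy sketch of the exhaustive block-matching search an encoder performs. The frame size, block size, and search window below are made-up illustrative numbers; real encoders use much larger windows and far smarter search strategies than brute force.

```python
# Toy block-matching motion estimation. Frame size, block size, and
# search window are made-up illustrative numbers.

def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized pixel blocks."""
    return sum(abs(a - b)
               for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def get_block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def find_motion_vector(prev_frame, cur_frame, y, x, size=4, search=2):
    """Find where a block of the current frame came from in the previous
    frame by testing every offset in a small window. This exhaustive
    comparison is a big part of why encoding is so CPU-intensive."""
    target = get_block(cur_frame, y, x, size)
    h, w = len(prev_frame), len(prev_frame[0])
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            py, px = y + dy, x + dx
            if 0 <= py <= h - size and 0 <= px <= w - size:
                cost = sad(get_block(prev_frame, py, px, size), target)
                if cost < best_cost:
                    best_cost, best = cost, (dy, dx)
    return best, best_cost

# Demo: a 4x4 object at (2,2) in the previous frame moves to (3,3).
prev = [[0] * 8 for _ in range(8)]
cur = [[0] * 8 for _ in range(8)]
for r in range(4):
    for c in range(4):
        prev[2 + r][2 + c] = (r * 4 + c + 1) * 10
        cur[3 + r][3 + c] = (r * 4 + c + 1) * 10

vec, cost = find_motion_vector(prev, cur, 3, 3)
print(vec, cost)  # (-1, -1) 0: the block is found one pixel up and left
```

Note that even this tiny 5×5 search window means 25 block comparisons per block per frame; real encoders search far wider, which is where the CPU time goes.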
Generally, encoding is processor-bound, not I/O bound, so doing everything on a RAM disk isn’t going to help very much. A faster processor with more cache will help, as will more cores (if your compressor is well threaded).
In this case, it’s not video conversion. The CPU times mentioned on IMDB and such are rendering times. This is when a movie such as Up or Toy Story is drawn, frame by frame, by computers. For each frame, the computer must determine how the various objects are placed, how each hair on the dog’s back is reacting based on the “wind” and on its recent movements, and then each pixel has to be given the correct colour and intensity it should have for a semi-reflective, golden object that is lit thusly and is sitting next to that checkered red-and-white ball. Repeat 150,000 to 200,000 times for a 2-hour movie. And of course the whole process is done repeatedly until it “looks” right. At least once in HD.
Of course, they spread the work among banks of computers, so the “years” are reduced to weeks or days.
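To see where “processor-years” comes from, a quick back-of-the-envelope calculation. The per-frame render time and farm size here are made-up illustrative figures, not any real studio’s numbers:

```python
# Back-of-the-envelope "processor-years" arithmetic. The per-frame render
# time and farm size are made-up illustrative figures.

frames = 2 * 60 * 60 * 24        # a 2-hour film at 24 fps = 172,800 frames
hours_per_frame = 6              # assume 6 CPU-hours to render one frame

cpu_hours = frames * hours_per_frame
cpu_years = cpu_hours / (24 * 365)
print(f"{cpu_years:.0f} processor-years")         # ~118 processor-years

# Spread across a hypothetical 2,000-core render farm:
days_on_farm = cpu_hours / 2000 / 24
print(f"about {days_on_farm:.0f} days wall-clock")  # ~22 days
```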
Dedicated devices, like camcorders, have an onboard chip that does nothing but encode to one or two specific formats. I know in the past you could buy encoder cards for PCs that did the same, but a CPU is general purpose, so cannot necessarily decode/re-encode in real time.
Is that an absurdly souped-up version of what my system does with games? In games it goes quickly, but it’s only accounting for a comparatively limited number of variables.
If so, is there any comparison of a modern system (say the one in my first reply) to hardware used to render CGI effects in the 80s or 90s? In, say, fifteen more years of advancing technology, any chance that Crysis XII will look as clean as Toy Story? Or is the scale of distributed computing Pixar uses so vast that a single desktop won’t reach such power for the foreseeable future?
Not really. Professional graphics cards just have more generic processing power available that is only slightly better adapted to non-gaming use. For speeding up video encoding you need a dedicated encoder card, such as: http://www.matrox.com/video/en/products/mac/compresshd/
If we didn’t have compression, processing would probably go faster, but the amount of data that would have to be read/written would be much more. There is a tradeoff here.
The encoding process takes raw, frame-by-frame, discrete data, one-pixel-at-a-time data, and, using very sophisticated algorithms, decides how it can be best compressed. Example: if only a tiny portion of a frame is different from the following one, only the data that is different needs to be saved; the data that is the same can be repeated. This provides an enormous amount of compression for very little loss as long as you have talking heads; not so much for sporting events.
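That “only save what’s different” idea can be shown in miniature. This is a toy delta encoder over hypothetical one-dimensional frames, nothing like a real codec, but the payoff is the same: talking heads compress superbly, fast-moving sports footage doesn’t.

```python
# Toy inter-frame delta compression: store only the pixels that changed
# since the previous frame. Frames here are flat lists of hypothetical
# 8-bit pixel values; real codecs are vastly more elaborate.

def delta_encode(prev_frame, cur_frame):
    """Return (index, new_value) pairs for every pixel that changed."""
    return [(i, cur)
            for i, (old, cur) in enumerate(zip(prev_frame, cur_frame))
            if old != cur]

def delta_decode(prev_frame, changes):
    """Rebuild the current frame from the previous frame plus the deltas."""
    frame = list(prev_frame)
    for i, value in changes:
        frame[i] = value
    return frame

# A "talking head" scene: 100 pixels, only 3 change between frames.
frame1 = [128] * 100
frame2 = list(frame1)
frame2[40], frame2[41], frame2[42] = 200, 210, 220

changes = delta_encode(frame1, frame2)
print(len(changes))                             # 3 values stored, not 100
print(delta_decode(frame1, changes) == frame2)  # True: lossless round trip
```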
More importantly, the formats encoded by a camera are usually relatively simple ones like MJPEG. These formats aren’t computationally intensive, but result in much larger file sizes. Your camera would be lucky to get a 1:20 compression ratio over the raw video, whereas a “proper” video format such as Theora or MPEG can do 1:50 or better.
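For a sense of what those ratios mean in bytes, a quick calculation. The resolution and frame rate are arbitrary example values; the 1:20 and 1:50 ratios are the ones quoted above:

```python
# What 1:20 vs. 1:50 compression means in raw bytes. Resolution and
# frame rate are arbitrary example values.

width, height, fps = 1280, 720, 30
bytes_per_pixel = 3                     # uncompressed 24-bit RGB

raw_rate = width * height * bytes_per_pixel * fps   # bytes per second
raw_gb_per_min = raw_rate * 60 / 1e9
print(f"raw video:     {raw_gb_per_min:.1f} GB per minute")   # ~5.0 GB/min
print(f"camera (1:20): {raw_gb_per_min / 20 * 1000:.0f} MB per minute")
print(f"MPEG   (1:50): {raw_gb_per_min / 50 * 1000:.0f} MB per minute")
```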
Right, there’s a trade-off between time taken to encode, time taken to decode, and file size. Generally, you want the decoding to be fast enough that most devices can display it in real time, so you have a hard limit there. Given that hard limit, you want the file sizes to be as small as possible, so people don’t get bored waiting for them to download (and ideally, real-time over a realistic internet connection). So that’s another thing you’re optimizing. The only place left to do the tradeoffs is in the time taken to encode it more cleverly, and since that only has to be done once, as opposed to millions of times for the downloading and decoding, it’s acceptable to spend a really long time on that step.
In my experience the term “RAM disk” means a virtual disk created by using part of the system RAM (or main memory, or whatever you want to call it) as a storage disk. It’s much faster to write to RAM than it is to write to a physical hard drive, so for something where you’re writing a lot of data, a RAM disk would be handy. And some systems now have enough RAM (this computer has 24GB and the servers next to me have a couple hundred gigs total) to give you enough room to do something large like video editing.
A solid-state disk is completely different, being basically just another hard disk attached to the system, but one that uses flash memory instead of spinning magnetic platters. Much faster to read from than a magnetic hard drive with its moving parts, but writing is much slower than reading, and the total number of write cycles is limited.
I would imagine that another part of the complexity concerns the humongous number of pixels they’re working with. YouTube stuff, I imagine, is somewhere between 600×400 pixels and 1280×800 pixels. If Hollywood tried that on a movie theater screen it would look horrendous. Anyone know how many pixels they typically work in?
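For scale: the standard digital-cinema (DCI) container resolutions are 2K (2048×1080) and 4K (4096×2160). A quick pixel-count comparison against the web-video guesses above:

```python
# Pixels per frame: web-video guesses vs. the standard digital-cinema
# (DCI) container resolutions.

resolutions = {
    "web guess (1280x800)": 1280 * 800,
    "DCI 2K (2048x1080)": 2048 * 1080,
    "DCI 4K (4096x2160)": 4096 * 2160,
}
for name, pixels in resolutions.items():
    print(f"{name}: {pixels:,} pixels per frame")

ratio = resolutions["DCI 4K (4096x2160)"] / resolutions["web guess (1280x800)"]
print(f"{ratio:.1f}x")   # 8.6x the pixels per frame at DCI 4K
```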
:rolleyes: As far as a RAM disk vs. SSD, all I’m saying is writing to some kind of non-moving memory is faster than writing to a moving platter. I’m aware of the differences, but as beowulff pointed out, video processing is more CPU-bound than I/O-bound. Still, when I am processing video, I see an awful lot of disk I/O that I sure would like to reduce.
In any case, I’ve never gotten write speed on an SSD to be as high as my hard drive’s. But I generally buy the cheapest SSDs I can find, which may affect things. Also, the real performance hit comes when copying many small files. Presumably a big video-editing write would not suffer from this problem (although I guess it depends on the specifics of how it writes).
A real RAM disk would be great for saving a video conversion’s output, with the small problem of losing the results if/when the computer shuts off.
I haven’t run any speed tests recently, but that’s surprising. However, I think flash is slower to write to than RAM. And you could be witnessing the driver software’s flaws. ETA: Or maybe the hard drive has more buffering and masks the actual write time?
I used to write low-level (assembly) software, using RAM to simulate a hard drive, many moons ago, and there is an art to efficient code. Poor code can overwhelm any speeds inherent in the system.