Years ago, slow motion was determined by frame rate.
Can digitally recorded footage be slowed down infinitely?
No, you still need a finite amount of time to capture an image.
However, the fastest digital cameras can do tens of thousands (or more) of images/second.
Also, any given recording still has a framerate. Just because you can record tens of thousands of images a second doesn’t mean it’s always (or even often) done, and if it was recorded at 24 frames a second, then that’s what you have.
Among other reasons why it isn’t usually done, the size of the files is proportional to the framerate. So a five-second clip at 10,000 FPS would be 400 times the size of the same clip at 25 FPS.
That’s assuming no compression is applied. Since many high-speed images vary little from frame to frame, and frame-to-frame compression is one technique, I would imagine a high rate of compression could be used in some cases.
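To illustrate the point, here's a minimal sketch (nothing to do with any real codec) of why frame-to-frame compression pays off for high-speed footage: consecutive frames differ in only a handful of pixels, so the difference between frames is almost all zeros and compresses extremely well. The frame size and the size of the "moving object" below are made-up numbers.

```python
# Toy illustration of inter-frame ("delta") compression on high-speed footage.
# Assumed numbers only: a 1080p frame and a tiny moving object.
import numpy as np

rng = np.random.default_rng(0)
frame_a = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)  # previous frame
frame_b = frame_a.copy()
frame_b[500:520, 900:940] += 1          # a small moving object changes a few pixels

delta = frame_b.astype(np.int16) - frame_a.astype(np.int16)
changed = np.count_nonzero(delta)
print(f"{changed} of {delta.size} pixels changed "
      f"({100 * changed / delta.size:.3f}%)")
# The delta frame is almost entirely zeros, which is why storing differences
# instead of full frames can shrink high-speed footage so dramatically.
```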
But such compression algorithms are usually not fast enough to operate 10,000 times a second. It's probably a cheaper solution to get more memory than to beef up the hardware to the point where it can do real-time compression of that much data.
Compression is slow, so you can take images faster than you can compress and store them. This is not an issue at 25 fps, but it is at high frame rates. Generally, burst mode is limited by the available DRAM for temporary storage of frames before compression to long-term storage.
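For a sense of scale, here's a back-of-the-envelope sketch of why burst capture runs out of DRAM so quickly. The resolution, bit depth, and buffer size are illustrative assumptions, not the specs of any particular camera.

```python
# Rough estimate of how long a high-speed burst can last before the capture
# buffer fills. All numbers are assumptions for illustration.
width, height = 1280, 720          # assumed sensor crop used for high-speed mode
bytes_per_pixel = 1.5              # assumed ~12-bit raw, packed
fps = 10_000
buffer_bytes = 8 * 2**30           # assumed 8 GiB of capture DRAM

bytes_per_frame = width * height * bytes_per_pixel
data_rate = bytes_per_frame * fps                     # bytes per second
seconds_of_burst = buffer_bytes / data_rate

print(f"data rate: {data_rate / 2**30:.1f} GiB/s")
print(f"burst length before the buffer fills: {seconds_of_burst:.2f} s")
# With these assumed numbers you get well under a second of recording time,
# which is why high-speed bursts are so short.
```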
Two other factors influence the rate at which a digital device can operate. The first is the data transfer rate off the image chip. It used to be row by row using a bucket brigade, which was not fast. The other limiting factor is the reset time. On good cameras a mechanical shutter is used to cover the CCD to allow it to blank out. This is the best form of reset for the chip. Eventually, mechanical shutters are too slow, so an electronic reset is used. This introduces noise to the image, and takes a finite amount of time. Of course, many cheap digital and cellphone cameras only use electronic shutters, for cost and complexity reasons.
However, the ability to have high frame rates and storage allows innovation in camera tech. Many cameras now allow pre-images, maybe taking 5 frames a second while the shutter is half-pressed. Then, when the shutter is fully pressed, you have images from before the press, catching the thing you just missed because you were too slow. You throw away lots of extra images, but you do get the good ones.
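For what it's worth, that pre-image trick is essentially a circular buffer. A toy sketch (all names here are made up for illustration): while the shutter is half-pressed the camera keeps overwriting a small ring of frames, and a full press keeps whatever is in the ring plus the frame taken at that moment.

```python
# Toy sketch of pre-capture using a ring buffer; not any camera's real firmware.
from collections import deque

PRE_CAPTURE_FRAMES = 5                    # assumed pre-capture depth
ring = deque(maxlen=PRE_CAPTURE_FRAMES)   # oldest frame is dropped automatically

def on_half_press(frame):
    """Called for each frame grabbed while the shutter is half-pressed."""
    ring.append(frame)

def on_full_press(frame):
    """Return the buffered pre-press frames plus the frame taken right now."""
    return list(ring) + [frame]

# Toy usage: "frames" are just labels here.
for i in range(12):
    on_half_press(f"pre-{i}")
print(on_full_press("full-press"))   # ['pre-7', ..., 'pre-11', 'full-press']
```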
If matter and energy are analog to begin with, wouldn’t digital representations of them, however slow and detailed, have to be approximations?
I have no idea what this means.
Matter and energy are both quantized, ie digital.
Even if they were not, any representation of *anything* is an approximation: analog, digital, abstract, whatever. That is what a representation means. If it were not an approximation then it would not *be* a representation; it would be a duplicate.
Really? I’m sorry, I was wrong. I don’t even know what “quantized” means.
(I meant my reply to be more of a question than an answer, but I didn’t make that very clear. In any case I would love an explanation, if you’re willing to provide one.)
What I meant – or tried to ask – is how can you digitize an analog waveform perfectly?
I take it from your reply that there’s no way to do this even via analog methods?
And I further take it that there are fundamental physical forces, perhaps quantum mechanical ones, that I’m completely not understanding?
No need to apologise, the question was just confusing.
In really, really simple terms, “quantised” means that matter and energy can only exist in a finite number of distinct states. IOW a single piece of matter can either exist at point A or at point B; it can’t exist in between those points. The same applies to energy. Any piece of matter either emits light at a certain frequency and amplitude, or it does not. There aren’t any half-steps at which ‘weaker’ light can be emitted at a lower amplitude or lower frequency.
We perceive the real world as analog because it is composed of billions upon billions of individual pieces. The net interactions of all those quantum events can be approximated well by the analog model in our brain, and the analog model is much faster than trying to analyse all the individual digital changes moment to moment, but it remains an approximation of the very real and very digital world. To see this at play in an unambiguous setting, just watch a digital video display. Although it is unambiguously digital, your eyes and brain filter it to produce an analog model that gives you all the necessary information at a fraction of the processing power that would be needed to decode the digital changes.
You can’t, because the analog waveform only exists as the result of the interaction of all the billions upon billions of tiny digital/quantum events that are occurring every second to produce it. All the vibrations in whatever is generating the waveform, all the chemical interactions that produced the wave to begin with, and everything that those events interacted with, including the listening/recording device. The only way that you could digitize *perfectly* is to recreate all those events, which means literally recreating the entire universe from the very beginning.
Anything less than restarting creation cannot digitise the waveform perfectly. It must always have some deviation from the original. Oh, and the real kicker: because of the nature of quantum-level events, you can never record any event perfectly anyway, so you will never have any way of knowing whether you have recreated it perfectly.
What you can do is to digitise the waveform to a level that is better than your ability to detect the difference between it and the original. That is relatively simple, and the practical difference between that and recreating the waveform is non-existent. But we know that it isn’t a perfect digital recreation.
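As a concrete example of “better than your ability to detect the difference”: the theoretical signal-to-quantisation-noise ratio of an ideal N-bit digitisation is roughly 6.02N + 1.76 dB (a standard textbook figure, assuming a full-scale sine wave and uniform quantisation). A quick sketch:

```python
# Theoretical SNR of an ideal N-bit quantiser (full-scale sine, uniform steps).
def quantisation_snr_db(bits: int) -> float:
    return 6.02 * bits + 1.76

for bits in (8, 16, 24):
    print(f"{bits:2d}-bit: ~{quantisation_snr_db(bits):5.1f} dB")
# 16-bit audio already gives roughly 98 dB, which is more dynamic range than a
# listener can exploit in any normal playback situation, even though it is
# still not a "perfect" copy of the waveform.
```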
Nope, for exactly the same reasons. Once again, you can easily create a digital copy of the waveform that is beyond your ability to distinguish from the original. But we know that it isn’t a perfect digital recreation.
There are, but even without those, you couldn’t actually recreate anything as complex as a waveform. The mathematics is a bit beyond me, but even calculus doesn’t recreate the entire waveform at every possible level. It only seeks to recreate it at the level at which you observe it by breaking it into a series of step changes. You can’t use calculus to simultaneously look at a waveform at both its largest and smallest scales.
On a more practical level, you can’t produce a perfect analogue recording of even a pure tone because a) the very act of recording dampens the sound and prevents echoes/interference that should have occurred if the device were not there, and b) no analogue microphone, regardless of how well built, can convert all the resonance into energy; there has to be an energy loss. That means that the tone that the microphone transfers to your recording device can never be *exactly* the same as the tone that was in the air prior to interacting with the mic.
No recording of anything has ever been perfect. Undetectably close, sure, but not perfect.
In theory, because reality is quantised, digital recordings should be more accurate than analogue ones if they were slow and detailed enough. In practice, the nature of quantum events means that either technique breaks down entirely at about the point where the events start interfering with each other and themselves.
The bottom line is that the concepts are captured in information theory, which goes back to Shannon. The critical issue is that in the real world everything has some noise inherent in it. Whether sound, light, images, audio - measurements of anything at all - there is noise. This is a natural part of the universe. Next, you have some measure of signal. An individual image, or any other measurement, can be characterised by the level of signal you get. For audio it is pressure; for light you can get as far down as counting photons; and since most things we do are converted to the electronic domain before we measure them, you typically end up measuring Volts, Amperes, or Coulombs. So you have a signal, and you have a basic noise level. The noise is measured in the same units as the signal, and this gets you a ratio of signal to noise. That ratio exactly provides a measure of how much information you have. There is no such thing as an infinitely accurate measurement, and never infinite information.

Digital cameras are significantly noise limited when light levels are low. Some of this noise comes from quantisation of light, but for most cameras the noise is inherent in the (analog domain) amplifiers used to read out the sensors. The sensitivity of the sensor is increased (the effective ASA rating turned up) by increasing the gain in the readout amplifiers. This appears as snow-like noise on the image. The more signal you have (ie the more photons hitting the sensor) the more dominant the signal, the better the signal to noise ratio, and the more information you can obtain from the image.
If you are gathering images, or other samples over time, you have a rate of information gathering. Shannon showed (as essentially an application of Fourier theory) how the rate of information transfer is exactly limited by the bandwidth and the signal to noise ratio of a channel. It matters not what you are measuring; the theory works for everything. Even with an analog channel you can exactly calculate the maximum possible information rate. It does not need to ever be digitised, but you still get an information rate that can be (and often is) expressed as a bit rate. (The term bit was coined by one of Shannon’s colleagues and introduced to the world in Shannon’s seminal paper on signalling - it was not invented to describe a unit of storage in a computer.)
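For anyone who wants to see the numbers, the Shannon–Hartley result referred to above is C = B·log2(1 + S/N), with B the bandwidth in Hz and S/N the linear signal-to-noise ratio. A minimal sketch, using an old analog telephone channel as the illustrative example:

```python
# Shannon–Hartley channel capacity: C = B * log2(1 + S/N).
import math

def shannon_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    return bandwidth_hz * math.log2(1 + snr_linear)

# Illustrative example: ~3 kHz of bandwidth at ~30 dB SNR, roughly an analog
# telephone line.
snr_db = 30
snr_linear = 10 ** (snr_db / 10)          # 30 dB -> 1000x
print(f"{shannon_capacity_bps(3000, snr_linear):,.0f} bits/s")  # ~29,900 bits/s
# Note this is a property of the analog channel itself; nothing has to be
# digitised for the limit to apply.
```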
The bottom line is that there is no such thing as a perfect measurement of anything. Not in the universe we inhabit. Everything has noise, and everything has a finite amount of information inherent in it. The duality of the manner in which signal to noise and information exist in both the analog and digital domains is often not appreciated. The way that noise in a digital image looks a lot like grain in a high-speed film image is not a coincidence, but part of this duality.
Wavelengths of free particles are not quantized: so far as any physicist is able to tell, you can have any wavelength you want at all. And even things which are quantized are usually not quantized in a manner which is amenable to digitization.
Just my take, but I think the OP is referring to the fairly recent novelty of live sports now having perfectly smooth slow-motion playback. While this is a side benefit of switching to all-digital cameras/feeds, it still has to be specifically implemented. IOW, not every show that is shot on modern digital video would be capable of that smooth slo-mo playback; the cameras & equipment still have to be designed and set for a high frame rate (which sports venues’ equipment typically is).
In the field of digital video processing, slow motion can be either totally fudged, or at least aided, by interpolation of images. Motion detection - either on a per-pixel basis, or on small (say 8 x 8) blocks - allows smooth motion to be created in the interpolated images. So from that point of view, the OP’s question about infinite slowdown is answered as a very cautious “yes.” What is important is that there is no additional information available. The video is simply smooth (or at least smoother) on playback.
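As a toy illustration of that “fudged” slow motion: the crudest possible interpolation just blends two real frames. Real broadcast systems use motion-compensated interpolation (per-pixel or per-block motion vectors) rather than a plain cross-fade, but the key point is the same: no new information is created.

```python
# Crude frame interpolation by linear blending; a sketch, not a broadcast algorithm.
import numpy as np

def blend_frame(frame_a: np.ndarray, frame_b: np.ndarray, t: float) -> np.ndarray:
    """Linear blend between two frames: t=0 gives frame_a, t=1 gives frame_b."""
    mixed = (1.0 - t) * frame_a.astype(np.float32) + t * frame_b.astype(np.float32)
    return mixed.astype(frame_a.dtype)

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)   # stand-ins for real frames
b = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
halfway = blend_frame(a, b, 0.5)   # doubles the frame rate; adds no new detail
print(halfway)
```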
There are high-speed video cameras used for sports as well. I remember the German F1 Grand Prix telecasts of at least ten years ago featured at least one. Lovely, razor-sharp, slow-motion video of the cars - usually set up on a tight corner where the drivers were working the kerbs, and you could see the tyres distort and the suspension working as the cars slammed through the corner. But I suspect it was a one-off capability - only one corner, and they had to treat the feed as an action replay to be inserted in the coverage after the fact.
This outlines the difference between true high-speed slow motion (overcranked - where the initial frame rate is faster than on replay, and every frame contains new information) and interpolated (where the initial frame rate is the same as on replay, and new frames are synthesised to pad out the rate for replay). True high-speed video captures all sorts of unexpected dynamics that our eye, or a standard-rate video, cannot. Interpolated slow motion cannot synthesise these - it simply makes what we have captured look smooth.
Of course the interpolating algorithms can only do so much, and they are unable to properly distinguish between different objects in the images, so they can, and will, incorrectly predict image motion, and you will get artefacts in the images. Tearing of complex background components is common, as is weird motion flow where what should be a static background flows or ripples. A football match (or at least what passes for football in the US) against a solid green plastic grass background will probably work well most of the time, but get the crowd in focus in the background, and things will get messy.