All artificial images are made out of still images: single man-made images like drawings, paintings, and photographs, while comics and sketch sheets are multiple man-made still images combined into one image. But there’s no such thing as a truly continuous artificial moving image, not even in this day and age. Some websites call movies, TV programs, and videos “moving images” as if they were one continuous entity, but in actuality they’re consecutive still images shown in rapid succession, which is different from real-life motion because they’re built from man-made still images. What would be a better term and category for movies, TV broadcasts, and videos, given that they’re made of still images shown one after the other? Should it be “successive still images” or “multiple consecutive still images”? What would be a better term for these types of images?
[Moderating]
First of all, welcome to the SDMB. The Factual Questions forum is for questions with a factual answer, which this really isn’t. It could go in IMHO, our forum for opinions, or Cafe Society, our forum for the arts. I’ll move it to Cafe Society for you.
Second, the best term would clearly be “moving pictures”, because that’s what they look like, and what such things look like is the sum total of what’s relevant about them.
This raises the question: how does the brain process real motion? Is it qualitatively different from processing a time series of still snapshots?
Or, “Moving Images.”
Once you get faster than about 15 to 20 frames per second, your brain can’t really tell the difference and just processes it all as “motion”. Similarly, if two sounds occur less than about 1/10th to 1/20th of a second apart, they appear to be simultaneous to your brain.
There are some visual artifacts in slower frame rates that your eyes, nerves, and brain can pick up on, which is why artificially generated images like video games don’t seem to fully smooth out until you get into much higher frame rates (about 80-ish fps). But even with video games it looks like “motion” to your brain at around 15 to 20 fps.
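For a rough sense of the numbers being tossed around here, a minimal sketch (the thresholds are just the ballpark figures quoted in this thread, not precise psychophysical constants) comparing the time between frames at common frame rates with that 1/10th-to-1/20th-of-a-second window:

```python
# Rough comparison of frame intervals with the fusion window quoted
# above (~1/20 s to ~1/10 s). These numbers are the thread's rules of
# thumb, not rigorous constants.

FUSION_WINDOW_MS = (50.0, 100.0)  # roughly 1/20 s to 1/10 s

for fps in (10, 15, 24, 30, 60, 80):
    interval_ms = 1000.0 / fps
    note = " (below even the short end of the window)" if interval_ms <= FUSION_WINDOW_MS[0] else ""
    print(f"{fps:>3} fps -> {interval_ms:5.1f} ms between frames{note}")
```

At 24 fps and up the interval already falls below the short end of that window, which lines up with film reading as motion, while the 80-ish fps figure for games leaves plenty of margin.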
But my question was more about what the brain is doing when it sees real-life motion. Is it doing something qualitatively different from taking a time series of snapshots?
I read somewhere long ago (sorry, no cite) that if the refresh rate of the images is high enough, the process is the same for sequential still images and “real” moving images (with the caveat of those artifacts in movies, like the stagecoach wheels in Westerns that seem to spin backwards when the coach changes speed). They found this out because the refresh rate needed to fool the eye varies from one animal species to another: I seem to remember that dragonflies have enormous visual temporal resolution (I vaguely recall that being the correct term, maybe it even is), meaning the refresh rate had to go up to several hundred images per second before they reacted to virtual prey as if it were worth the hunt, or fled a predator as if it were a real menace and not just some pixelated sequence of still images, while humans can be fooled into seeing movement at much lower frame rates. Cats were somewhere in between, closer to humans than to dragonflies.
Now, talking about the brain and processing: it took some effort to understand the question the OP asked.
Your brain doesn’t really have a “frame rate”. It’s not processing things as a series of still images. It’s processing an array of “pixels” (the signals from each rod and cone in your eye’s retina), and it can’t distinguish changes that happen less than about 1/10th of a second apart.
The processing that your brain does is actually quite complex, and no one really understands it all. But your brain is constantly breaking down all of those little “pixels” and is automatically sorting them into identifiable shapes. For instance, if you pick up an apple and look at it, your brain not only sorts through the images from your eyes and identifies it as something round and red, but almost instantly identifies it as an apple out of all of the other things that are round and red that you have seen in your entire life, and even brings up things associated with apples like their taste, feel, smell, etc. And it does all of this almost instantly, and is constantly doing it.
If you can figure out exactly how your brain does all this so quickly, there’s probably a Nobel Prize in your future.
But one thing we do know is that it’s all “analog”. There are no individual frames.
Oh, sorry, you meant the question the other way around? I think the brain is analog; you have to fool the brain into seeing motion where there is none by raising the frame rate until it sees “smooth” movement. The reactions in the brain (ETA: and in the eye! Think of afterimages, for instance after looking into a bright light) are chemical (neurotransmitters, absorption, release…); the brain does not split reality into frames. I believe so, anyway.
ETA: Somewhat ninja’d, but at least not contradicted.
*Sequential still images. All movies, TV shows, and videos are sequential still images, but they aren’t comics.
There are no individual frames in the same way, but ultimately neurons either fire or they don’t. So if something moves across your field of view, presumably there is some small amount of movement that is not detectable (exactly the same set of neurons fire), and a certain minimal amount of movement is required to trigger a discrete change in the set of neurons that fire?
The term “moving images” won’t cut it, since they aren’t necessarily moving.
“Moving pictures” was used to describe the Victorian zoetrope (literally “wheel of life”) by 1896. It took more than a decade to shorten it to “movies”. Etymonline says 1912, but the earliest example I found was in an ad from the American theater in Elyria, OH, that ran in the Elyria Chronicle-Telegram, page 2, on March 22, 1909.
LITTLE WOMEN
First time in Elyria
You Have Read the Book
Now See it in the Movies
This is the oldest documented instance on the internet that I know of.
Movies is a good term and has been for a century.
Indeed, but they do not fire at a pre-set rate. There are waves in the brain (alpha, beta, gamma waves) that coordinate processing, but not in a frame-by-frame way. Those brain waves are much too slow for that, their frequency varies, and they smear out.
The term refers to the device moving the consecutive still images. What I meant to say was that there’s no such thing as a real single artificial moving image; it’s just that some websites don’t know how movies work.
If there’s no such thing, why do we need a term for it?
The fact that a set of still images will blend into the illusion of a moving image has been known, and made use of in various gadgets, for almost two hundred years. My impression is that everybody knows the difference, so there is no need to explain it from scratch each time, any more than we need to explain how a message board works whenever we post.
I’m confused by this and your subsequent comments. Surely the terms “movie” and “TV” and “video” mean exactly that: a set of consecutive still images that simulate continuous motion. Why do we need a “better” term? What would be better about a different word for exactly the same thing?
Ironically, there is such a thing as a true artificially-created moving image, but the same tricks are often used to make it appear to be standing still. Ever seen a laser show? They shine a laser at a rapidly-rotating mirror, to project a moving spot on a wall. Tweak the rotation of the mirrors in the right way, and you can make the moving spot trace out a circle, or a face, or other image. And do it quickly enough, and the human audience won’t see a moving dot; they’ll see the image it traces.
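For what it’s worth, here’s a toy sketch of the arithmetic behind that trick (the numbers and the two-mirror setup are assumptions for illustration, not any particular projector’s specs): a single dot retracing a circle fast enough that the whole shape is redrawn well inside the eye’s fusion window reads as a solid image.

```python
import math

# Toy sketch of a laser-show "static" image drawn by one moving dot.
# Assumption: the two mirror angles map roughly linearly to the spot's
# (x, y) position on the wall (small-angle case); numbers are made up.

REFRESH_HZ = 100        # redraw the full shape 100 times per second
POINTS_PER_TRACE = 360  # dot positions that make up one circle

def circle_positions(radius=1.0):
    """Yield the (x, y) spot positions that trace the circle once."""
    for i in range(POINTS_PER_TRACE):
        theta = 2 * math.pi * i / POINTS_PER_TRACE
        yield radius * math.cos(theta), radius * math.sin(theta)

# Time the dot spends at each position. If the whole circle is redrawn
# faster than the rough 1/20 s fusion window, the viewer sees a solid
# circle rather than a moving dot.
dwell_s = 1 / (REFRESH_HZ * POINTS_PER_TRACE)
trace = list(circle_positions())
print(f"{len(trace)} positions, {dwell_s * 1e6:.1f} microseconds each; "
      f"full circle redrawn every {1000 / REFRESH_HZ:.0f} ms.")
```

At those dwell times the dot is never in one place long enough to register as a dot at all.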
My mind is blown that the flip book was not invented sooner! At least I learned again that the proper term is kineograph. Let’s see how long it takes me to forget that again.