I’d like you all to listen to my wonderful discovery. (Movie compression may already use this technique, which will make me look quite stupid, but here is my idea.)
Basically, I understand that a “movie” is actually many still pictures played back quickly, making it look like actual “motion”. These images are stored as groups of pixels. Well, if you’ll notice, from one frame to another, many pixels will remain precisely the same color that they were in the last frame. They don’t change. Well, my idea is, rather than save each image of a movie, you only save the “motion”. If one pixel remains exactly the same color from one frame to another, then you simply do not save any data for it. The software program then realises that no data has been saved for this “pixel”, and it will automatically know to leave it the same color that it was in the last frame.
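The idea above can be sketched in a few lines. This is just an illustrative toy, not how real codecs store frames; the function names and the flat-list frame representation are my own assumptions:

```python
# Delta encoding sketch: a frame is a flat list of pixel values.
# Only pixels that differ from the previous frame are stored.

def encode_delta(prev_frame, new_frame):
    """Return {pixel_index: new_value} for pixels that changed."""
    return {i: new for i, (old, new) in enumerate(zip(prev_frame, new_frame))
            if old != new}

def decode_delta(prev_frame, delta):
    """Rebuild the new frame: pixels with no saved data keep their old value."""
    return [delta.get(i, old) for i, old in enumerate(prev_frame)]

frame1 = [0, 0, 0, 7, 7]
frame2 = [0, 0, 9, 7, 7]   # only pixel 2 changed

delta = encode_delta(frame1, frame2)
print(delta)                          # {2: 9} -- one pixel's worth of data
print(decode_delta(frame1, delta))    # [0, 0, 9, 7, 7] -- frame reconstructed
```

When most of the picture is static, the delta dictionary stays tiny, which is exactly the saving being described.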
Do video compressionists already use this technique? Am I behind the times? Or is this a breakthrough?
With the breakthrough of digital cable TV, in order to increase the number of channels etc., they use this same method. If you have digital TV, you may notice that sometimes the screen ‘freezes’ or patches of the screen ‘block’. This is where interference such as weather has affected the transmission, and the digital decoder is waiting for a signal to tell it to change the pixels.
Because the decoder needs to work out where to put the new signals, there is a split-second delay from when it receives the signal to when it appears on the screen. You may notice this if you have an analogue TV next to a digital TV showing the same thing. The digital one will be a frame or two behind. (This can happen if you link the TV to your music system: digital sound and analogue pictures!!! Makes you feel dizzy!)
Sorry to dash your dream of making millions. Keep trying!
AC gas discharge plasma. We used this technique, or perhaps one very similar to it, at a company I used to work for in the dim past, as a means of refreshing a display itself. The firmware would compare an incoming frame with the currently displayed frame in a buffer. Then it would build a table that was sent to the proper driver chips, telling them how and which individual pixels to update. Pixels that required no update were simply left in their current state. This significantly improved response time and actually decreased downstream processor requirements, which let us cut power consumption at the driver chips themselves. A huge improvement when you may have 2048 x 2048 pixels to address at 60 frames per second.
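The refresh scheme described above might look something like this. The grouping of updates by row driver and all the names here are my guesses at the structure, not the actual firmware:

```python
# Sketch of a dirty-pixel update table: compare the displayed buffer
# with the incoming frame and emit, per row, only the (column, value)
# pairs that row's driver chip must change. Unchanged rows are skipped.

def build_update_table(displayed, incoming):
    """Return {row: [(col, new_value), ...]} for pixels that differ."""
    table = {}
    for r, (old_row, new_row) in enumerate(zip(displayed, incoming)):
        changes = [(c, new) for c, (old, new) in enumerate(zip(old_row, new_row))
                   if old != new]
        if changes:               # a row with no changes sends nothing downstream
            table[r] = changes
    return table

displayed = [[0, 0, 0],
             [1, 1, 1]]
incoming  = [[0, 5, 0],
             [1, 1, 1]]           # only pixel (row 0, col 1) changed

print(build_update_table(displayed, incoming))   # {0: [(1, 5)]}
```

With a mostly static 2048 x 2048 image, the table carries a handful of entries instead of four million pixel writes per frame, which is where the response-time and power wins come from.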