It fascinates me how humans can filter individual sounds from noise. If you are in a room with, say, 10 conversations going on, all equally loud, you can quite easily pick one out and listen to it.
I wonder if the same can be done with vision (I mean apart from the obvious ability to recognise ‘things’ amongst the light.)
Imagine a TV with a 120 Hz screen (the picture updates 120 times a second).
Imagine 4 programs all playing at 30 frames per second.
Now divide the 120 Hz into repeating groups of 4 frames: put program 1 on the first frame of each group, program 2 on the second, etc…
So all 4 programs are playing, interleaved, on the same screen.
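The interleaving scheme is easy to sketch in code (a hypothetical toy, with frames represented as labels): the 120 Hz stream is just the four 30 fps streams round-robined, and "filtering" one program out means keeping every 4th frame, which is exactly the job the eye/brain would have to do.

```python
# Toy sketch of the proposed scheme: four 30 fps streams woven into one
# 120 Hz stream, and one stream "filtered" back out. Frames are just
# labels like "B1" (program B, frame 1) for illustration.

def interleave(programs):
    """Round-robin the frames of several equal-length programs into one stream."""
    return [frame for group in zip(*programs) for frame in group]

def extract(stream, channel, n_programs=4):
    """Recover one program by keeping every n-th frame of the mixed stream."""
    return stream[channel::n_programs]

progs = [[f"{name}{i}" for i in range(3)] for name in "ABCD"]
mixed = interleave(progs)
print(mixed[:8])          # first 8 frames of the 120 Hz stream
print(extract(mixed, 1))  # program B's frames recovered
```

A machine can do the extraction trivially with a slice; the question in the thread is whether a human visual system could do anything like it.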
I wonder if we could easily filter 3 out and watch one.
If that is easily possible, wouldn’t that be a great piece of technology?
My WAG is that it wouldn’t work; our eyesight wouldn’t be able to pull out whichever of the 4 we chose.
As for discriminating among different sound sources, there are other factors that come into play. For instance, directionality makes it easier to separate sounds. Your single monitor wouldn’t have that.
Also, a single video is perceived much the same way as what you propose: a rapid series of still frames giving the illusion of action. I think our eyes/brains would interpret it either as a single crazily-edited video, or as some kind of virtually uninterpretable visual noise, in that there would be no frame-to-frame continuity which our brains could use to construct the illusion of motion/action.
And you also have the sound for each of the four programs. What are you going to do about that?