How far away do you reckon we are from this kind of technology?

I was daydreaming in cartoon pictures on the train this morning when it struck me that my inner canvas (and presumably most other people's) can translate a story or idea into animated pictures far more quickly and easily than I could ever produce those ideas as a video animation, even if I were a skilled animator with clever tweening software and so on.

So… somewhere in someone’s technical roadmap, I reckon there must be a concept for a system that can ‘daydream’ a concept or story into visible form. You describe the characters, you describe the action (all in plain speech) and the machine produces it in the same sort of way that the mind’s eye of a human listener might, except that it produces something that can be extracted as video and published.

Obviously, high-quality natural language interpretation is absolutely essential to this idea, but what else would we need that we don't currently have? How far away is this sort of tech, do you reckon? Or isn't it ever going to happen?

You can kind of already do that; you just have to program it and do all the prep work first. Most animations, like Mickey Mouse, Cars, etc., are done this way: motion and sound captured via sensors can be attached to scripted rendering programs more or less instantly. Blender is a free program that does a basic version of this, where you build a 3D model of the character, define parameters like motions, gravity and physical limits, and you've got an animation. Of course, this is driven by physical motion and not brain waves, but there is tech out there that is bridging that gap, if it hasn't already. :wink:
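For a flavour of the "scripted" part, here's a minimal sketch using Blender's Python API (bpy). The cube, positions and frame numbers are just placeholders I've picked for illustration; the point is that once the model and keyframes exist, the software tweens everything in between for you:

```python
# A minimal keyframe animation; run from Blender's Text Editor or Python console.
import bpy

# Add a cube to the scene and grab a reference to it.
bpy.ops.mesh.primitive_cube_add(location=(0.0, 0.0, 0.0))
cube = bpy.context.active_object

# Record its position at frame 1...
cube.keyframe_insert(data_path="location", frame=1)

# ...then move it and record the new position at frame 60.
# Blender interpolates ("tweens") every frame in between automatically.
cube.location = (5.0, 0.0, 2.0)
cube.keyframe_insert(data_path="location", frame=60)

# Match the scene's playback range to the animation.
bpy.context.scene.frame_start = 1
bpy.context.scene.frame_end = 60
```

Run that and you get a cube sliding across the scene with no hand-drawn in-between frames, which is the same principle a rigged character uses, just with a lot more parameters.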

A crude version of this exists now: If you can type, you can make movies

Not necessarily; another possible way to pull it off would be a sufficiently detailed and processed scan of the brain's visual cortex. They can already get crude data from there: whether you're thinking of a large or small object, or of light or darkness. In theory, it may well be possible to pull your imagined image right out of your brain and put it directly on screen. The odds are good it would require some kind of implant, though, given the inherent difficulty of finely scanning brain tissue through skull and skin.

Understood; however, that requires a good deal of skill and planning in the initial setup, which limits the range of animation that can be realised without revising the setup.

If I say "Homer Simpson is eating whilst driving. He accidentally bites off a piece of the steering wheel, but chews and swallows it without noticing", the chances are you just saw that scene play out in your mind's eye - not necessarily the same in detail as the one that played in my head, or anyone else's - but there was a more or less real-time visualisation process going on. That's the sort of thing I'm hoping for…

Thanks for that link - interesting… As you say, crude, but it’s a step in the direction I had in mind.