Stephen Thaler's Creativity Machines -- Hype or The Real Thing?

While doing some googling for the NDE thread currently going on in GD, I chanced upon the research of Dr Stephen Thaler. Basically, he’s simulated the ‘dying’ of artificial neural networks by severing their connections, and noticed an odd thing: first, it seems as if the ANN ‘relives’ past activation patterns – it recalls what it has been trained on (i.e. its ‘life flashes before its eyes’, in a manner of speaking). Now, that’s intriguing enough, but at the very end of the ‘dying’ process, the ANN appears to generate genuinely novel activation patterns – it produces memories it never had – it confabulates, so to speak.
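To make the idea concrete, here is a toy illustration (emphatically not Thaler’s actual code – all names and numbers here are made up for the sketch) of what ‘severing connections’ means: take a small feedforward network, zero out its weights one by one, and watch how far its response to a fixed input drifts from the intact network’s response.

```python
# Toy sketch of 'dying' by severed connections. The weights below are
# random stand-ins for a trained network; everything is hypothetical.
import math
import random

random.seed(0)

def forward(weights, x):
    """One hidden layer, tanh activations; weights is (W1, W2)."""
    W1, W2 = weights
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    return [math.tanh(sum(w * hi for w, hi in zip(row, h))) for row in W2]

# Stand-in for a trained network: 3 inputs -> 4 hidden -> 2 outputs.
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]
W2 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(2)]

x = [0.5, -0.2, 0.8]             # a fixed probe input ('memory cue')
baseline = forward((W1, W2), x)  # the intact network's response

# Sever connections one at a time, in random order, and track how far
# the output moves away from the intact response.
connections = [(W1, i, j) for i in range(4) for j in range(3)] + \
              [(W2, i, j) for i in range(2) for j in range(4)]
random.shuffle(connections)

drifts = []
for n_cut, (W, i, j) in enumerate(connections, start=1):
    W[i][j] = 0.0                # cut this connection permanently
    y = forward((W1, W2), x)
    drifts.append(math.dist(y, baseline))
    if n_cut % 5 == 0:
        print(f"{n_cut:2d} connections cut, output drift {drifts[-1]:.3f}")
```

Early on the output stays close to the original (‘recall’); as more connections go, the network settles into states it never produced while intact – a very loose analogue of the confabulation Thaler describes.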

What Thaler then did was to attach a second, unperturbed ANN to the first, in order to observe the output of the first while it was continually perturbed (though not to the point of ‘death’). The first network, which he calls the ‘imagitron’, had been trained on some dataset, and the perturbation causes it to output novel concepts related in some way to that dataset; the task of the second ANN is then to judge the quality of these outputs, rewarding ‘good ideas’ and punishing bad ones.
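The generate-and-judge loop can be sketched in a few lines. This is a deliberately simplified, hypothetical version – the ‘generator’ is just a stored pattern plus noise rather than a real perturbed network, and the ‘critic’ is a hand-written scoring rule rather than a second trained ANN – but it shows the shape of the architecture: one component proposes noisy variations, the other selects among them.

```python
# Hypothetical sketch of the imagitron-plus-critic idea (not Thaler's
# implementation): perturbed recall of a stored pattern, filtered by a
# critic that rewards bounded novelty.
import math
import random

random.seed(1)

PATTERN = [0.9, -0.4, 0.1, 0.7]   # stands in for the training data

def generate(noise):
    """Perturbed recall: the stored pattern plus random noise."""
    return [p + random.gauss(0, noise) for p in PATTERN]

def critic(candidate):
    """Toy quality measure: reward novelty (distance from the stored
    pattern) but penalise values outside [-1, 1]. A real critic would
    itself be a trained network."""
    novelty = math.dist(candidate, PATTERN)
    penalty = sum(max(0.0, abs(c) - 1.0) for c in candidate)
    return novelty - 5.0 * penalty

candidates = [generate(noise=0.3) for _ in range(200)]
best = max(candidates, key=critic)
print("best score:", round(critic(best), 3))
```

The interesting design question, which this sketch dodges, is where the critic’s notion of ‘good’ comes from – in Thaler’s telling it is itself learned, which is what would make the system more than a random-search wrapper.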

Using this architecture, Thaler has apparently created (or rather, had his machine create) novel designs and ideas in many fields – ranging from the composition of music to the Oral-B CrossAction toothbrush. He claims that his architecture is much more effective at coming up with novel ideas than fuzzy logic or even genetic algorithms. Here’s an article going into somewhat more detail, here is the homepage of Thaler’s venture, Imagination Engines Inc., and for those with half an hour to spare, here’s a documentary about his ideas. I should warn you, though: Thaler’s claims occasionally range from the grandiose to the downright nutty, and his style is sometimes very reminiscent of, well, that of certain TV evangelists and other gurus.

This brings me to my question – there’s certainly a great deal of self-aggrandizement here, and more than one of his claims seems rather overblown. But still, given the measurable successes of his creative machines (the article I linked to contains an anecdote: “His first patent was for a Device for the Autonomous Generation of Useful Information, […] His second patent was for the Self-Training Neural Network Object. Patent Number Two was invented by Patent Number One.”), it seems odd that I have heard nothing about this until now. I mean, a machine that composes music? That designs toothbrushes? I used to own one of those toothbrushes, dammit.

So what’s the deal, here? How is his work perceived in the larger AI community? If even half of his claims were correct, it seems he’s a good deal ahead of most of the rest of the field. So why is there such a mismatch between the alleged success of his work and the attention it has received?

I’ve nothing to add on the technical side of the subject (although it’s something I find very interesting). I just wanted to mention that this sounds in some ways quite similar to one of Salvador Dali’s creative techniques: he would sit holding a spoon in his fingertips over a tin cup or plate; as he drifted off to sleep, the spoon would fall from his grasp and wake him again, letting him spend longer on the fringes of unconsciousness – and this would apparently bring new ideas to his mind.

Hm, that’s an interesting connection to make. Perhaps along similar lines, Nostradamus reportedly used to stare into a bowl of water in order to enter a trancelike state before writing down his ‘predictions’. Maybe using some source of randomness facilitates creativity?

On another note, here are samples of music written by one of Thaler’s machines. I wonder how much human input went into the making of this collection – certainly, if you have a computer generate a couple of thousand songs more or less randomly, having one or two turn out listenable is not that great a feat, so using excessive human censorship diminishes the impact somewhat. Still, listening to a piece of music that wasn’t written by any human being is somewhat eerie…

Hmm, doesn’t anybody have anything to add? No AI specialists out there willing to weigh in?

I saw his stuff back in the late 90s. It seems to be technically feasible, but judging from the lack of results, it is not the panacea he claims. While he may be correct about some of the ideas, I think a more rigorous approach is needed.

I have done a lot of studying, and I can tell you his ideas are either not accepted or simply not a part of the general neural network discussion. This field would not be opposed to new ideas – it is, after all, a rather ambitious field.

For some reason he hasn’t published in the typical neural network journals, so I can only assume the above. The rigorous discussions continue, and the field is truly exciting.