Holy crap, this is some mind-melting stuff. Google’s “Deep Dream” software takes images, identifies certain features and builds on them, often arriving at results that are completely different from the source image. Some fine still examples here.
VICE just put up a story featuring porn pics run through Deep Dream… it’s pretty twisted, totally NSFW, and contains at least one image that could disrupt your own deep dreams… forever.
I downloaded the software last night and I’m going to try setting it up on my computer. I want to see what happens when I turn it loose on my sketchbooks.
Did this algorithm get frontloaded with the notion that it should see/find animal faces? I mean, it isn’t drawing flowers or skyscrapers, it’s inventing faces, most of them of earthly but nonhuman design.
The “neural network” is “trained” by showing it a vast number of known pictures and letting the AI software figure out for itself what their common salient features are. Apparently, the version they released to the public had been “trained” on a large number of dog pictures, although there were other pictures in the training set too. That’s why you also see some birds, fish, and other strange stuff showing up.
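For the curious: once the network is trained, the dreaming part is just gradient ascent on the input image itself. You pick a layer, then repeatedly nudge the pixels so that layer’s activations get stronger, and whatever the layer learned to detect (dogs, birds, fish) gets hallucinated into the picture. Here’s a toy sketch of that loop in NumPy. The “layer” here is just a fixed weight array standing in for a real trained network, so this only illustrates the mechanism, not Google’s actual code:

```python
import numpy as np

def dream_step(img, feature, lr=0.1):
    """One gradient-ascent step that makes `img` excite `feature` more.

    The toy "layer activation" is sum(img * feature), so its gradient
    with respect to the image is just `feature` itself.
    """
    grad = feature
    # Normalize the gradient by its mean magnitude, as the released
    # Deep Dream code does, so the step size is scale-independent.
    grad = grad / (np.abs(grad).mean() + 1e-8)
    return img + lr * grad

rng = np.random.default_rng(0)
img = rng.random((8, 8))                 # stand-in for an input photo
feature = rng.standard_normal((8, 8))    # stand-in for learned "dog detector" weights

before = float((img * feature).sum())    # how strongly the "layer" fires now
for _ in range(20):
    img = dream_step(img, feature)
after = float((img * feature).sum())     # fires more strongly after dreaming
```

In the real thing the gradient comes from backpropagating through a deep trained network (Google used a Caffe model), and the loop is wrapped in tricks like running at multiple image scales, but the core idea is this same “amplify whatever the layer already sees” step.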
Somewhere on-line I saw some other pictures, produced by a network that had been trained mostly with pictures of buildings. Then, given a picture of a forest, it turned all the trees into pagodas. I’ll come back with a cite if I can find it again.
I’m still looking for the one I saw earlier that showed a whole forest of trees turned into pagodas.
In the meantime, here are some other bits and pieces:
Inceptionism: Going Deeper into Neural Networks, June 17, 2015. Blog post from Google Research discussing a bit of what goes on in training their networks. Shows one example of a tree turned into a pagoda, plus lots of clouds turned into strange fish, and other examples.