Bob Ross DeepDream Video

Someone used DeepDream technology to make this video of Bob Ross: https://www.youtube.com/watch?v=5DaVnriHhPc

Dude! That’s an LSD Bob Ross. Bring back my THC Bob Ross!

I found that a few months ago; I’ve watched it like 12 times.

The Canvas. It spoke to me. :eek:

Wowza! I never liked his voice. But that’s just creepy as hell.

Too much high-frequency flashing going on. For pure DeepDream mellow action, check out The Grocery Trip.

I wonder what the training images were for the Bob Ross one. Clearly DOGS for the grocery one.

What’s with all the random animals? Why does the algorithm associate Bob Ross with dogs and centipedes?

This is wrong. Bob Ross should be comforting, not disturbing.

I much prefer the Hendrix one where they used Moebius’s paintings.

The Bob Ross video just made my eyes hurt. The grocery shopping one wasn’t as bad, but still not very watchable. The Jimi Hendrix video was pretty cool, though; sorta took me back to the early days of MTV.

Legit

Because the software has been trained to see animals. The original intent was to create a system that could classify images: you show it a picture of a dog and it correctly identifies it as a dog. You start by showing it many images, with and without dogs, and telling it which ones are (or aren’t) pictures of dogs; do this with enough varied pictures and it learns what dogs look like and gets really good at spotting them in new pictures it hasn’t seen before.
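In code, that training step looks roughly like this. This is a minimal PyTorch sketch with random stand-in tensors where the labeled dog photos would go, and a toy network in place of the big classifiers actually used; nothing here is from any real DeepDream codebase:

```python
import torch
import torch.nn as nn

# Stand-in data: 64 random "images" with dog / not-dog labels.
# A real run would load thousands of labeled photos instead.
images = torch.randn(64, 3, 64, 64)
labels = torch.randint(0, 2, (64,))  # 1 = dog, 0 = not a dog

# A tiny convolutional classifier; real systems are much deeper.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Show it pictures, tell it which ones are dogs, repeat.
for epoch in range(10):
    optimizer.zero_grad()
    logits = model(images)
    loss = loss_fn(logits, labels)  # penalize wrong dog/not-dog guesses
    loss.backward()
    optimizer.step()
```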

Once your software has been trained to be good at seeing dogs, you make a crucial change. Now when you feed it a new picture, you tell it to identify things in the picture that look a little bit like dogs - and then you have it modify those features in the original picture to look a little bit more like the dogs it thinks it sees. Then you iterate until the first faint hints of dogs in the original image have gradually morphed into very distinct (if highly distorted) dogs and dog parts.
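That iterate-and-amplify loop is just gradient ascent on the input pixels. Here is a bare-bones sketch using torchvision’s pretrained GoogLeNet (the network family the original DeepDream work used); the layer choice, step count, and step size here are illustrative guesses, not taken from the video:

```python
import torch
import torchvision.models as models

# Pretrained classifier that already "knows" what dogs (and 999 other
# ImageNet categories) look like.
model = models.googlenet(weights="DEFAULT").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Capture the activations of one intermediate layer with a hook; which
# layer you pick controls what kind of features get amplified.
activations = {}
model.inception4c.register_forward_hook(
    lambda mod, inp, out: activations.update(seen=out)
)

# Start from any image; random noise works for a bare-bones demo.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

for step in range(50):
    model(image)
    # "Make what you faintly see look more like what you were trained
    # on": maximize the layer's response by gradient ascent on pixels.
    loss = activations["seen"].norm()
    loss.backward()
    with torch.no_grad():
        image += 0.05 * image.grad / (image.grad.abs().mean() + 1e-8)
        image.grad.zero_()
        image.clamp_(0, 1)
```

Roughly speaking, hooking a shallow layer amplifies edges and textures, while a deep layer produces eyes, fur, and whole dog faces - which is presumably why these videos are wall-to-wall animals.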

As FoieGrasIsEvil’s Hendrix/Moebius video shows, you get very different results when you train the software to be good at seeing different things. In that case, the software wasn’t “trained” on images of dogs but on specific Giraud paintings - so everywhere it looked, it decided it saw Giraud paintings.