Digital art creator algorithm website

As for the settings, the third is prompt-only: “yokai parade kei toume”. The other three have source images plus the prompt “Satsuki and Totoro Toume Kei”. I have been doing a lot of playing around with trying to generate plays on the classic My Neighbor Totoro bus stop scene. I use a real shot from the movie, a horror drawing, and a photo of a physical 3D-printed horror figurine; the three other images are one from each of those sources.

(I have an astonishing diversity of images generated from those three, but that’s a different post.)

Protip:
When you use an image as a template, download the initial blurred image that is created at iteration 0. Study it to see which details are kept and which are lost in the initial blurring. It may give you ideas about what to edit in the template image to remove weird bits or to make things lost in the blurring more prominent. You can also save the blurred image itself for use as a new template.

Look at the blurred images, or sections of them, and see if any part reminds you of anything interesting. Use that image as a source with the prompt being what you think it is; the AI might agree with you. (Or tell it you want something completely different and see what happens.)
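
If you want to experiment with this offline before spending credits, you can fake a rough stand-in for the iteration-0 blur yourself. This is only a sketch with Pillow; the site’s actual preprocessing isn’t documented here, so the downscale size, blur radius, and file names below are all my own guesses:

```python
# Rough offline approximation of an iteration-0 "shade": heavily downscale,
# re-enlarge, and blur a template so you can study which details survive.
# Assumes Pillow (pip install Pillow); the real site's preprocessing may differ.
from PIL import Image, ImageFilter

def make_shade(path, out_path, small_side=64, blur_radius=4):
    img = Image.open(path).convert("RGB")
    w, h = img.size
    # Downscale hard, then scale back up, losing fine detail roughly the way
    # the generator's early iterations seem to.
    scale = small_side / min(w, h)
    small = img.resize((max(1, round(w * scale)), max(1, round(h * scale))), Image.LANCZOS)
    shade = small.resize((w, h), Image.BILINEAR).filter(ImageFilter.GaussianBlur(blur_radius))
    shade.save(out_path)
    return shade

# Example (hypothetical file names): compare the shade against the original
# template to decide what to darken, brighten, or paint out.
# make_shade("bus_stop_template.jpg", "bus_stop_shade.jpg")
```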

I thought this blur looked like a small dog or stuffed animal. The AI agreed and made a pretty good oil painting. (Also pretty good monsters.) I might later try it with “stuffed animal” or “furby” or similar.

I thought this blur looked like some monster head rising from water on a long neck and giving a very wide-mouthed scream. Or, alternately, a dog face close to the camera with the black nose distorted by forced perspective/fisheye. The AI wasn’t willing to give me quite what I wanted on either front, but a little editing of the template to clean up/darken the “mouth” would probably help. Also, the failed dog face provided a new template that, after a little editing (particularly of the right side of the face), could generate a new dog, possibly a terrier.

(Feel free to crop out and try any of my templates.)

OK, that’s absolutely perfect for Monty Python’s Flying Circus.

@DPRK, your “something that can not be described” isn’t showing in the thread for me (though the link still works).

1930s-style death ray

~Max

I’ve been trying to get the concept of viewing from behind (viewing from the back, viewing from the rear, etc.) across to the algorithm. I’m trying to avoid the difficulties it has with faces.

I haven’t been able to do it. For example, trying to render a sugar fairy from the back, even with a starter image, will get faces anyway, or extra butts, extra rears, extra backs, and so on.

Is there a good word or phrase for ‘viewed from the rear’ that doesn’t involve a synonym for butt?

It still shows up for me… can anyone else not see it?

I found one web portal that accepted a blank prompt:


I guess that is just what the random-number generator happened to land on. (Nuked City at Sunset??)

“black is white”:

Hmm… so I tried “black and white”:

I would love to try playing with some of the more interesting (read: more complex) models, but unless they run on a free portal like the one in the OP (Google Colab Pro isn’t free either) or on some old GPUs I can scrounge up, I am not sure I will be able to. Also note that some models, like DALL-E-2, are not on GitHub, so you need a special access code, which some people have: DALL·E 2. I’m not going to try running that at home in any case, but you can see that more complex models simply give many more possibilities to both the artist and the machine-learning tweaking expert.

This is what your post looks like to me.


I’d love to be able to run the super-resolution upscaler locally and for free. I can’t find confirmation, but I’m betting that they use one called ESRGAN.
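
While waiting to find out what they actually use, one free way to do super-resolution locally is OpenCV’s contrib dnn_superres module. It ships EDSR/FSRCNN-type models rather than ESRGAN itself, so treat this as a sketch of the general approach, not a recreation of whatever Nightcafe runs (the file names are placeholders):

```python
# Local 4x super-resolution with OpenCV's dnn_superres module.
# Requires opencv-contrib-python and a pretrained model file (e.g. EDSR_x4.pb,
# downloaded separately). This uses an EDSR model, not ESRGAN.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")            # path to the downloaded model file
sr.setModel("edsr", 4)                # algorithm name and scale must match the file
img = cv2.imread("nightcafe_output.png")
upscaled = sr.upsample(img)           # returns the 4x-enlarged image
cv2.imwrite("nightcafe_output_4x.png", upscaled)
```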

If you go here

and click on link “B”, there is a GitHub repo with Windows + Mac + Linux executables. I haven’t tested it.

How about “facing away”, “walking away”, or similar?

“Walking away” does seem to help. Thanks!

Although I still have to tweak it some.

“Facing away” is maybe even better. But I did use a different starter image.

As I suspected, just darkening what I thought was the “mouth” on the image produced a proper screaming monster.

The simplest one is “screaming monster”. The other two are “worm monster”. It ended up putting an alien astronaut in one and a planet in the other.

A Pile of Rags and Meat

Well, that’s what I asked for and that’s what I got. I like the extra flair along the borders. I also laughed when it did the little promo with this picture hanging over a desk, “Buy this for your home or office!”

Another Day, Another Way

Uhh… AI? You feeling okay there, buddy?

Awesome.

The output really messes with my head: I keep staring at small details and trying to figure out what they are supposed to be, when they aren’t supposed to be anything, because there is no conscious mind creating it. But some of them look like there was an artist actively creating them, especially some of the amazing results from using the kei toume modifier. For example, this one:

At first I thought the green stuff in the tiny diorama was seaweed and the scene underwater, but then I wondered if the blue and white thing was a tropical bird, which would make the green stuff grass, before reminding myself that neither it nor anything else about the image is actually anything.

As for the ingredients of the image: I had a photo I took of a mushroom and decided it needed to be a gnome house. So I found a gnome photo on Google Images and pasted it in. The AI failed to see the gnome’s hat on the first try, so I went back to the original and brightened him. Using that image got interesting results, so I added a couple more gnomes. This is the oil paint preset along with “gnomes and mushroom home” and “kei toume”. (Never mind that this image doesn’t look like gnomes or a mushroom; some do.)

These are the blurry images (which I’ve been thinking of as “shades”) that can be used in generating images: one gnome and three gnomes. I don’t know why one render made the raspberry-like thing, but it is the one that resulted in this image of the miniatures, so it has proven useful.

Those images were resized with a free site I found. It doesn’t give mind-blowing invented details like the AI does, but it may give better results than the resizer built into whatever programs you might be using. I like the B-spline filter; I hate the triangle filter.
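
For anyone curious, the “triangle” vs. “B-spline” choice is just which interpolation kernel the resizer uses when inventing the in-between pixels. A quick way to compare the two yourself is bilinear (triangle) resampling via Pillow against cubic B-spline interpolation via SciPy; this is only a sketch, and the file names are placeholders:

```python
# Compare a triangle (bilinear) upscale with a cubic B-spline upscale.
# Requires Pillow, NumPy, and SciPy; input/output file names are placeholders.
import numpy as np
from PIL import Image
from scipy import ndimage

img = Image.open("crop.png").convert("RGB")
factor = 4

# Triangle filter: Pillow's BILINEAR resampling.
triangle = img.resize((img.width * factor, img.height * factor), Image.BILINEAR)
triangle.save("crop_triangle.png")

# Cubic B-spline: scipy.ndimage.zoom with spline order 3 (zoom of 1 on the color axis).
arr = np.asarray(img, dtype=np.float32)
bspline = ndimage.zoom(arr, (factor, factor, 1), order=3)
Image.fromarray(np.clip(bspline, 0, 255).astype(np.uint8)).save("crop_bspline.png")
```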

(The first image is a 4:3 crop from the wide original, upsampled to 4500 pixels wide. The two additional crops are also upsampled to 4500 pixels wide. And of course the image I upscaled on Nightcafe itself was 4x narrower, with 16 times fewer pixels. Going from the very original output at Nightcafe to crop number three, around 1,500 to 1,600 times as many pixels have been added to the most magnified segment.)
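
For a sense of where a figure like 1,500–1,600x comes from: pixel count grows with the square of the linear magnification, so a strip of the original output only about 115 pixels wide, blown up to 4500 pixels wide, already lands in that range. The source width below is an illustrative guess, not the actual crop size:

```python
# Back-of-the-envelope check on the pixel-count ratio for the most magnified crop.
final_width = 4500          # width each crop was upsampled to
source_region_width = 115   # hypothetical width of that region in the original output
ratio = (final_width / source_region_width) ** 2
print(f"~{ratio:.0f}x as many pixels")  # ~1531x, within the 1,500-1,600 range above
```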

Pegasus, the Flying Horse

I like that better than what it evolved into, but it turns out I gave it the wrong lyrics. I should try some of these and see what happens.

I wonder what it would do with Spacegooose’s drawings (Cosmic setting, of course).

I like big plums and I cannot lie.

In good news, the AI is apparently not senile:

Banana Chair Sunrise

Or maybe it is…

Person Woman Man Camera TV