Digital art creator algorithm website

I’ve been retrying some of my old prompts and I have to say the “Stable” algorithm generally kicks the butt of what was available before.

You can also try going very simple, with vague one- or two-word prompts, like joyful fear, happy horror, happy fear, sad anger, ugly scary, ugly happy, terrible photo, beautiful terrifying, etc. I’ve done lots of those in Craiyon and Min-Dalle with very interesting (but 256x256 pixel) results. The output on Stable Diffusion can be more disappointing (it makes cartoon drawings more often, for instance), but it makes interesting stuff too. These three sets are “cute ugly”, “terrifying photo”, and “ugly scary cute photograph”.

Here are some Min-Dalle images made with things like “ugly cute”, “beautiful terrifying”, and “ugly scary animal-person hyperreal”.

I have been curious to try the “not recommended” medium resolution with Stable Diffusion, which costs six credits for one image or twelve credits for four. I decided to go with one of the prompt formulas that I recycled from “stable”, which often produces very pleasant, interesting landscapes. This is a fresh set at the default 576x384 “thumbnail” resolution:

And this is the 1024x768 “medium” resolution:

It even pops up a “do you really want to do this?” dialog warning you that the higher resolution might end up looking like crap and should only be used for abstract and experimental images. And the results are probably a bit less polished than the lower resolution, but I don’t hate them and don’t think that I wasted the three credits each on them.

That formula of prompts (with slightly different settings) produced one of my favorite Stable Diffusion images. I wanted it in 16:9, so I mirrored the two outer edges, then did some cloning to break the symmetry and make the mirroring less obvious. I then upscaled it with a Super-resolution algorithm and adjusted the color and contrast a bit. I call it “Stargazers”.
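In case it’s useful, here’s roughly how the mirror-extend step can be done in PIL (a sketch; the filenames are placeholders, and the cloning to break the symmetry still happens in an editor):

```python
from PIL import Image, ImageOps

src = Image.open("stargazers_src.png")  # placeholder filename
w, h = src.size
target_w = h * 16 // 9                  # width needed for 16:9
lpad = (target_w - w) // 2              # extra columns on the left
rpad = target_w - w - lpad              # and on the right

# reflect a strip from each outer edge so the seams stay continuous
left = ImageOps.mirror(src.crop((0, 0, lpad, h)))
right = ImageOps.mirror(src.crop((w - rpad, 0, w, h)))

out = Image.new("RGB", (target_w, h))
out.paste(left, (0, 0))
out.paste(src, (lpad, 0))
out.paste(right, (lpad + w, 0))
out.save("stargazers_16x9.png")
```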

Nice!

I’ve done a few Olan Mills in Stable Diffusion. I think the floating head in the upper-left image is a deliberate attempt at a floating head, not a coincidentally missing body.

(Those are “puppet family portrait by olan mills”.)

So what happened to Min-Dalle? It used to be that when I used it, it would run immediately or the queue would be fewer than ten deep. I recently decided to run some of its sample prompts through Stable Diffusion*, and now whenever I try to run an image I see a queue more than a thousand deep, with wait times of several hours. Has it been “discovered”? Some sort of bug?

*As for the tests, Stable Diffusion does a lousy job with courtroom drawings, broccoli nukes, and Dali WALL-E, but does an excellent “Elmo in a street riot throwing a Molotov cocktail, hyperrealistic”.

I don’t get Night Cafe’s censorship rules or algorithm or whatever. I’m not a prude; I’m also not a raging pervert. I’m not trying to produce pornography, but I don’t understand why “Hot Tattooed Chick in Fishnets” gets blurred (even though it looks like it might have come out topless) when there are slews of nudies out there with full-on nipplage and bush. I even saw a big swinging hog the other day.

What I’m trying to say is I want to see my hot tattooed chick naked, goddamnit! Although my multi-colored transsexual mermaid did come out pretty risque.

This (auto-AI-censorship) must be an issue with the Night Cafe interface rather than with the underlying published model. I noticed in some official sample scripts that it blacks out images according to a “customized trained NSFW classifier”.

I do not imagine that the training data especially included a lot of hard-core pornography or violence (though I have not tried to check) but in principle there should be no problem with nude models, or “David”, much less a non-nude “hot tattooed chick in fishnets”. Night Cafe is just censoring your outputs before displaying them.
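If you run the published model yourself, the filter is optional. A minimal sketch, assuming the Hugging Face diffusers packaging (which, as far as I can tell, bundles that same classifier as a separate pipeline component):

```python
from diffusers import StableDiffusionPipeline

# with the checker in place, flagged outputs come back blacked out
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
print(type(pipe.safety_checker))  # the bundled NSFW classifier

# running locally, you can simply opt out of it
pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", safety_checker=None
)
```

Night Cafe evidently runs it (or something stricter) and shows you the blurred result instead.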

I have had classical-style painting outputs come back blurred. For instance, trying to do Botticelli’s Birth of Venus in the style of someone else produced blurred images. And recently a “skeleton family portrait” had one of four images blurred, and I can’t imagine what was censor-worthy in that one.

Another word that I have discovered to be blocked is “massacre”.

A new senselessly censored image

I don’t know what it thought it saw, but it refunded a credit, so I got three images for one instead of four for two.

A thought struck me while I was trying to sleep.

Here’s a version of Stable Diffusion that makes seamless tiles. It has worked flawlessly on every prompt that I’ve tried.
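I don’t know exactly how that fork does it, but the usual trick (a sketch, not necessarily their code) is to switch every convolution in the model to circular padding, so the generated image wraps around at its edges and tiles seamlessly:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")

# wrap-around padding instead of zero padding in the UNet and VAE
for model in (pipe.unet, pipe.vae):
    for module in model.modules():
        if isinstance(module, torch.nn.Conv2d):
            module.padding_mode = "circular"
```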

SD may have no clear idea what a “krasue” is (the Thai spirit that manifests as a woman’s floating head trailing its entrails), but it definitely leads to some interesting images in complex prompts.

(Two of those show some hint in recognizing that strings of guts are involved.)

All of those are from the prompt

“Krasue family parade | full-length portrait | crackled oil painting | Dan witz | margaret keane | Zdzisław Beksiński | kei toume | junji ito | dungeons and dragons guidebook | postcard”

which is a prompt set originally built for “coherent”. I’m sure some of the terms in there are doing nothing, but I haven’t tried to trim them out. Replace “krasue” with anything else you please. (Adding “night sky with galaxies” into the prompt set produces additional interesting results.)

In the interest of equal time:

https://creator.nightcafe.studio/creation/s5FY3Yc6Tf9iCHBOaBCr

Amusingly, all the folks in the poster seem to be wearing uniforms, but it’s a different uniform for each of them. Which rather defeats the purpose of a “uniform”.

Early on in “coherent”/Disco Diffusion I tried making Danbo but got only a vague Danboishness (it was mentioned somewhere in this thread). He is a much better subject in Stable Diffusion, and I have put him in a wide variety of prompts. Here are some of the better results.

(One of the more interesting mods I discovered while doing this is “decopunk”.)

For Stable Diffusion at least, one tool that supposedly deals with this is making a note of the parameters (especially the seed): you can then generate variations (specifying the amount of variation), variations of variations, etc., and finally take weighted linear combinations of variants to get the exact elements you want.
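My understanding of what those variations do under the hood (a rough PyTorch sketch, not the actual script code): each seed deterministically generates the starting noise tensor, and a variation is a weighted blend of the base seed’s noise with a subseed’s noise.

```python
import torch

def seed_noise(seed, shape=(1, 4, 64, 64)):
    # each seed deterministically yields the same starting latent noise
    g = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=g)

base = seed_noise(1234)   # the image you liked
sub = seed_noise(5678)    # a variation subseed
strength = 0.2            # "amount of variation"

# weighted linear combination, renormalized so it still looks like
# unit-variance gaussian noise to the sampler
mixed = (1 - strength) * base + strength * sub
mixed = mixed / mixed.std()
```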

One thing I have noticed about SD is that the same prompt with the same seed produces the exact same image (at least in my small sample). DD produces anything from fairly similar to wildly different images from the same prompt and seed, never a completely identical one.

(On a similar note, I did a test with the same prompt and seed but with the different sampling methods, K-LMS and the others. Varying that aspect produced a group of extremely similar but not quite identical images.)

Off the top of my head, besides the prompt and seed there is the choice of sampler, the scale coefficient, and the number of steps (possibly “mixed” versus full precision makes a difference, not sure), then the variation subseeds and weights, and possibly other stuff depending on which version of the script is used.
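For reference, here’s a minimal sketch of those knobs, assuming the diffusers packaging of the model (parameter names vary between scripts); fix all of them and the output should reproduce:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")

image = pipe(
    "puppet family portrait by olan mills",
    num_inference_steps=50,       # number of steps
    guidance_scale=7.5,           # the "scale" coefficient
    generator=torch.Generator("cuda").manual_seed(1234),  # the seed
).images[0]
image.save("repro.png")
```

The sampler is chosen by swapping out pipe.scheduler, and half versus full precision is the torch_dtype passed to from_pretrained.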

That sounds unintended; the point of a “seed” is that the results be reproducible.