The latter two place limits on the number of free creations you can do per day, with paid tiers above that. Bing has no paid tier and no hard limit on the number you can do per day, but it throttles the speed after 15 runs in a day.
Midjourney is paid-only, so I’ve never used it, but it produces some of the best quality; it was once the best hands-down, but now SDXL and especially Dall-E 3 give it serious competition in many ways.
There is also Dreamlike Art, which gives limited free daily credits for access to a handful of Stable Diffusion models plus Kandinsky, which is basically “Russian Dall-E”. Kandinsky is my least favorite of the AIs I use, though.
And there is Craiyon, which is pretty low-end by modern standards and I rarely even think about it, but it isn’t entirely useless.
Just for the heck of it, I ran the prompt that created the alien with baby Earth image (Photo of a very small planet earth in a hospital delivery room. Earth’s mother looks distressed.) through Night Cafe’s “coherent” model, the one used through much of the early part of this thread. Here’s the result:
As for the first Birth of Earth image, it was a continuation of my recent attempts at rhyming images. I was aiming for something more graphic, more like (if not quite as explicit as) the Big Bird giving birth images that crop up in different places.
Bing/Dall-E 3 heavily censors both the language you can use (from some naughty word list) and the images produced (if an AI thinks an image it actually produced has “harmful” content, it won’t show it to you) so you have to be circumspect about how you describe the scene and hope that it doesn’t see anything questionable in the result. (I don’t know how the Big Bird images were made).
This is the image closest to what I was aiming for in my Birth of Earth prompts. It is mostly what I wanted, except the Earth is much too big.
I largely agree with Darren_Garrison about the recommendations. Bing’s Dall-E based engine is free and renders some good images; good enough to keep you amused while trying out AI art. It also runs off ChatGPT, so it accepts very natural language in a prompt and does the best job of understanding that style of prompting. I’m not a fan of the aggressive filtering or the locked size, and there’s some stuff it doesn’t do very well, but it’s the best online intersection of cost and function.
I have a MidJourney sub, and they used to be far and away the best, but Dall-E and SDXL have pulled even in some regards and passed it in others. It still has the edge in some use cases: it’s very “artistic” and does a good job of understanding and mixing various artists’ styles. It also has a number of additional features, such as inpainting and outpainting, variants, style creators, etc. But its $30/month price tag means you’ll need to find it useful over cheaper/free alternatives.
The “freest” platform is to run Stable Diffusion locally on your own PC. This requires a fairly robust PC and you’ll probably want an Nvidia graphics card (though the latest 7000-series AMD cards aren’t bad from what I understand). Plus some technical knowledge and/or patience to get it set up. But, once you’re running, you have unlimited renders, without filters, on a very flexible platform filled with knobs & sliders, and a whole community of Stable Diffusion models to choose from for whatever you want.
All of the AI image apps I use (Midjourney, Firefly, SDXL, Dall-E, NightCafe) have strong points and weak points. Combining and tweaking often gives you the results you want.
Sometimes I feature my cat Benny in my AI generations:
Oh, and I don’t get “Aloe on a badger”, “Rabbi in soup”, “Ant on a pill”, “Godzilla on King Kong”, “Faun on a peach”, or “Old guy on Groot”. Obviously I’m misidentifying some of the objects, but I’m not sure which ones.
Created after those, we have poodle on a noodle, Spock on a dock, Hedy on a jetty, Carnac in a Sarlacc, tomater on a masturbater, and Jarjar Binks with Lemmiwinks.
A new Carnegie Mellon study calculates that rendering each AI image uses more energy than charging a smartphone, and generates more carbon than driving four miles.
Incredibly frustratingly, the article did not say how much energy we are talking about; it just used vague comparative terms.
Taken at face value, the average phone charger draws about 5 watts and takes 3 hours to charge a phone from empty to full, so let’s say 15 watt-hours. A gaming PC will use that in about… 2 and a half minutes, if my math is right.
Apparently the average cost of power here in CA is 28 cents per kilowatt-hour (vague figure from googling, good enough for this), so 28 cents per 1,000 watt-hours. If my figure of 15 watt-hours is right, that comes out to about half a cent per image.
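For anyone who wants to check that arithmetic, here it is as a few lines of Python. The 5 W charger, 3-hour charge, and 28¢/kWh figures are the rough ones from above; the ~360 W gaming-PC draw is my own assumption, picked to match the 2.5-minute estimate, not a measured number.

```python
# Back-of-the-envelope check of the "AI image = one phone charge" comparison.
CHARGER_WATTS = 5       # rough phone-charger draw
CHARGE_HOURS = 3        # empty-to-full charge time
phone_charge_wh = CHARGER_WATTS * CHARGE_HOURS  # energy per full charge, in Wh

GAMING_PC_WATTS = 360   # assumed draw under load (my guess, not from the study)
minutes_at_pc_draw = phone_charge_wh / GAMING_PC_WATTS * 60

PRICE_PER_KWH = 0.28    # dollars per kWh, rough CA average
cost_per_image_cents = phone_charge_wh / 1000 * PRICE_PER_KWH * 100

print(phone_charge_wh)                    # 15 Wh per charge
print(round(minutes_at_pc_draw, 1))       # ~2.5 minutes of gaming-PC time
print(round(cost_per_image_cents, 2))     # ~0.42 cents per image
```

So the per-image electricity cost lands just under half a cent, in line with the eyeball estimate; the four-miles-of-driving carbon claim is the part that can’t be checked from the article’s numbers alone.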