A Valley girl, a material girl, a cover girl, an American girl, a manic pixie dream girl, and a denim-hating girl.
The denim-hating girl is still wearing denim.
Hence her expression.
Play the home game and match each description with the MJ-generated lady.
Left to Right each row:
American Girl || Manic Pixie Dream Girl || Valley Girl
Material Girl || Denim-Hating Girl || Cover Girl
MJ really lost the plot on the denim-hating girl
I’ve been playing around with different Japanese fashion types
And character types
(Drawn from a list that I had ChatGPT draw up)
Interesting that you finally rolled a black girl. [“Rolled” as in dice, for clarity]
I was 1 for 6 with that quiz. Not sure whether that reflects badly on me, or on the AI.
In one sense the AI did a good job of not going over the top with e.g. “COVER GIRL!!1!!” instead of “cover girl”. In the opposing sense, if the result is too generic, the point is lost.
Contrast with how @Darren_Garrison’s first denim-hater has a really massively hugely unhappy face versus the small objective horror of being forced to wear denim pants. Clearly that AI likes to amp up the adjectives, whatever they are.
Thank you all!
If I have a meta-meta issue to explore, it’s somewhere around “What can we indirectly conclude about the training data specifically, and the internet in general, by probing for, then observing, the cultural biases of the AIs’ outputs?” As noted here:
Which suggests a different line of inquiry. In general, the women (and to a lesser degree the men) the AIs generate are attractive unless prompted to be otherwise. Of the women in the images I’ve seen, of any age, there are rather few I would not want to ask out were they alive.
What happens with the various words leading towards “morbidly obese”? Does it do an accurate job rendering the “plump”, the “fat”, the “curvy”, etc., or do they come off as cartoonishly inflated exaggerations, like the cite upthread of a future 5025 family eating their human chow while the brainy pets look on?
Some more gothic horror using a different offline technique. It takes some doing to get it to intentionally make imperfect images.
Eight paws?
Also, that’s not a valid Rubik’s cube, since it has two yellow faces.
I was wondering if “American girl” might be influenced by the line of dolls/books, which are fairly diverse.
Well, “Denim-hating” was a trap that could throw off all of your answers. Cover, Material, and Manic Pixie seemed to ping for me. As you said, not screamingly overt, but I get it. American is pretty generic, but Valley missed the mark aside from an ’80s vibe.
Without making a bunch of examples, in my experience it’s easier to make a person chubby, fat, etc. than it is to make a woman unattractive. You can prompt for plain, bland, and other such terms, but there’s a very strong model bias towards “pretty” in most age groups.
Keep in mind for some of these that you’re better off prompting for the description than just a label. If I actually wanted a “denim-hating girl”, I’d ask for something like a teen girl in a silk dress reacting poorly to being handed a pair of denim jeans… and so on from there.
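For anyone scripting this rather than typing into a chat box, here’s a rough sketch of the same “describe, don’t label” idea, using the OpenAI Python client with DALL-E 3 as a stand-in (Midjourney doesn’t have a public API). The prompts and settings are purely illustrative, not what anyone actually used above:

```python
from openai import OpenAI  # pip install openai; needs OPENAI_API_KEY in the environment

client = OpenAI()

# Bare label vs. a described scene -- the second usually steers the model far better.
label_prompt = "denim-hating girl"
described_prompt = (
    "Photo portrait of a teenage girl in a silk dress, grimacing and holding "
    "a pair of denim jeans at arm's length as if she has just been handed them"
)

for prompt in (label_prompt, described_prompt):
    result = client.images.generate(
        model="dall-e-3",   # stand-in model; adjust to whatever generator you're scripting
        prompt=prompt,
        n=1,                # DALL-E 3 only allows one image per request
        size="1024x1024",
    )
    print(f"{prompt!r} -> {result.data[0].url}")
```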
Copilot is so pathetic it refuses to try.
ChatGPT seems willing to run it, but is glitching today, “getting started” on images but never actually generating them. (I tried in two web browsers and the app).
Sora made this.
Fat
Ugly
Midjourney had no complaints about “Photo portrait of fat woman at an arcade” but 75% of them are pretty and #2 suffers more from angle and expression than anything else.
Hopefully it’s obvious that I’m not implying that you can’t be overweight and attractive. I’m just saying that MJ’s natural bias is towards making attractive women even when you move away from conventional media norms.
On a different topic, I recently ran across some images I made more than a year ago using Bing/DALL-E 3, inspired by an old Peanuts strip I’d stumbled on somewhere, and made new versions:
Old:
New:
I kind of like Lincoln drinking through the straw.
They are both famously rather dour personalities, but Lincoln definitely gets the more human treatment here, both old and new, while Beethoven is just an unremitting mask of unhappy. Then again, we can’t see the scoreboard from here; he might’ve been gettin’ smoked.
Agree that Lincoln drinking from the straw is a masterwork. Lots of expression in that face. He’s deep in contemplation of some cosmic truth or matter of great statecraft. Or been up about 24 hours on a bender; hard to say which.
They both seem a bit nonplussed by their hotdogs. But Lincoln is going to enjoy his; Ludvig vill not enjoy hiss dogk!
This morning I told Sora: ‘Phenol Barbie Doll (Phenobarbital pun)’. It returned two images: 1) a Barbie holding a large prescription bottle of pills, with the molecule symbol (a hexagon with a bond to OH at the top) and ‘Phenol Barbie’ in the Barbie font; and 2) Barbie in her box, holding the molecule model, with ‘Phenol Barbie Doll’ on the front of the box.
I was disappointed. I don’t know what I expected, but the images are too lame to save and share.
“The Council of L. Ron Hubbard.”
ChatGPT. It had no problem depicting L. Ron Hubbard but it wouldn’t put in Tom Cruise or John Travolta. It “added” them if I asked for them, but they didn’t really resemble Cruise or Travolta much.
Was that the exact prompt, and the bot figured out on its own that it was a pun on “Elrond”? Or did you have a longer prompt that included that?
The prompt was:
“the council of l. ron hubbard. parody of the council of elrond. l. ron hubbard and a group of prominent scientologists wearing fantasy-type clothing meet around an ornate table.”