So, apparently AI can create amazing optical illusions

Got any proof or cites that establish the veracity of this statement? Not conjecture, not opinions, not the plot of a movie (it is the Cafe Society), but actual studies with validated results. If not, your angst is borderlining on setting your hair on fire and running around.

Well, if you had bothered quoting the entire sentence instead of chopping it up to remove context, you’d see that what I said was:

Since this is obviously stated as speculation, no I do not have “actual studies with validated results” based upon a future capability, although this is an obvious extension of social media algorithms (which are effectively machine learning even though they are rarely referred to as “AI”) that are already being used to select and promote content that stimulates outrage and fuels misinformation. You are free to characterize this as “angst…borderlining on setting your hair on fire and running around,” but it isn’t as if I am a lone voice screaming in the wilderness; there are many experts in generative AI, propaganda theory, and cybersecurity who are warning about the potential of generative AI to manipulate public opinion and impact democracy in numerous ways:

https://ai100.stanford.edu/gathering-strength-gathering-storms-one-hundred-year-study-artificial-intelligence-ai100-2021-1-0#misinformation

Stranger

SEND NUDES, LOVE and OBEY written with color contrasts in pictures is not an optical illusion.

An image which looks like one thing when viewed one way, but something else when viewed in a different way, is absolutely an optical illusion. That includes if the two ways of viewing are close up and far away, and if the two things it looks like are a group of puppies and a pair of words.
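The near/far version of this trick is usually built on spatial frequencies: a blurred (low-pass) image dominates at a distance, while fine detail (high-pass) dominates up close. As a rough sketch of that idea, assuming a tiny grayscale image represented as a plain Python list of lists (no image library):

```python
# Sketch of the "hybrid image" idea behind near/far illusions:
# low spatial frequencies dominate from far away, high frequencies up close.

def box_blur(img, radius=1):
    """Low-pass filter: average each pixel with its neighbors (edges clamped)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)
                    nx = min(max(x + dx, 0), w - 1)
                    total += img[ny][nx]
                    count += 1
            out[y][x] = total / count
    return out

def hybrid(img_far, img_near, radius=1):
    """Combine the low frequencies of one image with the high
    frequencies of another: far viewers see img_far, close viewers
    see img_near."""
    low = box_blur(img_far, radius)
    near_blur = box_blur(img_near, radius)
    h, w = len(img_far), len(img_far[0])
    return [[low[y][x] + (img_near[y][x] - near_blur[y][x])
             for x in range(w)] for y in range(h)]
```

This is only a toy illustration of the principle; real examples use proper Gaussian filtering on full-resolution images, and the puppies-versus-words posters in question were presumably made with diffusion models rather than explicit filtering.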

I never underestimate the potential for shared distress when it comes to new technologies. Fear of the automobile was quite prevalent in the early 20th century.

The issue isn’t that this is just a novel technology; it is something that has the capacity to shape the way we interpret the world, amplify misinformation and conspiranoia, create divisions of knowledge and belief, displace critical thinking and studied knowledge in favor of easily digested ‘factoids’ of questionable veracity, and ultimately to optimize propaganda and manipulation of public discourse in ways that even the best ‘Mad Men’ and PR flacks could only dream about. Chatbots are literally deception machines, designed to make you believe that there is a human-like thinking intelligence behind what is essentially a stochastic language pattern generator, and image-generating AI hijacks the one sense most people assume they can trust absolutely, directly manipulating visual perception.

This isn’t just a thing that replaces horses and puts buggy-whip manufacturers out of business; this is a machine for “manufacturing consent” that even Lippmann would find abhorrent, and unlike the media it doesn’t take a large organization or vast resources to deploy and control it. That these capabilities can be used maliciously isn’t even in question; we already know that they can. The question is just how fast they will be adopted by bad actors and whether there are any effective countermeasures that can be applied, because it is clear that legislation and regulation, even if they could be enacted in a timely fashion, are inadequate to address the problems that these capabilities pose.

Stranger

The effective countermeasure is called critical thinking and collaboration with other verified sources. That is, until said verified sources become the target of ideological capture and thus lose credibility. The technological genie is out of the bottle, and every actor with any incentive to exacerbate division in their rivals’ societies will do so.

The linked articles are about things like AI generated text & messaging and AI rendered deepfakes. It’s like I said earlier: pictures of kittens with “hidden” messages don’t even ping on the list of realistic AI issues. They don’t even rank when we’re specifically talking about AI generated art. You don’t need to sneak a word or image into another still image to create useful propaganda. In fact, there are other well-tested and much more effective ways to get someone to buy something, think something or hate someone. We can sneak words and images into photos now; we just don’t, because it’s a waste of time that doesn’t accomplish anything. Heck, there’s nothing stopping you from staging seven people like the OBEY photo even without special artistic post-processing trickery.

There’s plenty to be wary of when it comes to AI. The stuff in the OP, even as a ‘proof-of-concept’ just ain’t it. Worry about things that matter.

I think I like the subliminal gay sex. Has an appeal to me for some reason. And the cats are cute too.

How is that different from what Fox News has been doing for years?

At the point I saw this Facebook post, the representatives of 135 thousand minutes had reacted to it.

That’s because most of the descriptions of how to see them were rubbish…it took me a long time and then when I saw it, it was like “Ohhh, is that all it was?!”. I put it in a spoiler as it’s off-topic:

How to see magic eye
  1. Hold an object, let’s say an apple, about a foot in front of you.
  2. Now look at something further away e.g. a TV screen
  3. Notice that when you aren’t focusing on the apple, there are “two” apples (because your left and right eyes are not converging on the apple). Do this a few times; look at the apple, look at the far away object, notice what is happening to the apple.
  4. Now try to do this without looking at any other specific object i.e. look at the apple, and manually make it “split” into two images. If you can do this, you’re ready.
  5. Look at a magic eye picture e.g. this one, notice that there are repeating elements in the image, like wallpaper. In this example you can notice regularly spaced yellow rings.
  6. You need to do a “split” like you did in step 4, so you’re seeing two magic eye pictures. Then you need to make those two pictures overlap such that the repeating elements overlap. (NB: to do this comfortably you may want to make the image on your computer or phone smaller). As soon as you do this, the 3d shape should pop out.
    Good luck!
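The repeating-wallpaper trick described above is what makes autostereograms work: the image repeats at a fixed horizontal period, and shrinking that period slightly in one region makes it appear closer once your eyes fuse the repeats. A minimal random-dot sketch of the standard algorithm, assuming a depth map of small non-negative integers (everything here is illustrative, not how any particular Magic Eye image was produced):

```python
import random

def autostereogram(depth, sep=8):
    """Random-dot autostereogram from a depth map (0 = background).
    Each pixel is constrained to equal the pixel `sep - depth` columns
    to its left; a larger depth value shortens the repeat period, so
    that region appears to float in front when the repeats are fused."""
    h, w = len(depth), len(depth[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            shift = depth[y][x]        # assumed small relative to sep
            link = x - (sep - shift)   # pixel this one must match
            if link < 0:
                img[y][x] = random.randint(0, 1)  # seed the pattern
            else:
                img[y][x] = img[y][link]
    return img
```

For example, a depth map that is 0 everywhere except a block of 2s in the middle would produce a square that pops out of the page. Real generators handle hidden-surface effects and use textured patterns instead of random dots, but the repeat-period idea is the same.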

I wonder if this poster was inspired by this hack? (It probably wasn’t created with it, though.)

Artists have been pulling tricks like that since long before AI, or even computers, existed. The medium might be getting a surge in popularity, thanks to AI, but that Fallout image could just as easily have existed without that. Especially since a skull isn’t a particularly difficult image to get.

For instance, 1892.

I got my degree in psychology, and the idea of “subliminal messages” was bogus then; if there are any credible studies in peer-reviewed experimental psychology journals, I would like to hear about them.

The studies then showed deep flaws in the methodology, and I really doubt that has changed.

The term subliminal comes from the term limen, which is the curve formed when you map a sensory input such as sound against whether the subject perceives the sound. At the midpoint of the curve, you are equally likely to get a correct positive or negative response, or a false negative or a false positive. Have you ever been in aquit hous and think you heard something but it was very quiet? That was a false negative subliminal perception.
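That stimulus-versus-detection curve is conventionally modeled as a psychometric function, often a logistic, with the threshold (limen) defined as the intensity where detection probability is 50%. A minimal sketch, where the slope parameter is an illustrative assumption:

```python
import math

def detection_probability(intensity, threshold, slope=1.0):
    """Logistic psychometric function: probability that a stimulus of
    the given intensity is consciously detected. At intensity equal to
    the threshold (the classical limen) the probability is exactly 0.5,
    where correct and incorrect responses are equally likely."""
    return 1.0 / (1.0 + math.exp(-slope * (intensity - threshold)))
```

So a stimulus well above threshold is detected almost every time, one well below almost never, and "subliminal" strictly refers to the region below that 50% point.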

It has everything to do with the first level of perception, and nothing to do with all the post-processing our brain does to understand what our senses are telling us. That our brain would pick up text vaguely embedded in an image while it is already busy pattern matching everything else strikes me as being about as plausible as vaccines causing autism.

Do you not mean a false positive? (Also, not “aquit hous”.)

<ninja’d>

Oops! Yup

A few days ago I saw a Magic Eye picture that was moving! I could see it only with some difficulty, and by the time I could make it out it was just a blob at the end of the clip, but it was a moving 3D image.