Today I got fooled by an AI video

The barnacles are cursed.

I heard a stat from a respected source (sorry, I don’t have it on hand) that 50-60% of postings online are AI/bot generated. It shared a message board argument - a raging argument - in which both sides were bots. Can’t be sure this is true, but I certainly think it’s plausible.

Focus on positive reinforcement, consistency, short training sessions, and making it fun for both of you!

At work, watching without sound, it looks pretty amazing.

Kling is pretty cheap, too. On a free account they give you 166 credits a month. It costs 20 credits to create a 5 second video from a still image, 40 credits for 10 seconds. It costs only 5 credits to lipsynch an already made 5 second video, so you could lipsynch 33 clips a month for free. You sign in through Google or Facebook, so if someone happened to have two Gmail addresses and a Facebook account…

If you subscribe, the lowest tier is currently $79.20 for a year or $6.60 for a month and provides 660 credits per month. So you pay 5 cents to lipsynch a 5-second video. (The bigger the package you buy, the less it costs per creation.)
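If it helps, here’s a quick back-of-the-envelope check of that math in Python. The credit costs and tier price are just the numbers quoted above (Kling’s pricing as I saw it, so treat it as a snapshot that could change):

    # Rough cost-per-clip check using the Kling numbers quoted in this post.
    # Pricing is as described above and may have changed since.
    monthly_price_usd = 6.60     # lowest paid tier, billed monthly
    credits_per_month = 660      # credits included in that tier

    cost_per_credit = monthly_price_usd / credits_per_month  # = $0.01 per credit

    lipsync_5s = 5      # credits to lip-sync an existing 5-second clip
    image_to_5s = 20    # credits to generate a 5-second video from a still
    image_to_10s = 40   # credits to generate a 10-second video from a still

    print(f"Lip-sync a 5s clip:     ${lipsync_5s * cost_per_credit:.2f}")
    print(f"5s video from a still:  ${image_to_5s * cost_per_credit:.2f}")
    print(f"10s video from a still: ${image_to_10s * cost_per_credit:.2f}")

    # Free tier: 166 credits a month covers 166 // 5 = 33 lip-sync jobs.
    print("Free-tier lip-syncs per month:", 166 // lipsync_5s)

So the 5-cents-per-lipsync figure falls straight out of $6.60 for 660 credits, i.e. a penny a credit.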

Thank you, kind and helpful sir.

Out of curiosity, in the initial video clip from still image section, did the transition from the initial solemn/reflective affect into big smile come about because you specifically prompted it or was that the AI’s unmediated output?

I ask because there’s something really freaky about the smiley face with the content of the lip-synced dialogue samples.

The prompt was something like “Shy girl breaks into smile and winks”. (Instead of a wink at the end, it gave a blink at the beginning.)

Next up in AI video, Stable Virtual Camera.

Don’t trust anything posted by an account called “Reality Shorts”, which should really be “Unreality Shorts”!

There appear to be at least two similarly named accounts on Facebook, both posting reels featuring AI-generated garbage edited into real footage.

Revisiting Luma Dream Machine’s beta AI for generating audio to match video. I had it make audio tracks for all of my previous image-to-video tests there; here is a reel of a few dozen of them.

It seems like it is using the “feel” of several different languages in creating its gibberish audio, not just English. (I’m assuming it is as much gibberish in all languages as it is in English.) It allows you to describe what kind of audio you want, but I let it decide everything on its own in most of these.

with or without potassium benzoate?

The latest fad for fans of AI-generated videos appears to be laughably bad clips of animals saving random vehicles from certain doom. The majority of replies are along the lines of “God is great!”

How bad are these? The first one I saw featured a school bus which changed from a ~25-year-old Freightliner to a modern International. The change was obvious, given that the bus is seen head-on for the majority of the clip and the manufacturer’s logo changes about ten seconds in.

We were taken in by one yesterday, at least for about thirty seconds.

Mr. Brown owned and loved a Karmann Ghia many years ago. I clicked on a video that said VW was coming out with a new Karmann Ghia. It was a nice retro-looking vehicle, complete with chrome bumpers just like in the early 60s.

It was the bumpers that clued me in. I know that cars had to switch over to impact-absorbing bumpers in the 70s, and the cool chrome ones went extinct. The whole video was one big AI fake, and I’d link to it, but I don’t want the originators to get any more clicks. People in the comments, however, seemed to really believe that there was a new Karmann Ghia.

It’s getting scary.

My mom is really into YouTube now, since we’ve got it set up on her Roku. At first she was pretty much just watching content from channels she subscribed to, but of course YouTube is just a big suggestion algorithm, so there’s a ton of recommended videos.

We started watching a video about “15 things Costco employees say you should never buy at Costco” and it was clearly an AI video. The voiceover was much better and less robot-y than I’m used to when it comes to automated videos, but within a few minutes they showed a package of food and the text on it was all nonsense, so I knew. Then, as the video went on, none of the footage was actually from Costco.

But it actually did seem like good advice on what not to buy from Costco, at least in the five minutes we watched. I’m sure the info was carefully scraped and compiled from other sources. I’m also pretty sure it would have gotten less informative as it went along, just as the images got less specific.

I talked to my mom about AI videos and why this particular one was AI and what to look out for. Suggested maybe she stick to her subscribed channels. It’s not like this particular video was going to do her any harm, but I feel like if we make the choice to watch them they just get more credence. And eventually she will start getting more AI stuff that truly does her harm.

I think our kids and our parents are now being babysat by misinformation robots. We are truly in an Idiocracy.

Well I would say there’s no shame in being taken in by these things. Of course I would say this, as the OP…
But the point is, the number of “tells” is going down all the time and soon will be basically nil.

So the focus should really be more on what we do with the information we get from a video, rather than depending on spotting artificial videos. If something is surprising to us, or makes us angry at a politician or whatever, do we have the voice that tells us “How can I double-check that this was a real event?”

If there was ever a time when “seeing is believing” was always rational, it isn’t now.

Oh absolutely, 100% no shame in being taken in by it! Or by advertising or anything. The science and manipulation tactics behind it all are far more sophisticated than the average person’s ability to spot them.

The crappy part is that the same technology that’s showing us barnacles and Costco hacks is also being used to manipulate voters and sow hate, and that’s such a bummer.

Or something that isn’t surprising, that plays on your prejudices. Selected by an algorithm that knows your prejudices.

The unreliability of photos and video being taken for granted in the future has long been a staple plot point in well-written science fiction. Video being inadmissible in court cases, that type of thing. (In badly written science fiction, the fakery is a surprise, an example being people being impressed by Wesley’s Picard voice sampler in the early ST:TNG episode The Naked Now.) This just happens to be one of the few cases where the promised (or warned-of) SF future actually catches up with the present.

Yep. The first example I thought of was The Running Man, where video is used to:

  1. Convince the public that Arnie was responsible for the massacre
  2. That Arnie lost his fight to Jesse Ventura
  3. That previous winners were celebrating on a tropical island, instead of taking a more…permanent rest.

What’s interesting of course is that the audience is convinced of the truth with…more video…the notion that the audience might not know which video to trust isn’t considered.

(As you can tell by me using the actors’ names, I haven’t read the book…perhaps in the book the video is accompanied by confirming details which wouldn’t work so well on the big screen?)

It has been a very long time since I’ve read the book or seen the movie, but as I recall any similarity between the two was purely accidental.