The latest video from this channel: Current US Politics on YouTube popped up in my feed today and I clicked on it, half listening to it while doing something else. Eventually I realized that the info seemed pretty farfetched, so I started paying closer attention and actually took a look at the tab.
On the surface it looks legitimate enough. I recognize the guy in the video, Glenn Kirschner; he’s a lawyer who posts videos regularly, many of which I’ve watched over the past few years. But I notice that there are no comments under the video; they’re turned off, which is unusual and not something I’ve noticed on his previous videos. I click through to the channel’s home page and there are only four videos, the earliest posted two days ago. It’s not Kirschner’s primary channel, and while it’s not unusual for content creators to have side channels, there’s no reference to this one on his main page.
Some quick googling on the topic of the video only turns up links debunking the information, so I’ve pretty well established it’s a fake. But it’s disturbing. The video already had over 180,000 views and the channel over 10,000 subscribers. I know there’s a shocking amount of utter bullshit posted on YouTube, but this is the first time I’ve seen an established content creator’s identity lifted wholesale to disseminate it. Is this just some new hellish AI trend that I’ve missed? How is it still up after two days? I reported the channel for impersonation; surely someone must have done so on the earlier videos as well. As a shared reality becomes increasingly a thing of the past, I find things like this, which intentionally undermine it, frustrating/enraging/disheartening.
Do you have actual views of all those comments, or are they not accessible? It might be that he bought a high-volume clickbait channel, stripped out the comments, and renamed it.
I’ve been seeing similar AI fakes as well. These are very disturbing because they pose as someone you trust reporting on fake events you’d be interested in. I saw one where Barron Trump appeared in front of a judge and acted snotty, and the judge dressed him down. Very disturbing, yes.
It’s probably just a deepfake… report it and move on. Sadly, YouTube isn’t some carefully vetted truth service, it’s just a big commercial cesspool of millions (billions?) of random videos. They make money from your views even if it’s fake. They really have very little incentive to properly vet or curate their offerings, and it’s only going to get worse as AI gets better. Bots can generate videos by the thousands and watch and subscribe to them, and real humans will become rarer and rarer there (and everywhere).
Might be an opportunity to mention Nebula, if you want a curated alternative:
It’s a competing streaming service made by some former YouTubers, and seems relatively bot-free. It is a paid service, though, with far fewer channels and videos, but what’s there is relatively high quality and not full of fake spam.
IME, the best way to avoid AI slop on YouTube is to curate your feed. Subscribe to producers whose stuff you like. If you see a video on your homepage or sidebar that looks AI or clickbaity, click the options and select “Not Interested”, and when it asks why select “I don’t like this video”. You can also select “Don’t Recommend This Channel”.
I’ve seen Nebula pitched by a couple of creators I like and respect, but haven’t checked it out. Unfortunately I’m frugal to a fault. I realize that’s part of the problem: wanting something at no cost, then being frustrated when the ways free services are monetized turn out to be problematic, likely exploitative, and often productive of a significantly inferior product.
Thank you for the offer of a guest pass, but it’s probably better given to someone more likely to support the platform going forward.
I totally get it. And that’s not on you… it was the ad-supported model that mostly kinda sorta worked in the early days, the first decade or two of the popular internet. Then, like anything else, once enough greedy people got wind of it, it was eventually taken over by click farms, bots, AI spam, etc.
It’s gotten so bad for me that I even stopped using Google search, and started paying for an independent search engine instead (Kagi).
In the first decade, the challenge was putting things online. The second decade’s challenge was finding and indexing them. The third’s was separating the content from the chaff. And now with rampant AI slop, the signal-to-noise ratio has gotten dramatically worse, and there’s no good way to detect real vs. fake content anymore… I don’t see a way out of it except for careful manual curation. Maybe it’s time to bring back early Yahoo, when every listing was manually added by a person.
For what it’s worth, I think they regenerate every so often, so no problem at all if you want one.
But no worries if you don’t want it. I don’t use Nebula much either. I subscribed for a year but I won’t renew once it expires. I just like YouTube’s features better. I do pay for YouTube Premium though, and carefully manage my subscriptions to only “good” creators and channels.
Looks like it finally got banned. (Was still working yesterday).
The moderators will always be at a disadvantage here. Ban one and a hundred more come back the next day. Bots can scrape the latest news headlines and prompt an AI to generate controversial deepfake videos about them, from every perspective, with all sorts of twisted facts, all to increase view count. Government propagandists do it too — both foreign and our own administration.
Personalized outrage bubbles are the (sad) future. The algorithms have gotten too good at knowing exactly what makes everybody tick… and what pushes their buttons. Everyone gets their own custom reality bubble.
I use Kagi too! It’s great unless you’re looking for something local and commercial. Then I usually default to Google.
I recently joined Nebula. It has a lot less content than YouTube, but there’s some cool stuff there: Lindsay Ellis’ videos, Philosophy Tube, and something called Abolish Everything! It’s not really going to replace YouTube for us, but it’s worth checking out.
I’ve never encountered many problems with YouTube, I guess because I pick subjects that are hard to radicalize, like stand-up comedy, cross-stitch, and video essays about movies.
Glad to see it didn’t stay up overly long. I don’t know the details or payout schedule for YouTube monetization, so I wonder if the channel received any of the ad revenue from the time it was up, or if it all just stays in YouTube’s coffers.
There’s an interesting tension between the worry about information/outrage bubbles, a current and worsening problem with content, and the desire for a curated content bubble as a possible solution. My thoughts have drifted that way recently as well, just because it’s hard if not impossible for an individual to vet all of the information they’re exposed to. At the same time, it would seem to exacerbate the lack of a shared “reality” among the populace at large.
Philosophically, I might argue that a true “shared reality” was always impossible, given our social structures and biology. Our brains aren’t really adapted for meticulous fact-finding and will take knowledge shortcuts wherever they can, and “trust the experts” has always been a part of our cultures, whether that’s kings and priests or scientists who align with your particular values. All knowledge is tribal.
Algorithmic content just amplifies that tendency; curation by humans you trust at least gives you back a little bit of say in whom to trust.
Even that brief period in the 90s and early 2000s when we naively thought the Internet would connect and magically enlighten everyone… that was really only a small set of relatively well-off, well-educated, primarily Western, post-Enlightenment users. That sort of rigorous, empiricist, scientific worldview only really applies to a tiny, tiny fraction of our species. Once the Internet hit the rest of the world, bubbles of truth-by-decree (by religion, government, or whatever) immediately formed and locked people in, long before AI slop was a thing.
[His most lucrative AI slop channel] is a “Boring History” channel built around six-hour “history to sleep to” documentaries, narrated by what sounds like a languid David Attenborough.
A languid David Attenborough? That sounds fantastic! Subscribed! AI slop would actually make for fantastic lullabies…