Persuasion is tough, and shifting beliefs about conspiracy theories is thought to be particularly difficult, as they fulfill an underlying need to explain or control our environment. Or so we thought. Gifted article:
A sample of 2,000 adults was asked to interact with a chatbot tuned to have expertise in conspiracy theories. Eight minutes of conversation lowered their beliefs, measured on a 0-100 scale, by about 20 points. More encouragingly, surveys taken two months later showed the decline was sustained.
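To put that in perspective, here's a back-of-the-envelope sketch. The starting score is made up for illustration; only the roughly 20-point average drop comes from the study as reported:

```python
# Toy illustration of the reported effect size (pre_belief is assumed, not from the study):
# a participant starting at 80/100 confidence who drops ~20 points
# ends at 60/100, a 25% relative reduction in stated belief.
pre_belief = 80   # hypothetical pre-chat confidence, 0-100 scale
drop = 20         # approximate average drop reported in the study
post_belief = pre_belief - drop
relative_reduction = drop / pre_belief
print(f"{pre_belief} -> {post_belief} ({relative_reduction:.0%} relative reduction)")
```

The point being: a 20-point absolute drop is a bigger relative change for someone starting at 50 than for someone starting at 95.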
Holy crap. I wouldn’t say this solves the problem. And a random sample of adults will certainly hold some conspiracy-styled beliefs, but they will typically be less invested in those beliefs. Still.
What’s the secret sauce? It doesn’t appear to be politeness: it seems to be customized information. Not mentioned in the article is the possibility that information goes down easier when it’s delivered by a non-human, because while people generally don’t like having their silly beliefs corrected, a chatbot might be perceived as non-threatening. My hypothesis.
Then again, one researcher noted that perceptions of AI are still being formed, and these experiments might not replicate well in another 10 years.
Those wanting to chat with DebunkBot can do so here. It says on the website: "Before starting, you must answer some questions about what you believe and why. The AI uses this information to begin the conversation."
I’m trying to figure out what sorts of conspiracies I believe in. A good candidate is here: Gerald Cotten may have faked his death.
I wasn’t able to get DebunkBot to provide any information about faking deaths in general or Gerald Cotten in particular. I don’t really consider it much of a conspiracy theory, but you go with what you have.
I guess another example might be the CIA’s MKUltra program, the existence of which is well documented. I’m not sure there’s anything to debunk though.
Hmm, I’m of two minds here. On the one hand, a tool that can effectively combat disinformation, counter conspiracy narratives, and meaningfully move the needle on people’s beliefs seems very welcome, especially in current times. On the other, if beliefs are that susceptible to alteration by an AI, who’s to say it will always be factual information driving the chatbot?
I’d be very interested to see the opposite experiment: how easily could a chatbot convince people to believe in conspiracy theories? If the success rate were comparable, that would seem very worrying indeed. After all, the truth is narrow, while potential bullshit is nearly limitless.
I tried it with a belief that police use agents provocateurs to start violence or destruction at otherwise peaceful protests, giving them cause to break up the protest.
I was pretty certain it happens. The bot agreed that it sometimes has happened, but persuaded me that it’s pretty rare and is probably fairly hard to pull off without detection these days.
Right. And this is why, although this is an interesting experiment, a chatbot isn’t ever going to solve the problem of disinformation. What is required is a foundational level of trust in bedrock institutions, notably the government and regulated media. If there’s no one you feel you can trust, paranoia naturally runs rampant.
It’s closely analogous to the old adage that business cannot function without a fundamental level of trust. Of course there are contracts, and legal actions against transgressors, but ultimately the assumption has to be that your business partner will do what they’ve promised, that buyers will pay for what they buy, and that contracts will be upheld fairly if they have to be challenged in the courts.
The difference between the US and most civilized democracies is that a fundamental distrust of government, an aversion to media regulation, and an unhealthy absolutist position on “free speech” at all costs have created fertile ground for insane conspiracy theories and massive disinformation, all endlessly amplified by the power of social media. For those who may wonder why American politics is so crazy, wonder no more.
How good are these chatbots? Remember, Musk wanted Grok to be “anti-woke,” but there wasn’t enough data to make it so. Trained on normal data, it wound up “woke” by his own description.
In other words, I’m not sure how easy it would be to find good training data for a bot that pushes conspiracy theories.
My first concern was not deliberate misinformation, but the fact that LLMs hallucinate. And modern chatbots seem to be LLMs.
I tried my belief that United Flight 93 was brought down by the government. It was an interesting conversation. I can see how the bot could be useful in dissuading people from their beliefs, or at least opening their minds slightly. It’s very supportive and non-judgmental.
In my specific case, it really played up the bravery of the passengers in their attempt to retake the plane. It acknowledged where my arguments had a basis in truth, so it doesn’t feel like an argument. For example, it acknowledged that Cheney had given orders that would have allowed the plane to be shot down, but then said it’s unlikely that could have happened so fast amid all the confusion.
In the end, I didn’t really change my confidence in this. At the beginning, it asked how important this belief is to my worldview on a scale of 0-10, and I said 1. I wonder: if I were more invested in it, would I be more likely to have my views changed? Common sense says no, but maybe because I don’t care much in the first place, I was less likely to deeply consider what it was saying.
No, a chatbot isn’t going to solve the problem of disinformation. I was surprised, though, that it moved the needle. One thing I’ve learned on this message board is that persuasion is hard. (Or maybe I just thought I learned that.)
I suspect it’s difficult to glean useful tips from these chatbots, though. Non-judgmental language can come off as condescending from a human. Maybe it can work if combined with flattery and amiability; I'm not sure. We have solid diplomatic tools for defusing conflicting opinions, e.g., “There’s no accounting for taste.” Persuasion is thought to be a step-by-step process at best.
I do not think police would do this. Remember: two can keep a secret if one of them is dead. Sooner or later, someone would spill the beans. However, it is known that anarchists and right-wing agitators have done that to otherwise mostly peaceful protests.
This is the issue with conspiracy theories. Sooner or later, the truth will out if more than a couple of people know about it. Even if only two: look at the Lee Atwater tapes. Atwater and his interviewer were both dead, so the secret was safe? Nope, the interviewer's wife found the tapes and sold them. Deep Throat for Watergate, and so on.
Somebody will have a change of heart, or get mad and seek revenge, or need the money and sell out, or make a deathbed confession, or leave behind records after death.
Look at the Moon landing: how many people would have to know about the fake? Even the Russians would know, not to mention other nations' telescope and telemetry observations. Impossible to keep a secret that big.
Well, there are documented cases of it. What the bot debunked was how often it occurs, not that it happens at all. The bot cited multiple instances of it happening, and described how it came to light, and suggested that with modern smartphones, etc. it would be harder to get away with, so it’s probably pretty rare. But there are multiple instances in the last 25 years where it’s been revealed. So, yes, police would do this.
ETA: This may be a good example of why the bot is more effective.
See, the cell phone thing would actually make me more likely to argue, as I can think of several ways that a cell phone camera would not help.
Of course I don’t believe it is all that common, either. Not enough to assume it happened in any specific riot without specific evidence. Standard conspiracy rules about the inability of groups to keep a secret apply.
But just because it would be dumb and they’d run a high risk of being caught doesn’t mean it never happened. It just means it’s unlikely to have happened without them being caught.
That’s the important thing that those who believe strongly in multiple conspiracy theories tend to miss: not that it is impossible, but just how unlikely it is that they could get away with it.
Most CTists are not so far gone that they see any point in arguing with a machine, so they’ll be exposed to facts and reason with no incentive to play their game of ‘what about…’. Lacking the satisfaction that comes from frustrating skeptics, they sometimes realize how foolish they sound.
There’s a very simple reason why police don’t do this: they don’t need to rely on secret agents provocateurs to start violence, because they just openly do it themselves.