I read this OP about wasting an email scammer's time by responding and playing along with the scenario. It made me think of this RAND report on how to fight Russian propaganda: in essence, match kind with kind.
Scammers rely on scale to be profitable. They need to flood the internet with enough scam emails to find the 0.1% of people gullible enough to fall for the scheme and actually go through with it.
Right now, this is very easy for them. They send out 10,000 emails and they get back 5 responses. 4 of those responses will be real and 1 will be someone wasting the scammer’s time. The scammer’s time involvement is sufficiently low - writing an email every few days - that this doesn’t really register. I doubt they even care.
But what happens if, in response to the 10,000 emails they send out, they get back 10,000 answers, each of which looks like it came from a real human, and the scammer has no way of telling which is which? He'll know that 4 are real, but he won't know which ones.
The scammer has to devise a response that he can blast out to all 10,000 of his responders; he can't hand-write each one. He has to maintain a conversation with the mystery four people without being able to identify them, and that's a lot harder. The four will ask questions and try to hold a conversation, and they'll just get back impersonal demands to send money. The fraction of people the scam successfully works on drops from 0.1% to 0.01%, because scamming someone requires building up a relationship, and you can't do that via cut-and-paste responses.
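The economics here can be sketched as a back-of-envelope calculation. This uses the numbers from the post (10,000 emails, 5 responses today, success dropping from 0.1% to 0.01%) plus two figures I've assumed purely for illustration: an average payout per victim and a cost in minutes per hand-written reply.

```python
# Back-of-envelope model of scammer economics. EMAILS_SENT, the response
# counts, and the success rates come from the post; PAYOUT_PER_VICTIM and
# MINUTES_PER_REPLY are assumed figures for illustration only.

EMAILS_SENT = 10_000
PAYOUT_PER_VICTIM = 2_000     # assumed average take per successful scam, USD
MINUTES_PER_REPLY = 10        # assumed time to hand-write one follow-up

def expected_value(responses: int, success_rate: float) -> tuple[float, float]:
    """Return (expected payout in dollars, hours spent answering responses)."""
    victims = EMAILS_SENT * success_rate
    hours = responses * MINUTES_PER_REPLY / 60
    return victims * PAYOUT_PER_VICTIM, hours

# Today: 5 responses come back, and 0.1% of targets eventually pay out.
payout_now, hours_now = expected_value(responses=5, success_rate=0.001)

# Flooded: 10,000 bot responses, relationship-building becomes impossible,
# and (per the post's estimate) the success rate falls to 0.01%.
payout_flooded, hours_flooded = expected_value(responses=10_000, success_rate=0.0001)

print(payout_now, hours_now)          # large payout for under an hour of replies
print(payout_flooded, hours_flooded)  # a tenth of the payout for ~1,700 hours
```

Under these assumptions the scam goes from comfortably profitable to paying a fraction of minimum wage, which is the whole point: the flood attacks the scammer's margins, not any individual victim.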
Resolved: The Federal government should fund a program to develop AI / chatbot technology that detects and responds to scammers in bulk.
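A minimal sketch of what such a responder might look like, using only the standard library. The keyword list, templates, and threshold are all made up for illustration; a real system would presumably use a trained classifier and a language model rather than keyword matching and canned templates.

```python
import random

# Toy sketch of the proposed bulk responder: score an incoming email with a
# crude keyword count, and if it looks like a scam, generate a plausible
# stalling reply. All keywords, templates, and the threshold are invented
# for illustration only.

SCAM_KEYWORDS = {"inheritance", "wire transfer", "prince", "lottery",
                 "beneficiary", "urgent", "western union"}

STALL_TEMPLATES = [
    "Dear {sender}, this sounds wonderful. Before I proceed, could you "
    "explain the fees again? My nephew usually handles my banking.",
    "Hello {sender}, I tried to visit the bank but it was closed. "
    "Can you send the instructions one more time?",
    "Dear {sender}, I am very interested, but my card was declined. "
    "Is there another way to send the money?",
]

def looks_like_scam(body: str, threshold: int = 2) -> bool:
    """Count keyword hits; at or above the threshold, treat it as a scam."""
    text = body.lower()
    return sum(kw in text for kw in SCAM_KEYWORDS) >= threshold

def stalling_reply(sender: str) -> str:
    """Pick a random time-wasting template so the replies aren't identical."""
    return random.choice(STALL_TEMPLATES).format(sender=sender)

email = ("URGENT: I am a prince and you are the sole beneficiary of a "
         "$10,000,000 inheritance. Reply to arrange the wire transfer.")

if looks_like_scam(email):
    print(stalling_reply("Mr. Prince"))
```

The randomized templates matter: if every bot reply were identical, the scammer could filter them out and the four real marks would stand out again. Varying the replies is what keeps the real responses indistinguishable from the flood.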