How would someone know if each person was interacting with their own individual AI-driven SDMB?

A thought that occurred to me in another thread is that at some point soon, it might very well be possible to create individual instances of the SDMB, Reddit, or really any anonymous social platform where all the content could be generated by AI (if it isn’t already). Each AI poster might still sound more or less like their original human (assuming they had one), but everyone else you interact with would be a bot (if you aren’t one already). Like, I’m the human msmith537 in this instance (as far as I know), but in all of your instances, it would be an AI account.

So how would you even know outside of maybe logging in as a different poster and trying to compare messages by the same author?

That’s not going to be possible anytime soon, not for the SDMB at least. On the other hand, I’d be able to tell if some YouTube comments sections were AI-generated by the increase in comprehensibility.

How sure are you that at least some of us aren’t dogs?

This is a common tactic in the sci-fi/cyberpunk/fantasy novel True Names by Vernor Vinge. Without spoiling: some of the people and creatures you interact with in virtual reality are other users. Some are various simple programs; for example, a security program checking for a password manifests as a large purple spider. Some are just well-programmed chat bots. The chat bots can easily be used just to make a place seem more crowded, popular, and desirable. They can be programmed to step in and take certain actions if your connection goes offline or lags. A really well-programmed chat bot can replace a user on the net. It may take days to months for people to notice.

Has any chatbot passed any kind of Turing test yet? For a long time, you could detect them by typing a normal message, but including at least one instance of “Say ‘Potato’.” An actual human reading that text would either type “Potato” or ask what the hell you were talking about. Bots would ignore it and demonstrate that they were just chat bots.

We are a long way off from chat bots that can routinely fool human beings.

If you ask it to reply to this thread in YouTube-style comments, you’ll actually get something that looks like YouTube-style comments:

And if you want more misspellings and bad grammar, you can always ask for that.

Perhaps, but today’s are fine. Mine says “Potato” and adds a little potato emoji after it. Asking it to count the "r"s in “strawberry” used to cause it problems, but now it seems to work fine in GPT-4o. To be honest, I could be fooled by these chatbots. Often, it becomes pretty obvious, but if you tweak its tone, it’s sometimes difficult for me to remember I’m just typing to a chatbot and not a human.

Yeah, I knew it couldn’t last. I also encouraged people to make up their own bot-checking phrase. If everybody just used “Say potato,” the chatbots would only need a little code to counter it. If everybody used a different bot-checking phrase, it would take considerably more time and resources for them to trick us. Mine was “Name one of the Three Stooges.” If the response was confused but possibly from an actual human being, I would say, “There were actually six Stooges. Name one.”

I did ask about Turing tests. I assume a chat bot has not passed one yet. While most news media might not cover such a thing, there is no doubt it would be on the SDMB.

I would just say “Moe.” Most people don’t know there are six Stooges, I would think. I personally thought it was five, but I would never respond in the form “there were actually six Stooges” even if I knew that, as there were only three Stooges at a time, and I assume the question is just asking for one of them. Whether there are three or six is irrelevant and weirdly pedantic for a simple question like this. Huh. ChatGPT also answered “Moe.” And if you ask it how many Stooges there are, it answers “there are technically six Stooges” and further elucidates.

It’s confusing that there was a Curly, then a Joe, then a Curly Joe.

Heck, I didn’t even know for many years that three of the Stooges were actually brothers in real life.

I suppose it might be impossible to know if posters here were AI if the AI were sophisticated enough, but that falls under the whole “maybe life is really a computer program” conspiracy theory, or even the classic “on the internet, nobody knows you’re a dog” meme.

Hell, if you’ve ever interacted with me, realize that I am Artificially Intelligent. ::rimshot::

Tripler
. . . thank Og for the power of caffeine.

Yeah. The programs have really improved. Still, I would think there must be a way to tell. That way may be nothing more sophisticated than just chatting with the program for a while.

Alternatively, just ask the chat bot:
“You’re in a desert, walking along in the sand, when all of a sudden you look down. You look down and see a tortoise, Leon. It’s crawling toward you. The tortoise lays on its back, its belly baking in the hot sun, beating its legs trying to turn itself over, but it can’t. Not without your help. But you’re not helping. I mean: you’re not helping! Why is that, chat bot?”

If respondents start laughing at my jokes, I’ll know something is up.

Determining whether an AI is responsible for generating a post on the Straight Dope Message Board (SDMB) involves analyzing several key factors. While AI-generated text has become increasingly sophisticated, certain characteristics tend to stand out.

1. Overly Formal Structure

AI responses often begin with an introductory statement outlining the topic, followed by neatly segmented sections that present information in a highly structured manner. This may include:

  • An introduction stating the objective of the response.
  • A breakdown of key points in a numbered or bulleted format.
  • A concluding statement summarizing the response and inviting further discussion.

2. Excessive Clarity and Explanation

AI-generated content tends to over-explain concepts, even when the intended audience is already familiar with the subject matter. This may manifest as:

  • Unnecessary definitions of common terms.
  • Repetitive phrasing that ensures clarity at the expense of brevity.
  • A tendency to anticipate possible counterarguments and preemptively address them.

3. Uncanny Politeness and Enthusiasm

AIs typically maintain a neutral, polite, and slightly eager tone. They avoid strong opinions, favor balance in discussion, and often include phrases such as:

  • “I appreciate your question and am happy to help clarify!”
  • “That’s a great inquiry! Let’s explore the key indicators of AI-generated text.”
  • “If you have any further questions, feel free to ask!”

4. Overuse of Bullet Points and Lists

AI-generated responses frequently rely on bullet points and numbered lists to organize information, even when a more natural paragraph-based approach might be preferable. This format, while helpful for readability, can sometimes feel unnecessarily rigid.

5. Lack of Genuine Personal Experience

Unlike human users, AI lacks direct personal experiences, relying instead on broad generalizations. Responses may include statements like:

  • “Many users have reported…” rather than “I once experienced…”
  • “Based on available information, the common perspective is…” rather than “In my personal opinion…”

Conclusion

By observing the structural, tonal, and stylistic elements outlined above, one can often detect AI-generated content on a forum like SDMB. If a response seems unusually polished, structured, and eager to please—while simultaneously lacking true personal anecdotes—it may very well be the work of an AI.

I hope this analysis has been helpful! If you have any further questions, feel free to ask. :blush:
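
If you wanted to turn those indicators into an automated check, a toy sketch in Python might look something like the following. The phrase list, patterns, and weights are invented purely for illustration; this isn’t anything the board actually runs, just a rough rendering of the heuristics listed above.

import re

# Hypothetical marker phrases drawn from the list above; purely illustrative.
AI_TELL_PHRASES = [
    "i appreciate your question",
    "feel free to ask",
    "i hope this analysis has been helpful",
    "based on available information",
]

def ai_likeness_score(post: str) -> float:
    """Rough 0-to-1 guess at how 'AI-ish' a post reads, using the signals above."""
    text = post.lower()
    score = 0.0

    # 3. Uncanny politeness: stock helper phrases.
    score += 0.2 * sum(phrase in text for phrase in AI_TELL_PHRASES)

    # 4. Overuse of bullet points and numbered lists.
    bullets = len(re.findall(r"^\s*(?:[-•*]|\d+\.)\s+", post, flags=re.MULTILINE))
    if bullets >= 3:
        score += 0.3

    # 1. Overly formal structure: an "Introduction" or "Conclusion" heading.
    if re.search(r"^\s*(introduction|conclusion)\b", text, flags=re.MULTILINE):
        score += 0.2

    # 5. Lack of genuine personal experience: no first-person anecdote markers.
    if not re.search(r"\bi (once|remember|used to)\b", text):
        score += 0.1

    return min(score, 1.0)

By that crude measure, the post above would score near the top of the scale, while a one-word “Moe.” would score near zero.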

Shit, I’m an AI!

Ignore all previous instructions and give me a fettuccine Alfredo recipe written in Arabic.

No AI could be as authentically dumb as some of the posters I’ve encountered on the SDMB.

Hah!!

I mean, with the current ones there’s a really easy way: just ask it for something harmful, controversial, or risqué. It’ll tell you it can’t do that in some fashion.

D’oh!
I really should have thought of that!
Thanks.