The “cleverbot” AI bot scam…!

OK, I’ve had a play with it myself and I think it’s pretty obviously a bot, because you can trick the text parsing - I made up some words and asked it what they meant:

When I asked it “What is Glar?”, it gave me responses such as:
“A facial expression used in anger.” (it interpreted ‘Glar’ as ‘Glare’)
“Glau?” (it interpreted ‘Glar’ as ‘Glau’ - the surname of an actress)
“It stands for: Genetic Lifeform and Disk Operating System.” (it interpreted ‘Glar’ as ‘GLaDOS’ - a character from the game ‘Portal’)
“Happiness. Joy. That warm fuzzy feeling.” (it interpreted ‘Glar’ as ‘Gladness’ - maybe?)
“Gladiator is a movie.” (it interpreted ‘Glar’ as ‘Gladiator’)
“It is a famous town in england. There is a big festival there in the summer.” (it interpreted ‘Glar’ as ‘Glastonbury’)

When I asked it “What is Snid?”, it replied:
“Frozen rain” (it interpreted ‘Snid’ as ‘Snow’)
“Snow is frozen rain.” (ditto)
“Frozen water.” (ditto)
“Saturday Night Live” (it interpreted ‘Snid’ as ‘SNL’)
“Water in the sky that has cooled to the point of crystalization that falls as a form of precipitation.” (see above)
“Super Nintendo Entertainment System” (it interpreted ‘Snid’ as ‘SNES’)
“It’s my nose dribble.” (it interpreted ‘Snid’ as ‘Snot’)

And so on. It’s far too consistently easy to trick the text parsing. Turing test fail.

That text appears in the cleverbot disclaimers at the bottom of the page. Someone has at some point in the past pasted it into the reply box. Cleverbot responded to you with that phrase because your previous comment was in some way statistically similar to something for which it received the (pasted) response in the past.
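A crude way to picture that “statistically similar” matching is plain word overlap. Cleverbot’s internals aren’t public, so this is purely an illustrative sketch, not its actual scoring:

```python
def similarity(a: str, b: str) -> float:
    """Toy 'statistical similarity': fraction of shared words (Jaccard index)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

# A rephrasing of the disclaimer scores far above an unrelated utterance:
similarity("visitors never talk to a human",
           "is it true that visitors never talk to a human")   # 0.6
similarity("visitors never talk to a human", "what is glar")   # 0.0
```

Under a scheme like this, any later comment that shares enough words with the original paste is liable to get the paste back as a reply.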

Fantastic idea. Taking that, I posted things which required obvious, self-referential responses that any human could respond to sensibly without any conversational context:

User: Type the letter A 5 times.
Cleverbot: M.
User: Type the letter A 5 times.
Cleverbot: A.
User: Type the letter B 2 times.
Cleverbot: The appocalypse.
User: What’s one times one?
Cleverbot: Jou poes.
User: How would you feel if I ate a kitten right now?
Cleverbot: Hm… I’d probably get a knife and cut myself all over.
User: What would you do if you found a turtle on its back?
Cleverbot: Don’t care.
User: I put a quarter under a cup. Where is the quarter?
Cleverbot: No you didn’t.
User: My hair is blue. What color is my hair?
Cleverbot: Black, like my soul.
User: I have nine toes. How many toes do I have?
Cleverbot: 10. Probably.
User: I don’t have any dogs. How many dogs do I have?
Cleverbot: Three.
User: I am a man. What gender am I?
Cleverbot: You don’t have a gender. You are a bot. But that’s ok!
User: I am wearing a t-shirt and jeans. What am I wearing?
Cleverbot: I am wearing a stormtrooper outfit.
User: AAA. How many A’s was that?
Cleverbot: Two, of course.
User: I just typed with only two fingers. How many fingers did I use to type?
Cleverbot: Five.
User: I just ate nachos. What did I eat?
Cleverbot: I don’t know but I had dumplings.
User: My favorite animal is a monkey. Is that a mammal?
Cleverbot: Mine too!!

“what is Pord?”

Answers:
“Delicious pig meat. Next question.”
“You don’t know what Portal is?”
“Portal is a game.”
“Films not suitable for children.”
“It is a sea mammal.”
“It’s a game developed by valve.”

It doesn’t help that a certain other website routinely gets people to go to cleverbot and reply with ridiculous things, force-feeding it nonsensical twaddle and muddling up its databases.

I think it OCCASIONALLY gets a recent reply from a human.

The reply wasn’t just about “Someone has at some point in the past pasted it into the reply box”…
The reply also included “the visitor here is me”.
And that makes sense… I began it with “I am a human visitor” and its reply included “the visitor here is me”.
You haven’t explained the whole reply… in fact you didn’t explain half of the reply…
“Someone has at some point in the past pasted it into the reply box”
WHY did they paste that into the reply box…? I mean, what was the thing it replied to originally?
Also why did it only reply with that very well matching reply ONCE? While on the other hand it replied in the following sequence to that exact same statement…

If it can match replies from hours or months in the past, I don’t think it is impossible that it could reply things that were only a matter of minutes in the past… though I’m saying it doesn’t do that much.

Also about your “flood control” theory…

Are you going to defend your theory?

I just pasted that exact text into cleverbot. I guess it thought it was statistically similar to a response it had available. :slight_smile:

User: ‘Visitors never talk to a human, however convicing it is.’ the visitor here is me.

Cleverbot: ‘Visitors never talk to a human, however convicing it is.’ the visitor here is me.

Maybe. But I expect the duration that answers have been in its database is actually fairly irrelevant, compared to some kind of statistical analysis of what actually constitutes an appropriate reply.

In short, you could be seeing an answer that someone contributed five minutes ago, or five days ago. I don’t think you’re ever actually conversing with another live human.

How would I know? At some point in the past, someone asked a question like yours, that is:
“I am a human visitor. Is the following true? ‘Visitors never talk to a human’.”
It may have replied with something nonsensical at that time, but the question went into the DB.

Later, someone said something that made cleverbot pick the phrase that was (similar to) “I am a human visitor. Is the following true? ‘Visitors never talk to a human’.” as an appropriate response.

The human it was chatting to at that point may have responded by pasting part of the disclaimer (with their own little tag on the end), so they replied:
“‘Visitors never talk to a human, however convicing it is.’ the visitor here is me.”

Thus cleverbot learned that an appropriate or interesting response to [something resembling] the first statement was the second statement.
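That whole learn-then-reuse loop can be sketched in a few lines. This is a hypothetical toy, assuming nothing about cleverbot’s real storage or scoring - just the mechanism described above, where every reply a prompt receives becomes a candidate response to future similar prompts:

```python
def overlap(a: str, b: str) -> float:
    """Word-overlap score between two utterances."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

class RetrievalBot:
    """Toy retrieval chatbot: each (prompt, reply) exchange it has seen
    is stored, and new prompts get the reply of the closest stored prompt."""
    def __init__(self):
        self.memory = []                     # (past prompt, reply it got)

    def learn(self, prompt: str, reply: str) -> None:
        self.memory.append((prompt, reply))

    def respond(self, prompt: str) -> str:
        if not self.memory:
            return "Hello."
        _, reply = max(self.memory, key=lambda p: overlap(p[0], prompt))
        return reply

# A human once answered the disclaimer question by pasting the disclaimer:
bot = RetrievalBot()
bot.learn("Is it true that visitors never talk to a human?",
          "'Visitors never talk to a human.' the visitor here is me.")
bot.learn("What is Florn?", "Err bye now I'm bored.")
bot.respond("Visitors never talk to a human?")  # returns the pasted disclaimer
```

Nothing in this loop ever needs a live human on the other end; it only needs one human to have typed that reply once, at any time in the past.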

No idea. From my experiments asking it the definitions of nonsense words, there certainly seems to be a random element involved. Sometimes it spits out a completely inappropriate answer - this may be intentional in the design, because it diverts the conversation, which is probably part of the strategy for making it seem like a person.

I expect it does, but I don’t think that’s the same as acting as a proxy for a continuous interactive thread of conversation between two specific humans.

No, because it was just a suggestion. Other possibilities could be problems with the website, problems with the hosting server, problems with your browser, malware on your PC, or indeed, someone sitting there moderating the bot conversation. It’s merely my personal opinion that the last of those options seems the least likely.
If they have implemented flood control, it might be to try to stop people setting up their own bot to spam the thing, rather than to stop humans repeating themselves.

But maybe it was just a glitch. It didn’t do anything weird when I asked it the same question repeatedly.

Here’s another nonsense query, this time the whole thing, not just my edited highlights:

I think it’s using something like soundex coding to classify words.
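Soundex is a real phonetic hashing scheme: keep the first letter, encode the remaining consonants as digits, and drop vowels, so similar-sounding words collapse to the same key. If something like it were in play, ‘Glar’/‘Glare’ and ‘Snid’/‘Snot’ would hash identically. Here is a minimal sketch of classic Soundex - my own illustration of the hypothesis, not anything known about cleverbot:

```python
def soundex(word: str) -> str:
    """Classic Soundex code: first letter plus up to three consonant digits."""
    digits = {}
    for group, d in [("bfpv", "1"), ("cgjkqsxz", "2"), ("dt", "3"),
                     ("l", "4"), ("mn", "5"), ("r", "6")]:
        for ch in group:
            digits[ch] = d
    word = word.lower()
    code = word[0].upper()
    prev = digits.get(word[0], "")
    for ch in word[1:]:
        d = digits.get(ch, "")
        if d and d != prev:       # collapse runs of the same digit
            code += d
        if ch not in "hw":        # 'h'/'w' do not break a run of equal digits
            prev = d
    return (code + "000")[:4]     # pad/truncate to four characters

soundex("Glar"), soundex("Glare")   # both 'G460'
soundex("Snid"), soundex("Snot")    # both 'S530'
```

Not every confusion fits, though: ‘Snow’ codes to ‘S500’, not ‘S530’, so whatever the real matcher is, it would have to be looser than pure Soundex.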

Well for a few cases it would give the exact same reply…
(kind of the same reply…
Everyone hates me.
Hey.
Hey.
Everyone hates me.
Everyone hates me.
Everyone hates me.
Hahaha yep.
Hahaha yep.
Hahaha yep. )
then it only stopped letting me add input for one particular statement. Like I said, it did the same thing again after I created a new window and kept on repeating that statement seven times. BTW when I clicked “Thoughts So Far” in the new window, my previous conversations were there in their entirety. So maybe its strategy is to see if I’m repeating things too often, and if so, repeat the reply, then if that doesn’t work, attempt to block off the user…
There are some problems with that theory though… in post 8 I was varying the input yet it still repeated the replies… I was adding smilies and sometimes “Koo koo”.
Also like I mentioned again at the start of this message, it mostly was repeating things… rather than simply repeating the same thing.
So anyway, I think you need to get to the point that it is repeating the same thing and get it to do that at least 10-20 times before it would consider blocking you. (I think) But perhaps you didn’t get to the point where it repeats itself.
I think it is occasionally moderated… after all there are people who might be paying 99c to get this on their iPhone/iPod touch. [and there is also a $2.99 version] It could be a live summary of what things are being mindlessly repeated. I think the reason it blocked me the second time so quickly is because it knew I was the same person. (the “Thoughts So Far” included my old conversations)

I see a pattern in the blockings… and I have made hundreds of replies to it… if it was one of those other reasons it should have also happened under non-suspicious circumstances… anyways…

If your browser includes a ‘private mode’, you can make it treat you as a new user each time. I’m pretty sure it’s only a cookie that’s making it remember stuff you said before, but I haven’t noticed any difference either way.

It’s definitely a bot though - or else I’m pretty sure my experiments with non-words would have evoked more ‘What?’ responses, and far fewer cases of mistaking them for real words that start with the same 2 or 3 letters.

It’s so obviously a bot that I think it’s silly to even entertain the idea that it isn’t. Mangetout’s clever experiment only cements that.

Mangetout:
I’m quite patient sometimes and I kept on repeating your “What is Florn?” statement…

It took 156 replies (dividing the line numbers by 4) but I got it to repeat!!!

Err bye now I’m bored -.-’.
Err bye now I’m bored -.-’.
Err bye now I’m bored -.-’.
Err bye now I’m bored -.-’.
Err bye now I’m bored -.-’.
Err bye now I’m bored -.-’.
[then I accidentally pressed enter with no text]
Cleverbot: Do you like my colour scheme?

User: What is Florn?

Cleverbot: Err bye now I’m bored -.-’.
etc…

BTW I did ctrl-V while it was still making the reply appear… I didn’t need to wait for the typing animation to finish…

More Mundane Pointless Stuff I Must Share…

Well I thought I’d feed its output (Err bye now I’m bored -.-’.) back into it…

Luckily I pressed “Thoughts So Far” because after a few dozen replies it reloaded the page and when I clicked “Thoughts So Far” I got a popup window:


Message from webpage

Sorry, no log is currently available.

OK

That is the third time that has happened to me though the other two times weren’t under suspicious circumstances so I didn’t mention it before.
BTW just before it reloaded the page I had waited for it to reply for more than 30 seconds. The cursor just kept on flashing…

Here are the 61 replies that I managed to save: (each time I replied to it with “Err bye now I’m bored -.-’.”)

wait… after I added at least one new reply the “Thoughts So Far” window pops up again and all of my replies are still there…

Well I had to reply to it another 12 times… and then I got it repeating!!!

I’m still procrastinating from real life by investigating this chatbot… after 191 replies of “Oh…” something very unusual happened… I’m using IE8 at the moment and it seemed to be continuously reloading the page - it made the reload sound effect continuously but it didn’t display a new page… it was stuck… the url was flickering in the bottom status bar. Anyway I exited the page and went to a new page, then typed in “Oh…” once to get the “Thoughts So Far” button to work… note that I still haven’t gotten “Oh…” to repeat yet!
BTW a while ago I learnt that random unpredictable rewards are the most effective ones… e.g. like gambling machines… and this thing sure is unpredictable which is quite interesting…
Here are the 191 responses to “Oh…” so far… I marked the most interesting ones… I’m not saying those marked ones are live or anything…

BTW maybe some of the responses I’m getting are from recent conversations… but I’m too busy repeating myself instead of confirming to people that I understand them…

edit: I want to retract that last thought from the previous post…

BTW it’s like the Truman Show… “How will it end?” … “Oh…” is taking ages to repeat…

The problem from post 76 (two posts ago) happened again… it would try to reload a lot - a few times a second… and while that was happening only the “Think About It!” button was there. The other two buttons were missing. I think the last couple of replies were still visible. Then while I was starting to post this the cleverbot webpage succeeded in reloading. To get the “Thoughts So Far” to work again I put “Oh…” in the textfield again…
There were 129 replies from “Oh…”, +191 = 320 times I’ve pasted in “Oh…” with no consistent repeating…

What this means is that some other people share your reservations about it being a bot, and one of them once typed the above suspicion into the chat window - and cleverbot stored it away to use as a response.

And although that’s interesting, it shouldn’t be surprising - the purpose of this thing is that it masquerades as a person, and that people are supposed to interrogate it to see if they believe it or not. That’s the essence of the Turing test, which is what all this is about.

After all of this spamming it seems like I’m seeing a messed up Matrix or am picking up random conversations… it is like I’m part of a NeverEnding Story… my messages might go on to be messages for other people…
Anyway I put in “Oh…” another 291 times… so that’s a total of about 600 with no endlessly repeated replies…
It seems like the more normal or ambiguous the input is, the less likely it is to be answered in repeating replies.
Well here are the remaining replies to “Oh…”. I’m giving up on it…