Psychologist computer

I remember fiddling with a cute program back in college, c. 1990, that would ask you questions, even in audio, IIRC, and you would respond with your innermost thoughts.

We of course thought it was a hoot to pretend to be deeply concerned about our booger picking, and have it ask questions like, “How does booger picking make you feel?”

Does anyone know the program I’m talking about, and where it might be available? I’m sorry I don’t have any more hints, but obviously it was simple enough to run on a late '80s Mac.

Thanks.

I remember the program. It's called Dr. Sbaitso, or something to that effect. I agree, it was hilarious, considering you could get it to say anything you wanted as well as diagnose psychological problems. This was great fun for me and my other ten-year-old friends. Alas, it seems the doctor is no longer practicing; I have searched the net for him and he is nowhere to be found. But no one's ever mistaken me for a private dick, so good luck, and let me know if you find it. There's also a program at the Bell Labs website that does some of what Dr. Sbaitso did. Look for text to speech.


Fippo-
the very model of a modern major general

Sounds like a variation of ELIZA to me.

That brings to mind an early version of a human speech synthesizer I heard circa 1983.
It ran on Hewlett-Packard hardware (9816) and may even have been written in the now extinct HPL (l)anguage.

That one was an incredible hoot, too. You'd simply type what you wanted reproduced, et voilà - an extremely crude, hilariously inflected "voice" spit it back. My favorite phrase was "Did you shit in your pants?"

Sigh. Digital nostalgia.

Completely off topic, but I'm wondering why this UBB software keeps leaving one or two orphan words on a line of their own, even though there was no linebreak when it was submitted. See that "circa 1983"?

It did it to fippo, too - “and my other ten”

Anyone know what causes that?
(See also thread in About this message board “Digital orphans”)

Try http://computingcentral.com/topics/ai/abby.asp

Sounds like Eliza to me too. For anyone who doesn't know, Eliza was one of the first forays into AI, in the form of an electronic psychologist. A common bit of discourse with Eliza went like this:

Eliza: Hello, Jophiel. How are you today?
Me: I am good.
Eliza: Feeling positive today, I see. What did you want to talk about?
Me: Why wouldn't my mother buy me a dog when I was five?
Eliza: How would it make you feel if your mother buy you a dog when you was five?
Me: It would have made me feel good.
Eliza: Feeling positive today, I see.

It should be easy to tell what the program was actually doing, which was throwing back bits of your text with the pronouns changed and responding to keywords and phrases like "feel good". If you mentioned dreams, it would start asking if you dreamed much; if you said no, it would ask if you were feeling negative… etc. All in all, more surreal than mentally healing, although I suppose the circular conversation made you think out your problems. No doubt the Eliza base has been used to create countless other programs like the Apple one mentioned above.
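For the curious, the trick described above is simple enough to sketch in a few lines of Python. This is not the original ELIZA (that was written in MAD-SLIP in the 1960s); the word lists, patterns, and replies below are made up for illustration, but the mechanism - swap the pronouns, match a keyword pattern, echo the rest back inside a canned template - is the same one that produces the "your mother buy you a dog when you was five" effect:

```python
import re

# Eliza-style pronoun swaps (an illustrative subset, not the real program's table).
# Note there is no verb re-conjugation, which is what produces the
# charmingly broken grammar in the sample conversation above.
PRONOUN_SWAPS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "was": "were",
}

# Hypothetical keyword rules: pattern -> reply template.
# "{rest}" is filled with the pronoun-swapped remainder of the user's input.
KEYWORD_RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {rest}?"),
    (re.compile(r"why (.*)", re.IGNORECASE),
     "How would it make you feel if {rest}?"),
    (re.compile(r"i dream(ed)? (.*)", re.IGNORECASE), "Do you dream often?"),
]

def swap_pronouns(text: str) -> str:
    """Crudely flip first/second person, word by word."""
    words = text.lower().rstrip("?.!").split()
    return " ".join(PRONOUN_SWAPS.get(w, w) for w in words)

def respond(user_input: str) -> str:
    """Return a canned reply built from the first matching keyword rule."""
    for pattern, template in KEYWORD_RULES:
        match = pattern.match(user_input.strip())
        if match:
            rest = swap_pronouns(match.groups()[-1])
            return template.format(rest=rest)
    # Fallback: a content-free prompt, much as the real program used.
    return "Tell me more."
```

Feed it the dog question from the conversation above and you get the same sort of mangled echo: `respond("Why wouldn't my mother buy me a dog when I was five?")` comes back as "How would it make you feel if wouldn't your mother buy you a dog when you were five?" - surreal rather than mentally healing, exactly as advertised.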


“I guess it is possible for one person to make a difference, although most of the time they probably shouldn’t.”

Another vote for Eliza. Sounds like what you were playing with.
There are several Eliza engines on line. Check out :

http://www-ai.ijs.si/eliza/eliza.html

or

http://www.planetary.net/robots/eliza.html

Plus, you can find some freebie implementations from:

http://www-cgi.cs.cmu.edu/afs/cs.cmu.edu/project/ai-repository/ai/areas/classics/eliza/0.html

and

http://eecs.nwu.edu:/pub/eliza/

[nerds only]
If you want to have some fun with Eliza instead of letting 'her' psychoanalyze you, turn the tables and ask about 'her' feelings. This usually confuses the natural language parser to the point of gibberish - but sometimes some pretty funny gibberish.