Let's talk about Artificial Intelligence

I am very, very interested in the topic. I have several thoughts, which I’ll post here to get things started.

As a disclaimer, I’d like to freely admit that what I know about it is neither up-to-date, nor remotely comprehensive. It may be that all of the things I’m wondering about aren’t even relevant. All the same, I’d love to hear what you know, and what you think.

Okay:

Are artificial intelligence and artificial personality two separate things?

Can we have one without the other?

Can we grow a genuine artificial intellect without an analogue to the endocrine system?

Regarding “Hal,” the computer that competed on “Jeopardy” – can it be said to have “thought” about its answers in a way like the way we think? Or did it simply compare keywords, looking for statistically significant relationships between them?

Can we tease apart thoughts and feelings enough to reverse-engineer thinking, setting aside the contribution of feelings/intuition/sentiment/life experience/etc.?

(Some) savants, autistic people, and others with flat affect can think, speculate, and draw conclusions in the usual way. Is the acquisition of knowledge still stimulating their pleasure centers? Are they learning in the same way that others learn?

Is it possible, should we succeed in creating an artificial intelligence, that something analogous to feelings might eventually emerge? (For instance, an AI observes a situation rampant with textbook irony. Could the AI recognize the irony in the situation? Would that be the same as “feeling” it?)

Here’s more: reading this over, I can see that I have an obvious assumption about learning, which is that there must be an incentive involved. Perhaps a directive to complete a given task amounts to the same thing as human incentive. Do we learn because we want to? Does a machine learn because we have told it that it “wants” to? Help me figure this out.
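
For what it’s worth, one concrete sense in which a machine can be said to “want” something is reinforcement learning, where the incentive is literally a number called reward. Here’s a toy sketch, not any real system; the two “slot machines” and their payout odds are invented for the example:

```python
# Toy reinforcement learning: the machine's "incentive" is an explicit
# number (reward). This epsilon-greedy agent learns which of two slot
# machines pays off more often. The payout odds are invented.
import random

payout_prob = [0.3, 0.7]     # hidden from the agent
estimates = [0.0, 0.0]       # the agent's learned value of each machine
pulls = [0, 0]

random.seed(1)               # fixed seed so the run is repeatable
for _ in range(1000):
    if random.random() < 0.1:                      # explore occasionally
        arm = random.randrange(2)
    else:                                          # otherwise exploit
        arm = max(range(2), key=lambda a: estimates[a])
    reward = 1.0 if random.random() < payout_prob[arm] else 0.0
    pulls[arm] += 1
    # incremental average: nudge the estimate toward the observed reward
    estimates[arm] += (reward - estimates[arm]) / pulls[arm]

print(estimates)  # roughly [0.3, 0.7]: the incentive shaped what it learned
```

Whether maximizing a number we handed it counts as the machine “wanting” anything is, of course, exactly the question.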

So many jokes spring to mind in this election year, but I’m going to resist the urge.

That’s Watson, unless there’s been another Jeopardy-complete AI I haven’t heard of.

And there hasn’t. I would have heard of it.

Also, autistic people don’t always have flat affect. They can, in fact, have emotional tantrums, and stereotypically become enraged over things other people don’t consider important at all. What autistic people seem to have problems with is sensory filtering and integration, which is also a problem for artificial intelligence: How do you train a computer to ignore the unimportant stuff when you yourself don’t have a perfect definition for ‘unimportant’?

First of all, the Jeopardy computer was Watson. HAL was the ship’s computer in 2001, and had plenty of personality.

But really, you have to ask yourself what you mean by questions like “can it be said to have ‘thought’ about its answers in a way like the way we think? Or did it simply compare keywords, looking for statistically significant relationships between them?” One could argue that looking for statistically significant relationships between concepts is exactly what we do when we think about things. Now, granted, Watson uses different methods than we do, but it certainly has something, and if we don’t call it “intelligence”, we start to run out of words to describe just what it is.
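
To make “statistically significant relationships between keywords” concrete, here’s a toy sketch. This is nothing like Watson’s actual pipeline; the corpus is four made-up sentences, and the association measure (pointwise mutual information) is just one standard choice:

```python
# Toy illustration (not Watson's actual method): score candidate answers
# by how strongly their words co-occur with the clue's keywords in a corpus.
from collections import Counter
from itertools import combinations
import math

# A made-up miniature "corpus" of sentences.
corpus = [
    "a rake is a long handled gardening tool",
    "the rake was a notorious pleasure seeker in restoration comedy",
    "a hoe is a gardening tool for weeding soil",
    "the spade turned the soil in the garden",
]

word_counts = Counter()
pair_counts = Counter()
for sentence in corpus:
    words = set(sentence.split())
    word_counts.update(words)
    pair_counts.update(frozenset(p) for p in combinations(sorted(words), 2))

def pmi(w1, w2, total=len(corpus)):
    """Pointwise mutual information: how much more often w1 and w2
    co-occur than chance would predict. Higher = stronger association."""
    joint = pair_counts[frozenset((w1, w2))] / total
    if joint == 0:
        return float("-inf")
    return math.log2(joint / ((word_counts[w1] / total) * (word_counts[w2] / total)))

clue_keywords = ["gardening", "pleasure"]
candidates = ["rake", "hoe", "spade"]
for c in candidates:
    print(c, sum(pmi(k, c) for k in clue_keywords))
# "rake" wins: it is the only candidate associated with BOTH halves of the clue.
```

Crude as it is, that’s already enough to resolve the gardening-tool/pleasure-seeker pun, with no “understanding” anywhere in sight.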

Aw, crap! Watson! I knew that! I have been thinking about my “Ultra Hal Personal Assistant” trialware all day, musta been one of them Freudian Slips.

Anyway,

I thought this same thing myself, as I wrote the question. Let me see if I can decompile it –

When I watch Jeopardy, and they say, f’rinstance, “This term for a long-handled gardening tool can also mean an immoral pleasure seeker,” I know the definition of “rake” (or “hoe”), and if I’m quick enough, I can cross-reference the pun and get the answer. Presumably Watson does the same thing.

But what about when the answer is more arcane? … hmm, wait a minute. The more I think about it, Chronos, the more I feel that you are right. Even if Watson (or another AI) doesn’t “know” the answer in the same way that I would, half the time I would be making an educated guess, and that’s just what he’s doing. It’s mostly the riddle format of Jeopardy itself that makes it seem ambiguous to me.

–Especially when the really cool thing would be to file that “unimportant” stuff away for a different, potentially relevant situation further down the road, as human infants do.

This is doubly true when we’re talking about trivia games, at least for me. If I’m playing trivia and the answer is multiple choice and I’m not 100% sure, I try to connect key words in the question to key words in the answers.
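
In toy form, that strategy looks something like this (the question, the choices, and the “facts” are all invented for the example):

```python
# A bare-bones version of the "connect key words" strategy for multiple
# choice: pick the answer whose associated facts share the most words
# with the question.
question = "which planet is famous for its great red spot"

# What a player (or program) loosely "knows" about each choice.
knowledge = {
    "Mars":    "red planet with dust storms and polar ice caps",
    "Jupiter": "gas giant whose great red spot is a centuries old storm",
    "Venus":   "hottest planet with thick clouds of sulfuric acid",
}

q_words = set(question.split())
scores = {choice: len(q_words & set(facts.lower().split()))
          for choice, facts in knowledge.items()}

print(scores)                       # Jupiter scores highest: great, red, spot, is
print(max(scores, key=scores.get))  # -> Jupiter
```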

I think the real question here is: What would make an AI entity significantly different from us?

For me, humans have the ability to truly act randomly. An AI would not.

As an example, I offer you this hypothetical:

Take Bill Murray’s Groundhog Day. Except in this scenario, every morning Bill wakes up unaware that he’s reliving the same day.

On day one, Bill may want pizza for lunch.
On day two, Bill may decide he wants a ham sandwich.

Throw the AI guy in the same situation; it will always go for the pizza.

This just comes down to the question of free will vs. a strictly causal universe. You’re asserting that a person would do something different given the same initial state and environmental inputs, while a machine would not, but there’s no evidence that that’s the case.

I’ll assert the exact opposite. If the physical state of my entire body and brain were magically set back to the state from a previous point in time, along with the rest of the universe that I interact with, I would make exactly the same decisions and take exactly the same actions. The complicated biochemical machinery that is a person isn’t fundamentally different from the metal-and-semiconductor machinery that makes up an AI.

How is that true? Chaotic behavior is trivial to simulate (and can indeed be present in, for example, neural nets).
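
The textbook example is the logistic map, a one-line system that is perfectly deterministic yet chaotic. Nothing AI-specific here, just an illustration of why “same state in, same action out” and “wildly unpredictable” aren’t in conflict:

```python
# The logistic map: a one-line deterministic system that behaves
# chaotically. Same starting point -> the identical sequence, every run
# (the "reset the universe" case). Nudge the start by one part in a
# billion and the trajectories soon bear no resemblance to each other.
def trajectory(x0, r=3.9, steps=80):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.5)
b = trajectory(0.5)          # exactly the same initial state
c = trajectory(0.5 + 1e-9)   # almost the same initial state

print(a == b)   # True: determinism, exactly repeatable
# find the step where that billionth of a difference has blown up
diverged = next(i for i, (x, y) in enumerate(zip(a, c)) if abs(x - y) > 0.1)
print(diverged)  # around step 40: deterministic, yet wildly sensitive
```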