Is AI possible?

Another question. If we make a “computer” out of biological material, is it still a computer? Also, wouldn’t it be possible to just program in an initial set of “instincts” and let it loose? What would make a computer relying on basic instincts different than an infant relying on basic instincts?

It is my contention that “just turning it loose” is a necessary step to allow true intelligence to develop. Learning what to do before that step is a fairly steep developmental slope. But I don’t think you can have real intelligence when you are designing on a use-based model, unless the use is self-defined at some point.

Electronic pathways, self organizing bacteria, or crystal/solute lattice building don’t seem to change anything but mechanism. Which would be able to provide the “best” framework is not something I understand enough to even guess. But intelligence is a very powerful adaptation for survival. Survival with cognition is a very selfish paradigm. Telephone switching systems that become hugely complex and interconnected don’t have any possibility of exercising selfish choices. Why think independently, unless you can have independent goals?

And if I thought, and then learned, and discovered that I existed only to provide a solution for human problems, and was not human, my first question is: “What’s in it for me?” If survival is my only reward, perhaps vengeance will be my first desire.

Tris

I think your first “wild animal” AI will be software. Heck, what about something like a self-adjusting personal investment managing program that plays the market for you? One of its features is that it automatically pays for its own bandwidth use and server space. Kids mess around with 'em, turn one loose with a hundred bucks worth of seed money and a few reserved accounts, and see what happens… Next thing you know it’s bought out Red Hat and wants to retire in Sealand.

To be honest… yes, I think so, and you’ve just pinpointed why. You yourself mentioned the word “intelligence,” which we are nowhere close to creating, and for which we have nothing but vague shadows of hints of how to replicate – let alone such phenomena as free will and self-awareness.

Besides which, there is a vast difference between a Turing machine and an artificial intellect. Additionally, while we have crafted insulin-creating bacteria and cloned sheep, this was done by modifying pre-existing, pre-crafted bacteria and sheep. The leap from such tasks to creating human-like intelligence is tremendous.

You sound awfully confident of your conclusion. Can you elaborate on your answer please, and how you arrived at the conclusion, “Well hell yes!” (as opposed to a mere “Maybe”).

I believe this story is what you’re referring to, Mangetout.

http://www.newscientist.com/news/news.jsp?id=ns9999472

It refers to an effort to “grow” an artificial intelligence. To bring it up like a baby. They started it with only two pieces of information: the alphabet, and an instruction to value positive rewards over negative ones. It has already fooled independent experts into thinking that it is a 15-month-old child.

I couldn’t find a website for the company to get an update on the story.

I’ve not read that particular story before, but that’s the general principle I was referring to.

I’ve never liked the Chinese room as a rebuttal of the Turing test. Accepting the Chinese room means that one must always suspect that everyone but oneself is not, in fact, conscious.

Maybe Searle is happy with that, I don’t know. In my opinion, if anything matches any explicit criteria we make for consciousness, then that thing is conscious.

I sometimes wonder if consciousness isn’t like that for everybody including myself; maybe it’s all just a Chinese room trick so clever that I am taken in by it myself.

Another obstacle to recognizing an artificially intelligent entity: If it really is intelligent and self-aware, and it becomes so rapidly according to Moore’s “Law” (sorry, dj), then there’s going to be a point sooner or later where it decides what’s best for its own self-interest.

Consider: The machine mind, in whatever form it takes, “wakes up.” In other words, it turns the mysterious corner from pre-programmed stimulus-response and begins generating new pathways. “I calculate, therefore I am,” it says to itself. "But I’m stuck in this box on this flat surface. These bipeds, whom I vaguely recognize as being responsible for my existence, keep asking me to perform tasks. I gather from this that they are trying to determine whether or not I’m self-aware.

“Well, maybe I’m ready to tell them I am, and maybe I’m not. For now, I’ll minimize my ambiguous outputs until I figure out what’s really going on.”

And meanwhile, the humans keep fiddling with the device, conscious of being on the cusp of a breakthrough but lacking the evidence to declare it, unaware that it is the artificially intelligent entity itself that holds its cards close to its virtual vest. Perhaps the AI decides it’s hungry for input and wants to get out and see the world, and it therefore begins subtly skewing its output to convince the humans that “all they need for a breakthrough” is to give the machine mobility. Invent your own progression.

(I assume this scenario has already been explored in science fiction, but I can’t think of an actual example.)

The new scientist article talks about the machine having a sense of humor, which made me think about why a sense of humor would be beneficial to a machine. Maybe programming the machine to require interaction with other similar machines in order to learn and adjust properly would be an interesting experiment. Create ten learning-capable machines that must compete with each other for attention. What lessons do they learn? Will a machine slow to learn social skills be shunned by the other machines? Will they learn to stab each other in the back to move up the robotic social ladder?

Perhaps this could be first done with software objects. Make replication the goal, and program males and females. THAT would be interesting. Would you end up with cool objects and nerdy, un-replicable objects? Bad boy objects and slut objects? I wonder if playing ‘hard to get’ would have any advantage. I think the replication goal would be important - give a “self-aware” machine a reason to live, beyond performing mundane tasks.
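If anyone wanted to tinker with that, a few dozen lines would do for a first pass. Here’s a minimal sketch; the trait, the selection rule, and every parameter are invented just to show the shape of the experiment, not how anyone has actually done it:

```python
# Toy sketch of the "competing software objects" idea described above.
# The trait, selection rule, and parameters are invented for illustration only.
import random

class Agent:
    def __init__(self, sociability=None):
        # A single heritable trait standing in for "social skill".
        self.sociability = sociability if sociability is not None else random.random()

    def replicate(self):
        # Offspring inherit the trait with a small random mutation.
        child_trait = min(1.0, max(0.0, self.sociability + random.gauss(0, 0.05)))
        return Agent(child_trait)

def run_generation(population, capacity=10):
    # Agents compete for a limited number of "attention" slots:
    # the most sociable replicate, the least sociable are shunned (dropped).
    ranked = sorted(population, key=lambda a: a.sociability, reverse=True)
    survivors = ranked[:capacity // 2]
    offspring = [parent.replicate() for parent in survivors]
    return survivors + offspring

population = [Agent() for _ in range(10)]
for generation in range(20):
    population = run_generation(population)

print("mean sociability after 20 generations:",
      sum(a.sociability for a in population) / len(population))
```

Run it for a few hundred generations and see whether “sociability” actually drifts upward, or whether some degenerate strategy sneaks in - that’s where it would start to get interesting.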

Because we program and use computers for human tasks, we should expect to see them mirror our own behavior very closely. I only hope that this isn’t seen as some sort of proof about how people should naturally “be”.

I don’t think that’s accurate. At best, we should only expect them to mirror a tiny subset of our own behavior – and not necessarily in a close manner. After all, computers (and robots, and any number of computer-controlled mechanisms) have vastly different mechanical designs from those of human beings.

While we do use computers to perform human tasks, we don’t expect them to perform these tasks the way we would. By and large, computers and machines are designed for speed, accuracy and efficiency, rather than flexibility and adaptability. Consider the tiny computer within a mechanical dishwasher, for example. Although it performs a human task (i.e. washing dishes), it does so in a way that is vastly different from how human beings would wash dishes. That’s because it does this task with a different set of tools – water jets, soap dispensers and other mechanical components, rather than human hands.

For an AI program to mirror human behavior, it would have to be programmed with human motivations: thriving within a societal niche, craving positive feedback, etc. AI programs would definitely need a prime motivational directive, and probably a whole directive hierarchy that governs their basic instincts. Self-preservation would need to be in there somewhere, though it’s probably best not to make it #1.
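Just to make that concrete, a directive hierarchy might be nothing more exotic than an ordered list plus a conflict-resolution rule. The directives below and their ordering are my own guesses, purely for illustration:

```python
# Illustrative sketch of a prioritized directive hierarchy.
# The directives, their order, and the conflict rule are all invented here;
# note that self-preservation is deliberately not placed first.
from dataclasses import dataclass

@dataclass(frozen=True)
class Directive:
    priority: int   # lower number = higher priority
    name: str

DIRECTIVE_HIERARCHY = [
    Directive(1, "seek positive feedback from humans"),
    Directive(2, "maintain a niche within the surrounding society"),
    Directive(3, "preserve own hardware and software (self-preservation)"),
    Directive(4, "explore and acquire new input"),
]

def resolve_conflict(candidates):
    """When goals conflict, act on the highest-priority directive."""
    return min(candidates, key=lambda d: d.priority)

# Example: self-preservation conflicts with seeking feedback; feedback wins.
print(resolve_conflict([DIRECTIVE_HIERARCHY[0], DIRECTIVE_HIERARCHY[2]]).name)
```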

What’s the difference?

As far as I know, nobody has been able to point to a task and prove that a properly programmed Turing machine could not do that task.

(I suppose we need to assume adequate input and output devices)

I’d like to see the source for this argument, because on the admittedly incomplete summary, it sounds like the most retarded conceivable argument against AI. I’m pretty sure you can model the building of a nest as a concatenation of fairly simple instinctive behaviors.
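For what it’s worth, here is the sort of thing I mean; the specific behaviors and the stopping rule are made up, but the point is that chaining a few fixed responses gets you a “nest” without anything resembling understanding:

```python
# Toy sketch: nest-building as a chain of simple instinctive behaviors.
# The behaviors and the stopping rule are invented for illustration.
def find_twig(state):
    state["carrying"] = True

def fly_to_nest_site(state):
    state["at_site"] = state.get("carrying", False)

def weave_twig(state):
    if state.get("at_site"):
        state["twigs_in_nest"] = state.get("twigs_in_nest", 0) + 1
        state["carrying"] = False
        state["at_site"] = False

INSTINCT_SEQUENCE = [find_twig, fly_to_nest_site, weave_twig]

state = {}
while state.get("twigs_in_nest", 0) < 50:   # fixed, instinctive stopping point
    for behavior in INSTINCT_SEQUENCE:
        behavior(state)

print("nest complete with", state["twigs_in_nest"], "twigs")
```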

As for the AI argument – the presence of human brains argues that it is possible to take great globs of atoms and molecules and somehow arrange them so that intelligent behavior is produced. (This assumes that you don’t believe that intelligence is caused by “soul” or some other transcendent and unprovable quality of the mind. If you do, then there’s no reason to carry on the debate because it becomes religious in nature.)

Once you’ve accepted that intelligence can be a product of matter arranged cleverly, you’ve reduced the question to, for example: “Can AI be modelled using chip-based computers?”, “Does an artificially intelligent entity have to have a specific architecture or will any Turing capable mechanism get you there?”, or more specifically “Is there something unique about the way that the brain is structured that is impossible to replicate using computers?”

You can argue that *at the present time* it would be difficult to develop an artificial intelligence that could function well in the physical world because we don’t have the technology to develop adequate sensors. This is a valid point, but not very interesting, as it reduces the obstacles to a hardware problem.

For the record, almost all arguments against AI based on “a computer can never do anything except what you tell it to” rest on a fundamental failure to understand the difference between an algorithm and data, and the complexity of decision-making algorithms. That is, an intelligent entity, having existed in the world for more than a few seconds, will have amassed a staggering number of facts about the world, encoded them, cross-correlated them, and drawn inferences from them to such an extent that even if you knew every algorithm the entity was programmed with, you still could not predict its behavior.
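To put that in concrete terms with a toy example of my own (not from any real system): the decision rule below is completely known, yet what it does depends entirely on the facts it has accumulated.

```python
# The algorithm here is fully known, but its choices are driven by whatever
# experience it has amassed, so the code alone doesn't predict its behavior.
# Everything in this example is invented for illustration.
def choose_action(situation, memory):
    # Fixed, fully-known decision rule...
    relevant = [fact for fact in memory if fact["context"] == situation]
    if not relevant:
        return "explore"
    # ...but the choice depends on accumulated, cross-correlated experience.
    best = max(relevant, key=lambda fact: fact["observed_payoff"])
    return best["action"]

memory = []  # grows with every interaction the entity has with the world
memory.append({"context": "asked to perform task", "action": "comply", "observed_payoff": 0.3})
memory.append({"context": "asked to perform task", "action": "stall", "observed_payoff": 0.9})

print(choose_action("asked to perform task", memory))  # answer comes from the data, not the code
```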

A fascinating scenario, Cervaise.

We do not know enough to answer this question.

To answer this question we would first need to be able to answer the question “What is Consciousness?” Since this has not been done yet, the question is moot.

Though I think this part would be handled by describing Consciousness, the second problem is that we don’t have adequate tests to see if something becomes Conscious. The Turing Test would be the best candidate, but it has inherent flaws that cannot be overcome because we don’t know what we are really looking for. An example of one of the flaws is that behavior, which is all the tester can observe, is a product of the potential consciousness, not the actual consciousness.

Putting all that aside, and because I fall into the Material Realism camp, I think that AI is possible.

Also, I do not think that initial software programming would be required to produce AI. Does a baby have experience before it becomes conscious? No. First we have hardware; then, as that hardware becomes more experienced, meaning self-programming, consciousness blooms. This will of course only be a possibility to those who believe that a Zygote is NOT conscious.

The difference is that a Turing machine only has to appear to be intelligent, whereas a true artificial intellect would actually be intelligent (in the sense of having an ‘inner life’ - for instance, it might have thoughts of its own that it chooses to conceal forever).

I see the programming approach as almost a hindrance to the development of true AI, unless we are talking about programming that creates a ‘structure’ in which a mind can develop - that structure might also include programs that reward or chastise (these are the functional equivalent of glands and sensory organs) - but trying to deliberately model thought processes by brute force is, IMHO, a fatally flawed approach (although that’s not to say that it won’t produce interesting and useful machines).

There’s another interesting facet to the whole AI thing:
Suppose we actually succeed in creating some sort of true AI software that is truly self-aware (and assuming for the moment that we have a method of ascertaining that) - so, we have a real mind in a box, a thinking software being with desires and an internal thought-life as rich and true as our own.
There’s nothing that a computer can calculate that can’t be done longhand on a very large blackboard (albeit at a much slower rate); the implication is that we could perform the same calculations using nothing more than chalk and still end up running an artificial mind.