Is the Internet going to "come Alive" soon?

Well, as opposed to Harry Houdini, I’m still here.

They need to go to the author’s house and take season 5 of The X-Files away from him before he hurts himself. God only knows what he’ll come up with when he gets to the “Post-Modern Prometheus” episode.

If you take “exceeding one’s programming” to be the benchmark for sentience, then you’ve defined away the possibility of any computer being sentient. And please note that by “any computer”, I’m including wetware brains. If a computer is so elaborately programmed that it can learn how to write the Great American Novel, it’s still working within the constraints of its programming, since it wrote from what it learned, and its capability for learning came from its programming.

In the strict theoretical sense, what is meant by “exceeding one’s programming” is that the strict separation enforced on Von Neumann machines is broken. Typically, computer programs are split into data space and program space. A typical program cannot write in program space, which means the program remains unchanged over the course of its execution. A program that could alter its executable code on the fly would “exceed its programming”. Stranger’s point is even more profound: the human brain must alter its execution by the very nature of cognition. We currently know of no programming task that requires self-modification to run.
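To make that concrete, here’s a toy sketch (in Python, purely as an illustration; nothing here is anyone’s actual proposal). Python’s exec stands in for writing to program space: the program keeps its own code as a string, runs it, rewrites it, and runs the new version:

```python
# Toy illustration of breaking the program/data separation: the program
# keeps its own code as a string (data), runs it, rewrites it, and runs
# the new version. exec() stands in for writing to program space.
source = "def step(x):\n    return x + 1\n"
namespace = {}

exec(source, namespace)        # install the original behaviour
print(namespace["step"](10))   # -> 11

source = source.replace("x + 1", "x * 2")   # edit our own "code"
exec(source, namespace)        # install the modified behaviour
print(namespace["step"](10))   # -> 20
```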

Actually, what I said wasn’t quite true; we know of several forms of programming that could be thought of as self-modification. However, we usually prefer to simulate the self-modification at a higher level rather than jump into execution space and twiddle bits. This has more to do with the relative difficulty of the two activities than with any philosophical difference.
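For example, here’s what that higher-level simulation might look like, again as a hypothetical Python toy (the function names are made up): the behaviour lives in an ordinary dictionary, and “self-modification” is just the program editing that dictionary, with no executable bits ever touched:

```python
# Self-modification simulated one level up: behaviour lives in an
# ordinary data structure, and the program edits that structure instead
# of patching executable code.
def polite(msg):
    return "Thank you for: " + msg

def terse(msg):
    return msg.upper()

behavior = {"reply": polite}       # the "program", stored as data

print(behavior["reply"]("hello"))  # -> Thank you for: hello
behavior["reply"] = terse          # the program rewires itself
print(behavior["reply"]("hello"))  # -> HELLO
```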

Yes, precisely my point: there is a more complex sense in which an artificial entity could come to think. Not that it would start thoughtfully doing the tasks for which we designed it, but rather that the proper execution of those tasks would create a sufficiently rich and complex environment in which thought processes could develop emergently. Or to put it another way: saying that the internet cannot think because it is just a system for the dispassionate, rule-based storage, indexing, and retrieval of data may be similar to saying that the human brain cannot think because it is just a system for the dispassionate, rule-based implementation of electrochemical reactions.

[Manuel Garcia O’Kelly]
“Bog, is the 'Net one of your creatures, too?”
[/MGO’K]

Nor did I say that passing the Turing Test was the inviolable standard for intelligence in computers - I merely wished to make it clear that while jayrot seemed to think it was a matter of having “a few flaws to work out”, we are in actuality extremely far away from having computers that can parse sentences and respond to them. In fact, we haven’t even made much progress in creating a machine to pass the Turing Test: to this day the most effective programs for it are still silly and simplistic Eliza-style chatterbots. Much has been learned about human cognition in the process, though - for instance, about the role of knowledge of the real world in parsing sentences (in terms of resolving ambiguous phrases by examining whether they make sense, and so forth.) Handling human speech is not the only meaningful criterion for intelligence - it’s probably not even a particularly important one. Nevertheless, it’s something we can’t seem to manage.
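For anyone who hasn’t met one, the Eliza trick is about a dozen lines of pattern matching. Here’s a minimal sketch (my own toy, not Weizenbaum’s actual script) showing how little understanding is involved:

```python
import re

# A bare-bones Eliza-style chatterbot: a few pattern -> template rules
# and zero understanding. The real Eliza script was richer, but the
# trick is the same: reflect the user's own words back at them.
RULES = [
    (r"i need (.+)", "Why do you need {0}?"),
    (r"i am (.+)",   "How long have you been {0}?"),
    (r"my (.+)",     "Tell me more about your {0}."),
    (r".*",          "Please go on."),
]

def respond(text):
    text = text.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())

print(respond("I need a vacation"))  # Why do you need a vacation?
print(respond("My code is broken"))  # Tell me more about your code is broken.
```

Note the second reply: the bot has no idea that “code is broken” isn’t a noun phrase. That’s the level of machinery behind the best Turing Test entrants.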

Once again the inherent sociality of humanity is ignored in an otherwise intelligent discussion. So you want to build AI? Let’s start at the beginning: RI. Real Intelligence. What’s the only example we have of an intelligence we can communicate with? Us. Ourselves. Okay, a few other species as well. The common element that we share amongst ourselves and those other species is social interaction. I go to lunch with a friend, and we talk to each other. Primarily through body language, facial expression, tone of voice and inflection, and actual words spoken. If my friend intones her voice in a certain way, I immediately recognize something I call “sarcasm.” Et cetera. How did I do that? I certainly wasn’t born with the ability to recognize sarcasm. I had to learn it.

Surely we can all agree that an artificial intelligence would have a mind. Whatever connotations that term has for you, how did you arrive at those? You weren’t born knowing what a mind is. You learned it from what people around you said or did. So right from the beginning, our very concept of mind is culturally based. For us to conclude that some artificial creation has a mind, it has to act in ways that fit our definition of mind, a definition reinforced primarily and overwhelmingly by interacting with other humans.

So any intelligence, biological or technological, that we can interact with and pronounce as intelligent, has to a) interact with us (obviously), and b) learn how to do so. The point is, we won’t recognize something as being intelligent unless we can read its posture, its facial expressions, its tone of voice, etc.

There’s a famous psychiatrist named Leslie Brothers. Go read her book, Mistaken Identity: The Mind-Brain Problem Reconsidered. In it she discusses the currently prevailing theory that individuals each have a mind arising from the complex inner workings of their brains, and why that theory is wrong. In fact, as she says, “The body is a node in the dynamic web of social information” (79). In order for an artificial intelligence to be seen as sentient, it must be part of that web.

The whole “a computer can’t do anything it wasn’t programmed to do” argument is rather weak. Give a computer a large enough set of real-world facts (data) and some powerful deductive/learning algorithms, and you’d be no more able to tell what that computer will do in a given situation than what any person would do. You can get very surprising results out of a computer even if you’re not using fancy genetic algorithms to change the underlying programming.
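A toy example of the flavor: hand the machine two facts and one deduction rule, and it asserts something nobody typed in. (A hypothetical sketch; real systems have thousands of facts and rules, which is where the genuine surprises come from.)

```python
# Two facts and one deduction rule; the conclusion was never typed in.
facts = {("parent", "alice", "bob"), ("parent", "bob", "carol")}

def apply_rule(facts):
    # Rule: parent(X, Y) and parent(Y, Z)  =>  grandparent(X, Z)
    derived = set()
    for (p1, x, y) in facts:
        for (p2, y2, z) in facts:
            if p1 == p2 == "parent" and y == y2:
                derived.add(("grandparent", x, z))
    return derived

# One pass is enough here; a real engine iterates to a fixed point.
facts |= apply_rule(facts)
print(("grandparent", "alice", "carol") in facts)  # -> True
```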

You could even easily conceive of a computer that just takes random collections of assembly code, executes them, and sees if anything interesting happens (for some value of interesting). After some millions of years, it might even have come up with some very powerful algorithms. Who programmed it? (Maybe you could say the person who decided what “interesting” was. But what if one of the algorithms developed was a new way of testing “interesting”?)
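Here’s roughly that experiment in miniature, compressed from millions of years to a few milliseconds (a hypothetical sketch with a two-instruction machine and an arbitrary definition of “interesting”):

```python
import random

# Random-code search in miniature: generate random programs for a
# two-instruction register machine and keep the first one that does
# something "interesting" -- here, arbitrarily, producing 42.
OPS = ["inc", "dbl"]

def run(program):
    reg = 0
    for op in program:
        reg = reg + 1 if op == "inc" else reg * 2
    return reg

def interesting(value):
    return value == 42   # did whoever wrote this line "program" the result?

random.seed(0)
while True:
    program = [random.choice(OPS) for _ in range(random.randint(1, 12))]
    if interesting(run(program)):
        print(program)   # some mix of inc/dbl that reaches 42
        break
```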

Another fallacy is that there’s anything inherently “non-Turing” or non-computable in how humans think. Certainly no one has ever proven this, and the same computational problems that are intractable for computers are also very difficult for humans to solve. Yes, it’s ego-boosting to say that humans can make leaps of intuition that computers can’t, but again, that’s more likely due to the large amount of data humans have to work with and their ability to rapidly synthesize new data by combining what is known. Humans have very impressive hardware for doing a lot of computation in parallel (albeit each computation is painfully slow from a computer’s point of view).

As far as we know, there’s nothing *inherent* in biological constructs that makes them superior to computers for purposes of consciousness or reasoning. However, biological constructs have a few billion years’ head start.

As for the idea that intelligence requires social interaction or real-world interaction, that’s not really a new thought. Clearly, an intelligent agent has to base its concepts on real-world experiences. There were a few grad students (Agre?) who made their mark in the early ’90s by pointing out this semi-obvious fact.

As for the “I’m an AI student and I don’t see it happening” argument, that’s also pretty weak (speaking as another former AI student). AI students 40 years ago did see it happening. And I’m not convinced that the algorithms developed for the various parsers and case-based reasoning systems wouldn’t scale up reasonably well nowadays, were someone only willing to pay for it.

One of the biggest obstacles to AI right now is that it’s currently out of vogue from a funding perspective, so there aren’t a lot of people out there *trying* to build anything that would hold a reasonable conversation (for example). And those projects that do exist are generally run on a fearfully pragmatic basis (direct this call to the most appropriate service rep) rather than trying to develop an independently intelligent entity. (Another huge obstacle is that the hardware requirements likely haven’t been met yet, in terms of packing enough computation into a reasonably sized cluster, but that’s neither here nor there right now.)

We aren’t going to program sentience into a computer. But we might be able to evolve it using things like genetic algorithms and figuring out how to set up an environment that allows some form of natural selection to happen. I believe we’ve already done this to the point where we’ve made some robots that behave uncannily like various insects, without ever having programmed the ‘insect instructions’ into them.

Given a large enough computer and enough time, might we be able to set up a digital environment that would allow digital life to evolve?
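As a sketch of the shape such a thing might take: the loop below is mutation plus selection and nothing else. It’s a cheat in that the “environment” is a designer-chosen target string, which real digital life wouldn’t have, but the variation-and-selection mechanics are the part under discussion:

```python
import random

# Mutation plus selection and nothing else. The designer-chosen TARGET
# is a cheat (real digital life would have no fixed goal), but the
# variation-and-selection loop is the essential skeleton.
TARGET = "sentience"
ALPHABET = "abcdefghijklmnopqrstuvwxyz"

def fitness(s):
    return sum(a == b for a, b in zip(s, TARGET))

def mutate(s):
    i = random.randrange(len(s))
    return s[:i] + random.choice(ALPHABET) + s[i + 1:]

random.seed(1)
best = "".join(random.choice(ALPHABET) for _ in TARGET)
while best != TARGET:
    child = mutate(best)
    if fitness(child) >= fitness(best):  # selection: keep the fitter string
        best = child

print(best)  # reached by random variation, not by explicit construction
```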

It probably won’t be the internet that becomes sentient - it’ll be The Sims XII.