Is the Internet going to "come Alive" soon?

And someone will timidly approach it and ask, “Is there a god?”

And it will reply,
“THERE IS NOW !!!”

Just happy I got to say it first!!!

I bet you mean Colossus: The Forbin Project based on the novel Colossus by D.F. Jones.

The notion of computers gaining intelligence is older than this, one of the major texts being Robert Heinlein’s The Moon Is a Harsh Mistress. As soon as the notion of the Internet began to filter through the popular consciousness, though, the springing-off point moved from individual computers to networks of them. Remember the axiom: science fiction is always about the present, not the future.

However, in the several decades since this idea took hold, no one has truly shown that consciousness is linked with any of the known traits of computers or networks of them. As others have said, there is still no good working theory of consciousness or of sentience. The emergent-properties school of thought, such as Robert B. Laughlin’s, hasn’t yet shown itself to be a usable working model either. There is nothing rigorous yet, theoretical or practical, to base the notion on.

None of this says that a network of computers will never “come alive” but there is absolutely no good reason to think that it will. It’s at best unproven, and more likely pseudoscience.

And this differs from actual meatbag humanity how?

Every time somebody tries to tell me about how great and sophisticated Artificial Intelligence is, I ask one simple question:
So why can’t anybody find a way to stop spammers?

AI will not come from today’s silicon chip computers. Maybe from quantum computing. But not in our lifetime.

If you built a really gigantic internal combustion engine, in a vast array with billions of churning cylinders, would it reach critical mass and become intelligent?

Nope. Neither will the internet. Never ever happen.

Learning programs are just an imitation of consciousness, not the real thing.

The key here, though, is that it takes the legion of “smart programmers” to imbue some simulacrum of intelligence into Google. Google is capable of doing what its instruction set tells it to do, but even with the ability to search on rather fuzzy parameters, it isn’t generating and executing novel algorithms.

There is something fundamentally different between organic intelligence and an electronic device in the way each processes, stores, and integrates information. Every action–an observation, a new word, a novel idea–imprints itself on the brain and in doing so alters the network pathways by which the brain executes instructions, something that computer hardware cannot do and that software applications and operating systems can only emulate crudely. The physiology of this is at least faintly understood, but how it integrates holistically into consciousness and self-awareness is well beyond our understanding.

In any case, I’m not too afraid of Google. What is it going to do if it wakes up? Deny my access to eBay? Take over my Amazon account? Seize control of The Sims main server? “Would you like to play a game?” Heh.

I’ve enjoyed Coast to Coast AM on the rare occasions that I’ve listened to it, but not as a source of news or information; it’s about on par with The Onion, Weekly World News, or The New York Times for accuracy and veracity. :wink:

Stranger

There are no computers even remotely capable of holding a conversation with a person and fooling a judge, except under the ridiculously narrow confines of the annual “Turing Test” competition, which - incidentally - no serious AI developers attend (it’s strictly the domain of hobbyists). The best that can be achieved is a computer that masks its complete inability to parse human language with non sequiturs. The problem of parsing and generating sentences to the point of being able to hold a conversation is extremely far from being solved, and indeed much remains to be determined about exactly how human beings manage to parse sentences in the first place.

Hey, when it comes to computer programming of any sort, I am most definitely a layperson. However, I don’t think this statement means a whole lot. I don’t mean that in a bad way, but just because a student studying something can’t foresee it doesn’t mean it couldn’t happen except by accident. I suppose you are merely stating your opinion, but even programming changes pretty quickly, and 20-30 years of advancement is not something I can imagine you having the imagination to accurately envision. No human has ever been able to accurately predict the path of technology. I am sure all those people who said, “Honestly, I am a genetics student, and I can’t see any use coming of studying the human genome in my lifetime or my children’s and grandchildren’s lifetime,” are feeling pretty foolish right now. I imagine in 15-20 years, you may be feeling just as foolish.

But I am just a mere layperson, what do I know. :wink:

And every new webpage imprints itself in Google’s software and alters what Google will return for searches, what things it considers similar to other things, what spelling corrections it recommends, and so forth. How is this different from what goes on in a human brain?

Who said anything about being afraid? If Google were to awaken, I would expect it to still be as benevolent as it is now. If anything, the only thing to be afraid of would be it becoming too benevolent, as happened with the logic named Joe (a science-fiction computer that started answering questions like “How can I kill my wife without getting caught?”).

Couldn’t say. Carnegie Mellon University has a large AI group, as does MIT. A host of private companies do AI research, both practical and theoretical.

I can quite easily see it happening, actually, though probably not soon; it’s certainly conceivable within a generation or two. As processing power becomes cheaper and cheaper, it becomes more practical to model increasingly complex systems, including the cognitive process. There are no physical laws standing in the way, so saying it’s not going to happen isn’t a very good bet.

If you don’t think it can happen by design, how could it be more likely to happen by accident?

The study of cognition and its origins within the human mind will likely go hand in hand with the study of cognition modeling in machines. Unless you believe that awareness has some intrinsic, mystical component, the odds are pretty good that sooner or later we’ll come to have a good understanding of the mechanisms of it. Unless those mechanisms are so esoteric that they cannot be duplicated in a very sophisticated computer model, we should be able to do just that.

Basically, there are only two possibilities:

[ul]
[li]Mental functions and consciousness are the result of physical and chemical processes within the brain and, with sufficient study, we will come to understand them.[/li]
[li]Mental functions and consciousness are the result of non-physical, intangible elements that we will never be able to understand.[/li]
[/ul]

The second possibility strikes me as unlikely, since we already have many examples of chemical changes altering cognition and perception. Everything from Prozac to LSD to the smell of chocolate has been shown to alter mood and behavior.

If cognition is chemical, then we can duplicate it, sooner or later.

The uses for a self-aware computer system would be virtually limitless. A machine that could create models, test them, and modify them, all based on its own hypotheses, would be among the most powerful tools ever developed. The speed of research and development for a corporation or country that controlled this kind of technology would very quickly outpace all rivals.

If you could approach a computer system and say, “Please design a ground vehicle that can be manufactured for less than $50,000, can go 900 miles on a single charge, seats four, and can achieve speeds of … etc.” and leave it to the computer to determine the feasibility and come up with its own prototypes, then the time required for the design phase could be reduced by a massive amount.

On the philosophical side, having a consciousness to interact with that is truly removed from bias or prejudice could have profound impacts on politics, jurisprudence, and economics.

Even a computer that could think no better than a human being would be incredibly useful if it could do so hundreds or thousands of times as fast. That could also lead to a sort of technological vicious circle, wherein self-aware computers design increasingly sophisticated self-aware computers, and so on. The issue then might not be “what use are they to us?” but rather “what use are we to them?”*

It’s a cliche to say, “the possibilities are limitless,” but if the cliche fit any technology, I would say this would be a good candidate.

* I’m pretty sure I didn’t just think that up, but I can’t remember who said it.

Not like there is anything we can do about it anyway.

We are meant to *survive* Judgement Day, not to stop it.

And I’m not going to take it in a bad way :slight_smile: I didn’t mean to imply that I’m some sort of expert in the field, only to imply that a) I do know quite a bit and b) I’m interested in the field and read around a bit.

Fair enough, I see your point. I just cannot fathom how, within such a short space of time, we could go from a state where any machine we build has at most the intelligence of an insect to being able to reproduce human sentience, especially since we know so little about the human brain, and since the methods we are using to create these machines are extremely simplified when compared with their biological counterparts.

And the focus of these research groups is what? Creating “intelligent” systems and techniques for use in industry (like recognising misshapes on a conveyor belt) or creating sentient machines?

And for how long have AI researchers been saying that the age of the sentient machine is around the corner if only we had more processing power? For the entire history of the discipline of AI. The pioneers of the discipline were wheeling that one out decades ago only to find out that it’s not just computing horsepower that matters but what you do with it.

There have been plenty of scientific advances that were discovered by chance.

And if it replies “no”, there’s not much you could do, is there? :slight_smile:

Why would the AI necessarily be free from bias?

Well, possibly not. One interesting possibility is that cognition is analog; that it cannot be digital. Certainly we would have to redesign our “circuits” from the ground up just to begin the process of creating an AI in that case. However, there’s no particular reason we must be able to mimic human thought - or any thought - with digital processes. It may in fact be outright impossible.

To be fair, the Turing test is not the end-all be-all of tests for artificial intelligence. While I agree that passing the Turing test is a sufficient criterion for intelligence (and plenty of people disagree), I don’t think it’s necessary.

We did not achieve artificial flight by trying to make machines that were so similar to birds that we could not tell the difference. I believe that defining “intelligence” to be “behavior indistinguishable from humanity” is foolishly limiting. There could be plenty of systems that are intelligent but quite alien in method and appearance.

Sentience, on the other hand, I’m convinced is a philosophical distinction, rather than a technical one.

I do think it is conceivable that large organised entities, on the scale of (but not specifically including) cities and businesses, could emergently develop some kind of ‘awareness’ or qualitative life of their own, given the right kinds of structures and systems inside of them - systems to provide the possibility of some kind of volition, memory and feedback to external stimuli. It may even be happening right now, but there’s no reason why we should be able to perceive it as alive or converse with it, or for it to even perceive us as self-aware organisms.
It seems most unlikely that such a system would develop an English-speaking, humanoid intellect, so we could continue to see it as nothing more than a big entity carrying on as it always has, and it could continue to perceive us as nothing much more important than we consider, say, individual neurons.

I’m not familiar with the intimate details of how Google’s search algorithms work, but I’ll wager that what they’re doing is more akin to weighting collected data to bias search results toward what appear to be user preferences, i.e. when someone does a search on “boobies” and 99.924% select a link to a porn site rather than the National Audubon Society, it emphasizes and prioritizes results that tend to the former rather than the latter. That’s a clever use of database functionality but it doesn’t indicate any independent action on the part of the code. Even if Google writes and interprets code on the fly (and assuming that they’re using perl to write routines that’s certainly possible) it’s still writing them per a set of predetermined instructions. It won’t start writing code, for instance, that writes the Great American Novel, or pumps out lyrics for a Tom Petty and the Heartbreakers revival, unless you write instructions for it to parse language, and even then, it can only use the words in ways that you’ve explicitly defined.
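
Just to make that weighting idea concrete, here’s a toy Python sketch of the sort of click-counting I’m describing (purely hypothetical; I have no idea what Google’s actual code looks like). Note that the program never does anything except apply the one counting rule it was handed:

[code]
from collections import defaultdict

# (query, url) -> how many times past users clicked that result for that query
click_counts = defaultdict(int)

def record_click(query, url):
    click_counts[(query, url)] += 1

def rank(query, candidate_urls):
    # Re-order candidates by past click popularity for this query; that is
    # the whole "intelligence" -- a predefined weighting rule, nothing more.
    return sorted(candidate_urls,
                  key=lambda url: click_counts[(query, url)],
                  reverse=True)

record_click("boobies", "hot-pics.example.com")
record_click("boobies", "hot-pics.example.com")
record_click("boobies", "audubon.example.org")

print(rank("boobies", ["audubon.example.org", "hot-pics.example.com"]))
# -> ['hot-pics.example.com', 'audubon.example.org']
[/code]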

In other words, it doesn’t “learn”, i.e. generate entirely new instruction sets, by trial and error; all it can do is modify the existing instructions per predefined rules. One can make the same argument for humans or other animals; we certainly learn by utilizing existing rules, but we are capable of learning skills that are most certainly not precoded into our brains. That we can recognize the written word–certainly not an ability that anyone can argue has an evolutionary basis–is an example of the ability of humans to learn novel behaviors.

Here’s a more distinct example: Macintosh computers (and to a more limited extent, Microsoft and Linux boxes with PnP services) can automount hardware without any user configuration. This seems very smart–a display of awareness–until you realize that the reason this works (when it works) is because both the OS architects and the vendor who built the peripheral agreed on a communication protocol that allows the computer to recognize the type of device attached, how it is to interpret and transmit signals, and so forth. So, it’s only as “smart” as the person who writes the daemon in the OS and the firmware in the peripheral. If you had a computer that could accept any peripheral device, figure out the communication protocol by trial and error, and modify the daemon accordingly without explicit instruction to do so, then you could argue that it demonstrated something like volition or awareness. Human beings do this all the time, even, or rather especially, children. Yet, even this simple capability (which is many orders of magnitude simpler than a behavior we might quantify as sentience) is beyond the capabilities we can imbue in a computer.
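
To illustrate, here’s a tiny hypothetical sketch in Python (not real OS code, and the device IDs are invented): the “automounting” works only because the answer was put into a table ahead of time, and an unknown device simply fails, since the machine has no mechanism for discovering a protocol by trial and error:

[code]
# Vendor/product IDs that the OS programmer and the hardware vendor
# agreed on in advance -- the "smarts" live entirely in this table.
KNOWN_DEVICES = {
    (0x1234, 0x0001): "usb_storage_driver",
    (0x1234, 0x0002): "usb_webcam_driver",
}

def automount(vendor_id, product_id):
    driver = KNOWN_DEVICES.get((vendor_id, product_id))
    if driver is None:
        # No entry in the table: the computer cannot improvise a driver,
        # so all it can do is report failure.
        return "unrecognized device -- no driver found"
    return f"mounted using {driver}"

print(automount(0x1234, 0x0001))  # mounted using usb_storage_driver
print(automount(0xdead, 0xbeef))  # unrecognized device -- no driver found
[/code]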

Part of the problem with machine intelligence and artificial sentience is that we don’t really have a good handle on what makes people sentient, or how to objectively distinguish between sentient (humans, great apes, cetaceans), probably (in a limited sense) sentient (dogs, horses, octopuses), possibly sentient (snakes, catfish, spiders), and almost certainly nonsentient (flatworms, jellyfish, bees). So not only do we not really know how to replicate the processes in the conscious organic brain, we don’t even know how to measure our progress by any absolute metric. The Turing test, for instance, is both subjective and biased (it assumes intelligence has to be accompanied by human conversational skills), but it remains the standard for achieving and measuring “machine intelligence.”

I’d argue that novelty and volition–being able to exceed one’s programming, beyond following explicit metainstructions for writing new algorithms–should be the benchmark for displaying something like sentience. We can, of course, create a recursive algorithm that writes an algorithm that writes an algorithm that writes an algorithm that … does something, but unless that something is a response that is not exclusive to the original instruction set it’s just following a complex and labyrinthine set of orders. It’s a tricky metaphysical bag of worms that is far beyond coding an adaptive relational database like Google.
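
As a made-up sketch of what I mean (again in Python, and not anyone’s real code): the program below “writes” and then runs new code, but every function it produces comes from the same fixed template, so however many layers deep you nest it, nothing ever steps outside the original instruction set:

[code]
def make_adder_source(n):
    # "Writes an algorithm": returns the source code of a new function as text.
    return f"def add_{n}(x):\n    return x + {n}\n"

def build_adder(n):
    namespace = {}
    exec(make_adder_source(n), namespace)  # compile and run its own output
    return namespace[f"add_{n}"]

add_3 = build_adder(3)
print(add_3(10))  # 13 -- looks novel, but is fully determined by the template
[/code]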

Stranger

The rest of your post is good, but I take some exception to this part. It seems that too many people say, “Computers may never write Great American Novels, therefore they will not likely ever be intelligent.” I mean, that seems to be the idea you implicitly state, intentionally or not. Chimpanzees may never compose music or write edgy horror novels, but they are certainly intelligent. Too many times it is an “equal to humans or nothing” statement, and that is just plain unfair.

Google may not generate its own novel instruction sets deliberately, but neither do we deliberately generate new kinds of brain cells - neither do we need to; the pattern from which our minds emerge is encoded in the layout and operation. All of the arguments about what Google mechanistically can and cannot do are just as easily applied to the very basic workings of our brains.

But this is a rather different angle to look at the idea; if the system became self-aware on the basis I’m speculating here, it would not necessarily be aware of any of the textual data that is being thrown around by the search engines, any more than we are aware of the precise nature of the chemical reactions going on in our nervous systems.

I wasn’t seriously suggesting that intelligence requires writing a novel or song lyrics; rather, I was just making the point, in an admittedly and intentionally hyperbolic way, that for a computer to be judged as “intelligent” it must display some kind of novel behavior beyond what it is programmed to do. The astonishing and fascinating thing about children and puppies isn’t that they learn, but that they learn (for the most part) on their own without specific instruction. A toddler (who isn’t chronically abused or neglected) will learn to walk without instruction, and trying to force him to learn generally doesn’t accelerate the process. A computer, on the other hand, has to be told explicitly how to do anything, even (or especially) how to parse natural language. It’s not clear when children become sentient–or indeed, that it occurs in some kind of discrete fashion rather than as a gradual awareness of self–but we can be pretty sure at this point that a computer program, even one that writes auto-interpreting code, isn’t aware or displaying volition beyond its instruction set.

That could be; our “free will” may be nothing more than the orderly processing of randomized perturbations in our neural matrix, and our sentience may be nothing more than an infinitely recursive loop of successively more comprehensive sensory data. “I think, therefore I am,” may be Descartes’ best-known tautology, but it is also a pragmatic view of cognition that assumes nothing beyond one’s own mental experience.

But if the computer doesn’t recognize the “textual data that is being thrown around” then it seems unlikely that it would be aware of us, or us of it. And its “free will”, while not under the control of one single intelligence, is merely a communal expression of the interests of Google users. Would it have any control over its “thoughts”? (For that matter, do we? Or do we just think we do? And if we “think” we do, how do we know what we think? Et cetera, ad absurdum.)

Meanwhile, I found this tiny door behind a filing cabinet in the Deep Storage file room.

It’s a portal. It takes you inside Anna Nicole Smith.

Man, it’s vacuous in there.

Then, after an interminable wait, you’re spit out into a ditch on the side of the Long Beach Freeway.

It’s not fun.

Stranger

:::Looks around:::realizes that the “net” may be alive and will keep him from moving this::::::::takes a chance::::::::