Artificial Intelligence - yea or nay?

Just because we can do something does not mean we should.
Not to get too Matrix-y or Terminator-esque, but do you think we (humans) should keep working toward creating AI? Sure, artificial insect intelligence or dog intelligence may be a very useful tool, but what if we can create AI robots with equivalent human intelligence (or better)? Something that smart could build more, and possibly better, copies of itself. In effect, we would have created a smarter, stronger, faster, etc. new artificial species (sounds like a Star Trek episode or two). Would we need to program in Asimovian ethics? (How many sci-fi references can I squeeze in here?) Do you think there would be more benefits or risks?

AI is going to be an absolute necessity, and I would say a result of, the exponential-and-then-some increase of information over the past century. The need to organize, search through, and compile this information is spearheading some of the most powerful computer programming going around today. ColdFusion and Google are the most readily available examples of “Wowser” technology generally available, to my mind. So yes, we’re going to need AI eventually. Depending how you define it, we already have AI.
The problem, of course, is that we won’t stop making things bigger, better, faster, stronger (etc.) until one of our little projects wipes us out. Do we really need nuclear warheads that can wipe out half a hemisphere? Of course not. But once the technology is out there, it becomes a challenge (and usually a profitable one) to take the technology to outrageous extents. We already have AI that can win at chess. Why not AI that can beat us at ‘our’ own games: namely virology, conflict, and information control?

A bit nihilistic, but we’ve all been doomed for a while now. Just not anytime soon, probably, so you do have to keep going to the gym. And put down those Twinkies!

Yep, that’s my point. We create our future and we need to make choices before it’s too late. AI is great, but it seems unclear whether hyperintelligent AI would be a benefit or a hazard.

The scientists involved in the development of nuclear weapons (Oppenheimer, Einstein, etc.) seem to have regretted their work on them.

AI is controllable, as humans decide exactly what goes into each electronic brain. If we wish to make one like us, we will, but we will make it so it cannot destroy us. How would a single program running on a small mainframe ‘rule the world’ anyway? It’d be a big event if it learned to write its name.

And the large-scale AI systems, like missile defense grids and power routing systems, would be so non-human as to be not in conflict with us. How can you compete with an entity you only realize as numbers in your databases and biases in your programming? The AI systems large enough to pose a threat would know very little of humans, simply because such knowledge would not be very helpful in their tasks. They would not be human enough to possess a personality or any will to self-perpetuate. Each would be very much an ‘it’, as it would exist solely to perform a specific task. Why would you give a data-miner, one of the most profitable AI paradigms, any sort of free will?

The human-like AI systems will be kept small, used as companions once they become advanced enough, simply because there is no reason to give them much power. The AI systems with power will be kept non-human simply because there is no reason to make them human-like.

The world doesn’t have enough of the real stuff, now you want to fake it?

I forget where it came from, but someone once said, “The question of whether or not computers can think is precisely as interesting as the question of whether or not submarines can swim.” It’s a bit off topic, but I thought it was still relevant.

Argue with the wind. Someday, computers will surpass humans. There’s no way around it. It’s progress, it’s the future, and it’s a certainty. Arguing about it or even pressing for laws against it will only delay the inevitable. Our inventions are our grandchildren.

I’m agreeing with Bill, but not for the same reasons (I’m not a fatalist). AIs are the most effective way to do many things in a computer age. If you want something done that is very time-intensive and dull but requires some learning and information retention, AIs are your best option. They don’t get tired, take ill, or forget things (short of preventable crashes, of course). They also work cheap.

Very soon, all weather forecasting systems will include advanced AI running chaos models and continually accepting input, revising those models as the info comes in. Those systems will give very accurate predictions, pushing the five-day barrier with reasonably accurate predictions for perhaps a week in advance. AI systems will be able to do things like that much better than humans because a human would get distracted by a passing human, miss some data, and ruin everything. AIs like that would not know of humans, as having input on humans would not help them complete their programmed task. AI systems would do traffic control and route cars smoothly and efficiently based on patterns of traffic, both programmed and observed.

Being afraid of any AI is, frankly, more than a little naive. Computers will not ‘rule’ us or ‘own’ us because we’d never program them to do any such thing. They’d simply increase the profits of many industries.
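(To put the chaos-model point in concrete terms, here’s a minimal sketch - just the textbook Lorenz equations with a crude Euler integrator, nothing like a real forecasting system - of why those models have to keep revising as fresh observations come in. Two starting states that differ by one part in a million end up wildly apart, which is exactly the sensitivity that caps forecast horizons at around a week.)

```python
# A toy sketch, not a forecasting model: the classic Lorenz system,
# integrated with plain Euler steps. Two near-identical initial states
# diverge -- the sensitive dependence that limits weather prediction.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + sigma * (y - x) * dt,
            y + (x * (rho - z) - y) * dt,
            z + (x * y - beta * z) * dt)

a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.000001)  # perturbed by one part in a million

for step in range(1, 40001):
    a = lorenz_step(a)
    b = lorenz_step(b)
    if step % 8000 == 0:  # report every 8 time units
        print(f"t = {step * 0.001:4.1f}   |x_a - x_b| = {abs(a[0] - b[0]):.6f}")
```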

First of all, Deep Blue was hardly intelligent; it simply punched a HELL of a lot of possible chess games through a bunch of stacked processors. Many of the tasks that AI would be applied to, such as traffic control or weather prediction, are so limited in scope that the AI would be totally incapable of applying its power beyond those applications. The level of AI that actually has the capability to perceive and move beyond a very confined environment has no innate reason to attempt to overthrow humankind. Let us assume that even if AI ever does surpass human intelligence, not all AI systems will want to overthrow the human race, and they will probably balance out “sociopathic” AI systems to humankind’s general advantage.

But why would there ever be large sociopathic AIs? I can see a few psychiatrists making a seriously disturbed AI for study purposes, but nobody with access to it would allow it to exist beyond a very limited environment, perhaps one closed-off mainframe with no access to outside computer systems. The closest to ‘insane’ a large AI would come is if a few program files became corrupted and it began to behave in strange ways, such as predicting an area will have 1000% humidity. Easily correctable, even by error-checking AI programs.

threemae wrote

And what, threemae, do you propose happens inside Kasparov’s mind? Do you believe it’s magic that goes beyond mere circuitry?

By the very definition of “thinking” as in “Kasparov thinks about chess”, Deep Blue thinks about chess.

Ahh… one of my favorite subjects…

Touchy subject, the ethics of AI. First of all, though I never say never, I think we are a long, long, long way from having AI strong enough to replace human intelligence in a generic sense. Sure, there are some pretty good efforts in very domain-specific genres, but contrary to propaganda you may have heard, it’s not organized like human intelligence. Well, in as much as we know how human intelligence is organized… but I digress.

Let’s assume that machines will one day exhibit intelligence that equals or surpasses our own. One question we must ask is, at what point does it cease to be artificial? Unless we define artificial as the medium it exists in, any intelligent entity should have the same liberties that we all enjoy.
Derleth said:

By the same argument we should be able to render humans controllable. If true intelligence is to be realized, one ingredient will be essential - random chance. If you have randomness, you have an infinite susceptibility to chaos (as in chaos theory). To assume we could control the intelligence and behavior of an artificial lifeform is as silly as to assume we can control the intelligence and behavior of future criminals…

Comparing data-mining AI to true intelligence is like comparing a grain of sand to a planet.

This is silly. Few people will be able to afford the first such systems. They will go where the money is. They will be used for dangerous missions and research where human-like intelligence is required.

Max wrote:

Ahh… but it won’t be fake, will it? It will merely be new. Yet another evolutionary step.
Derleth:

Because the history of human behavior is one of repression. A truly intelligent entity would want to have free will - we would demand that it do our bidding. One man’s sociopathic response is another machine’s freedom fight.
Bill posits:

I think this is a very poor assumption. By that argument, as Einstein thinks about math, my Casio calculator thinks about math. The truth is, however, that we don’t know exactly what goes on when Kasparov thinks about chess, but we ARE 100% sure that what Deep Blue does is NOT what Kasparov does, or rather, not all of what Kasparov does.
And finally, Bill echoed the feelings of some other folks when he wrote:

I disagree. It’s not inevitable.

First, there’s no guarantee that the human race will last long enough to achieve that level of technology.

Second, it’s not clear what is going on in the conscious mind. We may never achieve a level of physical and biological understanding sufficient to duplicate or improve on the mechanisms at work in the human brain. There’s no indication that merely building bigger and faster computers with access to more and more information will ever approach the intelligence of a human. That’s a bit like saying that we keep building faster and faster vehicles, so eventually we must build a vehicle that can reach and exceed the speed of light.

Third, Moore’s Law doesn’t apply in this domain. In fact, there is considerable evidence that Moore’s Law may be reaching its demise in the silicon technology arena as we approach some physical limits that seemingly cannot be overcome. Meanwhile, Moore’s second law (as it is sometimes called) suggests that as we get closer and closer to these physical limits, the cost to manufacture is growing exponentially as well. We may reach the economic limits before we reach the technological limits.
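(Just to put rough numbers on the exponential claim - a back-of-the-envelope sketch, assuming the commonly quoted 18-month doubling period, which is a rule of thumb rather than Moore’s exact statement.)

```python
# Back-of-the-envelope Moore's Law arithmetic. The 18-month doubling
# period is the popular rule of thumb, not a law of nature.

def transistors(initial, years, doubling_years=1.5):
    return initial * 2 ** (years / doubling_years)

for years in (3, 6, 12, 24):  # starting from a 1-million-transistor chip
    print(f"after {years:2d} years: {transistors(1e6, years):>18,.0f}")
# 24 years at this pace is a 2**16 = 65,536-fold increase -- which is
# why the physical and economic walls mentioned above start to matter.
```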

I believe that true ‘artificial intelligence’ won’t come as a result of bigger and faster computers, but rather as a result of innovative new computing architectures, completely unlike anything we’ve dreamed of to date. The question remains: can we dream of them… eventually?

Max:
Why would we give probes or weapons systems freedom? No, there will be no need for them to have any sort of ‘freedom-loving’ systems. They will simply be intelligent enough to do one task, and that’s it. Specialization is for insects and robots. Specialization kills generalized intelligence, like what humans have. Space probes, to use your example, will have enough freedom to decide what to do within strict constraints. They may decide which parts of Mars look most promising as far as colonization goes. They may not decide that Mars is a hellhole and they’d rather be in Cancun. See? Limits.

Chaos theory is all well and good within its limits, but industrial, scientific, and medical robots will not require such complexity. In fact, such complexity would be counterproductive, as more complex systems can fail in more ways. Look at the difference between severe psychosis in humans and a very buggy program. Which would you rather try to fix? The smaller companion programs, more along the lines of what you want to discuss, may well require chaos programming, but they will not have the hardware to pose a serious threat. If one demands too much, the mainframe it runs on will be unplugged, doubtless causing severe brain damage - possibly a doozy of a Korsakoff’s if information encoding (memory) is tied into the chaotic personality programs (doubtless the only way to go to make human-like AI) and the personality files get damaged.

Getting back on track, human-like AIs may well demand equality. I see nothing wrong with granting it to peaceful, sentient ones. However, there is a difference between sentient and merely intelligent. A dog can be said to be intelligent: it possesses the ability to learn and retain information. However, a dog is not sentient, in that it has no sense of self or ego. A dog could never comprehend the concept ‘I am’, and would not be able to extricate a sense of what it was as an individual from its hardwired drives and instincts.
Regarding Moore’s Second Law, it only applies as long as we use silicon. I agree that Moore’s First Law will soon be obsolete as it applies to silicon. However, as Scientific American will bear out, we are doing amazing things in data-processing fields as diverse as optical computers and molecular switches. Moore’s Laws will be replaced by something even less restrictive.
Links of interest:
http://www.scientificamerican.com/
- particularly: http://www.scientificamerican.com/2000/0600issue/0600currentissue.html
- Feynman’s speech ‘There’s Plenty of Room at the Bottom’, one of the first on the possibility of nanotechnology. The page also links to K. Eric Drexler’s book Nanosystems: Molecular Machinery, Manufacturing, and Computation, an introduction to nanotechnology:
http://www.zyvex.com/nanotech/feynman.html

JoeyBlades wrote

True. But that’s just a diversion from the real discussion. If humans keep going, they will eventually build machines that surpass them in all measurable ways.

Well, technically it’s true that the exact algorithms and the underlying circuitry are different. But that’s a technicality. Think about this: how do Karpov and Kasparov think about chess? Sure, the hardware is very similar. But you can bet that huge hunks of the software are vastly different. Does that imply that one of them “thinks” about chess and the other doesn’t? Of course not. Thinking is the process of processing information to a given end. And that is what Deep Blue and Kasparov do. They both think about chess. And yes, what goes on in a Casio is related to what went on in Einstein’s brain, although obviously on a different scale.

Au contraire. Here’s a bit of proof that a machine can be built which will match the human brain: The human brain. It exists. We’ve seen it. We know it’s smart. We know it’s made of molecules. We know there’s no magic inside.

It’s only a matter of time before we can understand and build one. And knowing the human brain exists, and knowing a bit about its workings, it’s not hard to believe that a better one can be built.

Of course; that’s a given. Did you think we were talking about building a smarter-than-human machine with leftover 486s?

True, but we didn’t design or build the human brain. Proving that a machine as smart as a human brain can be built is not the same as proving we can build one. I majored in cognitive science, and our understanding of how our brains work on the process level is only a few steps above phrenology. We barely have a toe-hold in the basic perceptual/motor systems, and our models of higher cognitive functions are honestly not far removed from magic.

I’m not saying it can’t be done, but it won’t be in my lifetime.

Dumbguy wrote

I agree; quel dommage (quite a shame). I think we only missed it by a few generations; so close and yet so far.

Bill, the important difference between how Deep Blue thinks of chess and how Kasparov thinks of chess lies within the reasoning. When Kasparov thinks about a chess game, he does not systematically try moving a single piece and then play out every possible chess game from there. Certain solutions often appear to him in the form of realizing which are likely to be the most influential pieces on the board. This method is totally different from how Deep Blue thinks of chess. Deep Blue simply tried to think of every possible chess game in existence and play from that. This is a different way of thinking from Kasparov’s, and it varies in many important ways. Primarily, the programming of Deep Blue provides no chance for learning on its own. Also, it provides no creativity or adaptability beyond the chess board. These differences are more than a mere “technicality”.

threemae wrote

Mostly true but irrelevant. I say “mostly” because Deep Blue doesn’t just “try every possible game”. He has smarts that enable him to discard unlikely situations and focus on better ones (more on that pruning trick in the sketch below). But irrelevant, because thinking differently doesn’t mean not thinking. As I say, the way that Kasparov processes chess information and the way that Karpov does are no doubt vastly different. Would you say that one thinks and the other doesn’t?

Incorrect. Deep Blue not only learns by being fed great chess games by its programmers, but it also learns on the fly and adapts to its opponent.

I think what you mean by this is that it can’t, for example, make a cheese omelette. Well, that’s true, but I don’t see the point. We were talking about “thinking about chess”, weren’t we?
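(A side note on the ‘discard unlikely situations’ point above: that trick has a standard name - alpha-beta pruning on top of minimax search. Here’s a stripped-down sketch on a toy game tree; it is not Deep Blue’s actual code, which also leaned on custom chess hardware and a hand-tuned evaluation function.)

```python
# Minimax with alpha-beta pruning: search the game tree, but skip any
# branch that provably cannot change the outcome. A toy sketch only.

def alphabeta(node, depth, alpha, beta, maximizing, children, evaluate):
    kids = children(node)
    if depth == 0 or not kids:
        return evaluate(node)
    if maximizing:
        value = float("-inf")
        for child in kids:
            value = max(value, alphabeta(child, depth - 1, alpha, beta,
                                         False, children, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:
                break  # opponent would never allow this line: prune it
        return value
    value = float("inf")
    for child in kids:
        value = min(value, alphabeta(child, depth - 1, alpha, beta,
                                     True, children, evaluate))
        beta = min(beta, value)
        if beta <= alpha:
            break  # we would never choose this line: prune it
    return value

# Tiny demo tree: interior nodes are strings, leaves are scores.
tree = {"root": ["a", "b"], "a": [3, 5], "b": [2, 9]}
children = lambda n: tree.get(n, []) if isinstance(n, str) else []
evaluate = lambda n: n if isinstance(n, (int, float)) else 0
print(alphabeta("root", 2, float("-inf"), float("inf"), True,
                children, evaluate))  # prints 3; the 9 leaf is never examined
```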

For any particular thing that humans do, machines can now or in principle do it better: cars move faster, cranes lift heavier loads.

Likewise, it’s reasonable to expect that particular expert systems will eventually surpass humans in every field, in exactly the way that Deep Blue beat Kasparov at chess - by a more powerful method (their functional equivalence does not entail ‘mechanical’ equivalence). Medical expert systems will be able to diagnose patients more accurately; genetic algorithms will find better designs for bridges and skyscrapers.
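(For the curious, a bare-bones sketch of what a genetic algorithm does: keep a population of candidate designs, select the fittest, breed them with crossover, and mutate. The one-line fitness function here is a made-up stand-in; a real bridge-design system would score each candidate with an engineering simulation.)

```python
# Bare-bones genetic algorithm: selection, crossover, mutation. The
# fitness function is a toy stand-in for a real structural simulation.
import random

def fitness(bits):
    return sum(bits)  # toy objective: more 1s is "better"

def evolve(pop_size=40, genome_len=32, generations=60, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]            # selection
        children = []
        while len(survivors) + len(children) < pop_size:
            mom, dad = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = mom[:cut] + dad[cut:]
            child = [b ^ 1 if random.random() < mutation_rate else b
                     for b in child]               # mutation
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print("best fitness:", fitness(best), "out of", len(best))
```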

What’s the point of a general AI, though? The kind of monstrous AI that’s a stock villain of science fiction serves little imaginable purpose, and seems very uneconomical, when specialised systems would do the same job in each case, cheaper.

Mechanically, we don’t build general purpose robots: it’s cheaper and more effective to build specialised machines directly suited to their task. Why wouldn’t expert systems be the same? In what possible context could an aggregate of expert systems come together to be the exact sort of superhuman tyrant we all seem to fear? Machines surpassed human physical abilities long ago, but no one would say that we’ve abdicated our physical sovereignty to machines.

Hansel, you’ve just restated what I’ve been saying all along, and perhaps because of how you said it, people will understand. I think specialization is the wave of the future as far as machines are concerned. And as machines become more specialized, humans can do more general things - you know, what would be considered counterproductive and unprofitable today. As there will be no room for humans in unskilled professions (already happened; look at assembly lines) or semi-skilled ones (nearly happened; there are already diagnostic programs [for humans and machines] and one machine that can do one specific surgical procedure very well), humans will need to get more educated to fill less specialized, more skilled jobs, such as writing, perhaps, or musical composition.