Artificial Intelligence

Any good articles or opinions on the future of AI, including fears and (reasonable) predictions?

Welcome to GD, WSMN! Not I, you can be assured, but it may not be stolen, merely mislaid. Try Lost Property.

A good tip is to use the top right button (Search) to get a feel for what has been discussed here before, and frame your Opening Proposition a little more specifically so as to develop a particular theme which might not have been covered.

A great deal of ground was covered in the recent thread entitled Moral Implications of AI.

If you are looking for predictions, I’d suggest Ray Kurzweil’s book The Age of Spiritual Machines. He’s far too optimistic for me, but there’s no reason most of his predictions cannot come true eventually.

Dr Anders Sandberg has contributed some good stuff to our site, particularly this (warning: science fiction):
http://www.orionsarm.com/historical/AI_Political_Science.html

Basically, AI minds could develop in myriad directions, and we really need to concern ourselves only with those that are interested in humanity, for good or for ill; it is possible that humanity will be irrelevant to a large proportion of the new mind types that emerge.

A number of fictional (& some non-fictional) takes on the subject of AI were gathered in The Mind's I, edited by Daniel C. Dennett & Douglas Hofstadter. Though it's over twenty years old, there are still some useful articles that make it a collection worth reading (like Turing's seminal test proposal). Moreover, there hasn't been a great deal of advancement in the last few decades in the sort of AI discussed in the book. It focuses more on the metaphysics of mind & consciousness, but those questions are no doubt an important part of AI.

thanks.

Also, I would highly recommend reading A. M. Turing's paper Computing Machinery and Intelligence. Yes, it's from 1950, but it's chock-full of interesting information (and predictions).
LilShieste

for an introduction to one of the more talked-about topics in a.i., you might wish to peruse a thread i started a while ago: Strong AI and the Chinese Room.

it discusses john searle’s “refutation” of “strong a.i.”, the idea that computers can think in the same sense that we do.
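
for a concrete feel for the thought experiment, here's a toy sketch in python (the rule book and phrases are invented purely for illustration): the program produces plausible-looking replies by matching symbol shapes against a lookup table, without ever representing what any of the symbols mean.

```python
# Toy sketch of the Chinese Room intuition: the "room" replies to strings of
# symbols by pure lookup. The rule book and phrases are invented for
# illustration; nothing here parses or represents meaning.

RULE_BOOK = {
    "你好吗": "我很好",            # roughly: "How are you?" -> "I'm fine"
    "你叫什么名字": "我没有名字",   # "What's your name?" -> "I have no name"
}

def room_reply(symbols):
    """Return the scripted reply for an input string, matching shape only."""
    return RULE_BOOK.get(symbols, "请再说一遍")  # fallback: "please say that again"

print(room_reply("你好吗"))  # prints 我很好, with no understanding anywhere in the program
```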

to get an idea of the state of the art itself, i would suggest picking up (you might find it in your local college library) a copy of russell and norvig's textbook "Artificial Intelligence: A Modern Approach." there are some great stories in the introduction and the appendix, and if you're into computer science, you can learn quite a bit about where the field currently stands.

at the moment, “artificial intelligence”, in computer science, generally refers to a group of categories including advanced search algorithms, planning, and machine learning.
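
as a deliberately tiny example of the first of those categories, here's a breadth-first graph search; the graph is made up purely for illustration, but the frontier-plus-visited-set pattern is the core of the classical search algorithms covered in russell and norvig.

```python
from collections import deque

# Toy breadth-first search over an invented graph, to illustrate the kind of
# "search" work classical AI courses spend a lot of time on.
GRAPH = {
    "start": ["a", "b"],
    "a": ["goal"],
    "b": ["c"],
    "c": ["goal"],
    "goal": [],
}

def bfs_path(graph, start, goal):
    """Return a shortest path (fewest edges) from start to goal, or None."""
    frontier = deque([[start]])   # paths waiting to be extended
    visited = {start}             # nodes already reached
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbour in graph[node]:
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(path + [neighbour])
    return None

print(bfs_path(GRAPH, "start", "goal"))  # ['start', 'a', 'goal']
```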

There was something called the Cyc Project going on years ago, but I haven't heard about it in a long time.

They are just making faster, more powerful von Neumann machines. These machines manipulate symbols according to a program. They don’t UNDERSTAND the symbols.

No Intelligence

Dal Timgar

If the symbols a program has learned can in turn shape how the program learns further symbols, then they are indeed UNDERSTOOD - they mean something to the program. They may not mean the same thing to you, but that doesn't matter.

The true test will be if the program can learn in such a way that it and humans can share common symbols. Therein lies the intelligence.
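
One loose way to picture "sharing symbols" is a program that learns, from co-occurrence alone, which object a human means by a word. This is only a toy sketch with invented data and invented function names, not anyone's proposed test of intelligence.

```python
from collections import defaultdict

# Toy sketch: the program learns word-object associations from episodes in
# which a human uses a word while an object is present. The learned counts
# then change the program's future guesses, which is the (weak) sense in
# which the symbols come to "mean something" to it. All data are invented.

counts = defaultdict(lambda: defaultdict(int))

def observe(word, obj):
    """Record that the human said `word` while `obj` was in view."""
    counts[word][obj] += 1

def guess(word):
    """Guess which object the human means by `word`, or None if never heard."""
    seen = counts.get(word)
    return max(seen, key=seen.get) if seen else None

observe("ball", "red_ball")
observe("ball", "red_ball")
observe("ball", "blue_cup")
print(guess("ball"))  # -> 'red_ball'
```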

AI will be here 10 - 15 years from now.

That's been true for the past 35 years or so, at least. That's when I took AI at MIT. I don't think there is anything inherently impossible about it (and I know all the standard refutations), but it is a lot harder than expected, and all the money goes into areas, like algorithm development, where there is an immediate payoff.

I say AI is here, now. The computer I'm typing this on is considerably more intelligent than, say, my dog. And don't give me that "That's only because it's designed that way" stuff either. Taking both sides of the Creationism coin for a minute, either we're designed to be how we are or we arrived there by a long series of coincidences. Either way, I fail to see how a computer being designed makes it any less intelligent.

Now, if we're talking about human-like AI, who says there will ever be such a thing? There's no reason for a computer to behave like a primate. It will always behave like a piece of designed hardware, which is often a better way to behave. A server that can take orders, manage inventory, and automatically send out shipping notifications and confirmation emails is already showing the practical intelligence of many humans applied to the same tasks.