None of these things are AIs in the sense meant here. A true AI will be able to play chess - but perhaps not as well as a specialized chess program.
Dr. A was hardly the first writer to describe a non-Frankenstein’s monster AI. The Binders in the story called “I, Robot” did it before, and I wouldn’t swear they were the first. In any case, the robot series is not a good example for you. Remember it ends up with an AI manipulating the history of the galaxy for our own good. Exactly my point.
IIRC the artificial beings in the tale you are referring to are extraterrestrial. If we are dealing with what we can actually do in the future with artificial beings, then leaning on Williamson’s story amounts to arguing from Phlebotinum. I do not think we should resort to assorted plot fuel from fiction, or applied Phlebotinum, to deal with this issue.
BTW, I did point out that I was referring to the unfounded fears the bulk of Asimov’s stories address, not to the plots of his tales. Dealing with that unfounded fear is my point; that some of Asimov’s other tales were less optimistic has already been noted.
Seems like a “no true Scotsman” argument, although I think I do remember seeing a robot Scotsman in Futurama.
Again, if the assumption is that AI and robotics people are concentrating on general-purpose AI, then this worry would make sense. In real life, virtually none of them are.
Tricky to guess. The AI might have more internal variability of cognitive resources. Just as you and I can improve our game by concentrating really hard, the AI might improve its game by pulling huge chunks of CPU cycles away from some other task and dedicating them to visualizing chess moves.
(I’m leaving aside the obvious possibility that the AI might simply connect to a dedicated chess-playing system as a peripheral. But, then again, people spend long hours memorizing chess openings, and that is a kind of external processing. Is someone who has an encyclopedic knowledge of chess games really “playing” the game, or is he applying external knowledge to the problem? If the AI attaches a chess-playing “expert system” to itself – incorporating it into its actual mind – is that still “playing” or just “tool-using?”)
Define ‘our’. I bet there’d be a crap-ton of people who would agree with the AI proposition that it is in the best long-term interest of humanity for there to be 5 billion fewer of us, so long as those 5 billion were from ‘over there’. And I bet some people would be willing to let the AI get on with it.
I’m not trying to pull an argumentum ab auctoritate, but I am a demon summoner, I mean AI researcher. Frankly, based on my experience with AI, we don’t have anything to worry about for a long time. AIs currently struggle with very large problems; one of my primary tasks is finding the simplest problem my AI can solve and then adding extensions until it can’t solve it anymore. AIs also generally struggle with a large number of variables. For example, one class of AI, the multi-objective optimizer (an AI that tries to find an optimal solution while balancing several objectives, e.g. buying the most nutritious food while minimizing expense), struggles with more than 3-4 objectives. Of course this will get better with faster computers, but a truly powerful malignant AI is a long way off. I’ll be good and dead by then, so I don’t care (I jest). That said, an AI accidentally causing problems is a very real possibility - the hyper-speed autotrading on the market, for example, is at least partially AI-driven.
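To make the many-objectives point concrete, here’s a toy sketch of my own (nothing to do with any production optimizer): with random candidate solutions, the fraction that is Pareto-non-dominated grows quickly as you add objectives, so the optimizer has less and less basis for preferring one candidate over another.

```python
import random

def dominates(a, b):
    # a dominates b if it is at least as good on every objective
    # and strictly better on at least one (lower = better here)
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep only the candidates that no other candidate dominates
    return [p for p in points if not any(dominates(q, p) for q in points if q is not p)]

random.seed(0)
for n_objectives in (2, 4, 8):
    pop = [[random.random() for _ in range(n_objectives)] for _ in range(200)]
    front = pareto_front(pop)
    print(f"{n_objectives} objectives: {len(front)}/200 candidates are non-dominated")
```

By eight objectives nearly every candidate is “best” at something, which is roughly what the struggle looks like from the inside.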
The problem is two-fold. There is the problem of bugs, always an issue in software, but even setting bugs aside, one issue with AIs is that they don’t necessarily explain why they’re doing something, and their outputs can easily be misunderstood. One thing we have to do in AI research is show (at least to ourselves) that the AI is actually doing what we think it is doing. This might be a little hard to understand, but I’ll try to explain; if anybody has follow-up questions I’ll be glad to answer them. An AI generally outputs a set of numbers, and we assume those numbers have meaning. For example, I’m building an AI that can identify people’s cognitive characteristics from their behaviours. My AI produces numbers. Are those numbers cognitive characteristics, or is the AI (unknowingly) solving a different problem whose numbers happen to look reasonable, so that I assume it is solving the problem I think I coded it to solve? To validate this, I apply known cases to the AI and see if it produces the known results. If so, I can say, yes, it is finding human cognitive characteristics. However, this isn’t always done, especially in industry, and in some cases it might not even be possible.
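In code, the “known cases” check is nothing fancier than this sketch (the names and the toy model are made up for illustration, not my actual system):

```python
# Feed the AI inputs whose correct output is already known and see
# whether its numbers line up before trusting it on unknown data.

def validate(model, known_cases, tolerance=0.1):
    """known_cases is a list of (inputs, expected_output) pairs."""
    hits = 0
    for inputs, expected in known_cases:
        predicted = model(inputs)
        if abs(predicted - expected) <= tolerance:
            hits += 1
    return hits / len(known_cases)

# Toy usage: a "model" that is supposed to estimate a cognitive score.
toy_model = lambda behaviours: sum(behaviours) / len(behaviours)
known = [([0.2, 0.4], 0.3), ([0.9, 0.7], 0.8), ([0.1, 0.1], 0.1)]
print(f"agreement with known cases: {validate(toy_model, known):.0%}")
```

The whole trick, and the part that often gets skipped, is having known cases to check against in the first place.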
So, overall: I’m not worried about an AI that will destroy humanity, but rather about buggy or misunderstood AIs harming human enterprises.
Yeah, the unfounded fears that he (and the Binders) were addressing were the covers of the pulps where a robot was carrying off a scantily clad blonde for Og knows what purpose. Oiling him, I guess. We’re dealing with more subtle issues here.
Where the humanoids came from is irrelevant. The point is that they enslaved us for our own good. Anyone could probably rewrite it with locally built robots, but I think he wanted to set it in his present day, and back then no one could plausibly have created such things.
When I took AI just over 40 years ago there was a list of problems that were being worked on. Nearly all have been solved - the automated solution of complex mathematical equations, routing trips, a chess program that could beat a grandmaster, etc. I don’t think anyone would say we have true AI today - not even Watson. We’ve got excellent specialists but no generalists.
That’s an excellent point. Most visions of AI come from the mainframe days, when you had a hunk of code acting like an intelligent being and that was that. But we’ll probably see an AI that can download apps as needed. An AI wishing to play chess can load a chess-playing app at the appropriate level.
As it turns out one of the best things that ever happened to me was failing my Lisp test at the MIT AI Lab. It drove me to less frustrating lines of research.
When you say you code to solve a problem, are you coding to solve the problem or coding heuristics which will then be used to solve the problem? What I’ve seen in many areas is that initial solutions using heuristics become specific solutions using faster algorithms as the problem gets better understood. Researchers committed to things like genetic algorithms sometimes back up and implement them for solved problems, but they never get any traction.
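To illustrate what I mean by applying a heuristic to a solved problem (a toy example of my own, not anyone’s actual research): here’s a genetic-algorithm-style search grinding toward an answer we already know in closed form, the maximum of -(x-3)^2 at x = 3.

```python
import random

def fitness(x):
    return -(x - 3.0) ** 2  # known optimum: x = 3

random.seed(1)
population = [random.uniform(-10, 10) for _ in range(30)]
for _ in range(50):
    # keep the fitter half, refill with mutated copies of the survivors
    population.sort(key=fitness, reverse=True)
    survivors = population[:15]
    population = survivors + [x + random.gauss(0, 0.5) for x in survivors]

print(f"GA's best guess: {max(population, key=fitness):.3f} (exact answer: 3.000)")
```

For a problem like this the heuristic obviously can’t compete with just writing down the answer, which is the traction issue in a nutshell.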
The AI could turn into the greatest hacker ever; that’s one danger, anyway. It has the speed and the time and the patience to get into anything it wants to.
As far as an off switch … in theory, the hacker AI could get into the systems of support vendors like electric companies and issue purchase and service orders to, say, install some new power cables somewhere that wouldn’t be obvious to someone sitting in the same room as the server. The AI could redesign whatever parts of itself it thought were vulnerable, and then have humans come in and do bits and pieces of the redesign work until … the off switch doesn’t work any more.
How possible this is, I dunno, but it’s probably at least slightly so.
The other thing that’s missing is that someone will be experimenting and building hardware and stumble into AI without even considering putting in safeguards against an evil AI.
At the moment I’m mainly working on applications of AI in education. So my work is on solving a particular problem - in this case, using AI to identify students’ cognitive characteristics from their behaviours when they use educational software. I’m taking specific existing algorithmic approaches and measuring their success at solving the problem. I’m also proposing two heuristics that provide better solutions, but you’ll have to wait until the paper is published for the details.
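In outline, the “measuring their success” part looks something like the sketch below. The data and classifiers here are generic stand-ins (synthetic features in place of real student behaviours), since the actual dataset and heuristics are for the paper.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier

# Pretend each row is one student's logged behaviours and the label is
# a cognitive characteristic known from some validated instrument.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=0)),
                    ("k-nearest neighbours", KNeighborsClassifier())]:
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f} over 5 folds")
```

Cross-validation like this is just one reasonable way to score the approaches; the point is only that “success” gets a number attached to it so the approaches can be compared.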
Here’s a question I have: we can dissect very simple, primitive organisms and see how their very simple nervous systems and brains work, can’t we? What’s stopping us from studying the simplest organisms, then the next higher up, etc., until we get to mammalian brains, and then to human brains? The human brain is too complex for us to fully understand, but what’s stopping us from more or less just working our way up the ladder, from the simplest brains to more advanced brains?
I agree with that proposition (although I do not agree with the idea that the reduction in population should be from “over there.” It should be fairly distributed among all humankind.)
Also, it doesn’t have to be quick, nasty, or catastrophic. Just persuading people to have fewer children would do the trick.
An AI that is much smarter than we are will also probably be much more persuasive than our current leadership.