In the Asimov universe the robots are programmed in the factory, and the 3 laws are an integral part of their programming. I don’t recall them having an RS232 port to allow someone to hook up a terminal to reprogram them.
Also, the 3 laws are so essential to the discipline of Robotics as understood in Asimov’s stories, that to make a robot without them, you’d have to recreate the science practically from the ground up. It’d be like trying to build a fully-functional computer without binary. Possible, but beyond the reach of any single maniac.
I too think that the laws are wildly impractical. So many actions are potentially harmful that any robot would be paralyzed by fear of harming a human. Say I ordered my robot to get me ice cream. Now, ice cream is a fattening food, and a steady diet of nothing but would definitely be considered “harm”, and my robot would balk at that. But what about daily? Weekly? Ever? Arguably, indulging in fatty foods at all could be considered harmful, but so could depriving me of a treat I enjoy. Similarly, would my robot refuse to drive a car, since the possibility that it could lose control and harm a human is greater than zero? How the hell are we supposed to give a robot a workable definition of “harm” that provides for all possible situations?
What is the qualitative difference between how a brain “works” and how a computer “works”? I would imagine a brain, at the heart of it, is following some sort of algorithm, and obeying some sort of “if x then y” procedure, albeit a highly complex one, isn’t it? What does a human chess-master do other than spend a lifetime learning strategies that work under various conditions, and implementing those strategies when called for? Original thought is not employed in a chess game; if that were the case, a rank amateur would be able to beat an experienced player simply through original thinking. What makes a good chess player is memorizing large amounts of information as to what response is effective in a given situation.
Sure he could - all he’d have to do is sweep off all the pieces and whack the experienced player in the head with the board. That’s something a computer would never think to do.
Sorry, no. In the limited universe of chess, there are far fewer alternate means of doing things, many of which have been explored, written down and studied by the experts. Any original thinking is likely to be in the regions of “chess space” not yet fully explored by the experts; any original thinking done by an amateur might be original to them, but was probably first thought of a hundred years ago.
It’s not like the real universe, which is complex and free form enough that there are any number of simple ideas that nobody’s thought of before. And even in the real world, creativity without knowledge will only take you so far.
I’m not sure it’s true that we’re no closer conceptually. Certainly I think any foreseeable heuristic approach to AI is not going to produce anything that can actually think and have an inner life as we know it.
But that’s not to say we haven’t made advances. Neural Network Processing - although now quite old hat - is still an interesting and I think promising idea - partly because it somewhat mimics the way the human nervous system is thought to work and partly because it’s self-organising.
That’s the key - if we want a machine to think, I think it’s got to be able to learn to do it for itself, just like humans do - we need a machine in which a mind can grow.
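As a toy illustration of that self-organising idea, here’s a single perceptron that adjusts its own weights from examples rather than being programmed with the answer. This is purely illustrative (the function it learns, the learning rate, and the epoch count are all arbitrary choices of mine) - real nervous systems and real neural-network research are vastly more complex:

```python
# Toy perceptron that learns the logical AND function from examples.
# Nobody writes "if x0 and x1" into it; the rule emerges from training.

def train_perceptron(samples, epochs=20, lr=0.1):
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            out = 1 if (w0 * x0 + w1 * x1 + bias) > 0 else 0
            err = target - out
            # The "learning" step: the weights adjust themselves
            # in proportion to the error they just made.
            w0 += lr * err * x0
            w1 += lr * err * x1
            bias += lr * err
    return w0, w1, bias

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w0, w1, b = train_perceptron(samples)

def predict(x0, x1):
    return 1 if (w0 * x0 + w1 * x1 + b) > 0 else 0
```

After training, `predict` reproduces AND without the rule ever being written down explicitly - which is the (very small-scale) sense in which such systems organise themselves.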
Intelligence must include an unlimited ability to learn, not just from past experience, but from the larger universe, as well as complete self-awareness.
But, we are. We are learning all the time. Every unfamiliar thing we encounter, every new idea someone presents, is incorporated into our body of knowledge. We may not act on that new information, but we’ve “uploaded” it.
Unless, you were referring to self awareness. If so, I can’t help you, except to say, sit, stay. Good boy!
Second page and no-one’s mentioned that the computer in 2001 was called HAL 9000?
My favourite “amateur’s original idea in chess” story concerns Alekhine’s deathbed confession. St Petersburg 1914, and he’s pestered by a Russian peasant who says he has found a way for White to mate in 12 from the starting position. Unable to get rid of the idiot, Alekhine sets the pieces up and they have at it. Twelve moves later Alekhine is staring at the board white-faced. “Do that again!”. He tries a different defence and is again mated in twelve. He hurries along the hotel corridor and fetches Lasker. Twelve moves later Lasker is mated. And then again. His eyes and Alekhine’s meet across the board…
“And then what did you do?” asks the doctor keeping vigil by the dying man’s bed. “Do?” whispers Alekhine, with his last breath. “Why, we killed him, of course.”
studying opening theory to understand the ideas, not to memorise
studying middlegame tactics to acquire pattern recognition
studying middlegame strategy to enable planning
learning endgame theory (this is the part that does use some memorisation e.g. fundamental rook + pawn positions)
There’s far less memorisation than amateurs think in top-class chess. Blindly playing a set of learnt opening moves is going to run into an opening innovation, for example.
In almost any position, there will be a number of guidelines, but the key is deciding which is the most important and then analysing the variations.
The reasons why rank amateurs don’t beat experienced players include:
less experience in opening positions (a master will have played many games in a variation and understand the principles)
less knowledge of tactics
less knowledge of strategy
less practice in calculating (resulting in more mistakes and taking more time)
less knowledge of basic endgame wins
I suppose you need to define what you would accept as ‘original thought in chess’.
But certainly the Benko Gambit, for example, (1. d4 Nf6 2. c4 c5 3. d5 b5) was an astonishing development. Black sacrifices a pawn at move 3 for some positional compensation?! Amazing!
The human brain functions very, very differently from a von Neumann computer. It’s massively parallel and asynchronous, so it’s not really accurate to think of it as executing an algorithm.
Part of the reason that computers can beat humans at chess is that chess is a problem that is very amenable to an algorithmic attack. Computers are very good at doing some simple, stupid thing over and over again millions of times. Humans are lousy at that sort of computation. It takes tremendous discipline for us to think algorithmically and we can’t sustain it for long.
Now, with enough processing power we will probably some day be able to simulate a brain with a von Neumann architecture. In such a machine well-defined algorithms would be used to simulate the neurons themselves, but the actual cognition would not be the result of higher-order algorithms but rather an emergent behavior of the system as a whole.
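The “well-defined algorithm simulating the neurons themselves” part can be sketched with a leaky integrate-and-fire model - a standard textbook simplification, not a claim about how real neurons or real cognition work, and the threshold and leak values here are arbitrary:

```python
# Leaky integrate-and-fire neuron: a simple, fully deterministic update
# rule. In a simulated brain, any "thinking" would be an emergent
# property of billions of such units interacting, not something coded
# into any one of them.

def simulate_neuron(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # decay, then integrate
        if potential >= threshold:               # fire and reset
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

simulate_neuron([0.5, 0.5, 0.5, 0.0, 0.5])   # → [0, 0, 1, 0, 0]
```

The individual unit is utterly algorithmic; the interesting question is what a vast network of them does collectively, and that behavior is nowhere written in this code.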
Don’t need to be a chess master to understand that it involves learning responses to situations as opposed to abstract philosophical thought.
Well then give a chess idea that does not constitute an applied set of rules. You’re right that I’m not a chess master, so I only know some very fundamental things, such as maintaining capturing power over the center squares. This might seem like an abstract idea, but it is actually just making calculations and applying rules. I’m sure others ideas are more subtle, but are there any that involve original abstract thought? I’d like to hear one.
And how is this not “learning strategies that work under various conditions, and implementing those strategies when called for”, as I described it in my earlier post?
I think you rather missed my point. I didn’t say chess strategies are simple, but still, it always involves analyzing the position, calling on a repertoire of learned strategies, and choosing one of those strategies. There is nothing magical about having many choices with which to respond to a given situation and picking one of them based on available information.
Exactly. Less practice and less knowledge. In other words, less memorization. It’s the algorithms the chess master has developed in his mind, not some intrinsic quality of a brain. IOW, the software, not the hardware.
I can’t define it because I don’t think it exists. I want the people who are espousing the idea to define it and give examples of it. People are saying that humans play chess differently than computers. That’s true, but I don’t see any evidence that this difference is a fundamental qualitative one.
Could you dumb this down for me? I wasn’t aware that there was any evidence that the brain doesn’t work basically as a “flow chart”. My nose itches, the nerves send a signal to my brain, and my brain outputs a “scratch nose” signal. When my nose itches, I don’t jump up and yell “Yahtzee”, unless something got damaged in there. So it’s parallel, and it’s highly complex, but is there evidence that, were we able to analyze it in enough detail, it wouldn’t boil down to a whole lot of “if x then y” pathways? Maybe operating in parallel, but operating nonetheless?