I should think you’ve got that covered, what with being God and all that.
The far more important hire was Hinton. Not sure what exactly their ultimate goal is but it shows they are pretty serious.
AI, plus self-evolving, ecology-engineering robotics and nanorobotics.
In case the AI overlord reads this from its data banks down the road
Hi, robot overlord, I know you got the whole skynet exterminate-the-humans thing going on, but I helped you out, right, so how about just giving me my own island or something, and some women, I’ll even settle for androids, OK?
Become an intergalactic Commander, and change your name to Shepard
Wesley Clark writes:
> By whatever metric you use I do not think cognition is limited to what biology
> developed via natural selection. I’m sure there are many factors of higher
> problem solving skills that can be created.
No, you still don’t understand. There’s no reason to think that a metric for the term “intelligence” can be applied to anything other than human beings. There’s no reason to think that it can be applied to human-created computers, let alone alien races, assuming that such things exist. Intelligence, as something measured by I.Q., is an arbitrary measure that has been invented in the past century or so and is claimed to be useful for classifying humans. There’s no reason to think that we could rank other intelligences on any such scale. There’s no reason to think that there is such a scale. (While measurements of quantities like length, time, mass, energy, etc. have some reality, assuming our conceptions of physics make sense, intelligence is a quantity that corresponds to nothing physical.) Assuming that there are “many factors of higher problem solving skills” beyond human ones is arbitrary.
Sure, so you need to get beyond the hunter-gatherer lifestyle to support a few million dumbasses, in order to create a few Fritz Habers.
A very loose metric, such as, “Damn, that thing’s smarter than I am!” could be valid. If it can solve problems faster than I can solve them – and if it can solve problems I can’t solve! – then it seems reasonable to say it’s “smarter.”
This echoes the old “chess playing” arguments. First they said a computer can’t play chess. Then they said it can’t play it well. Then they said it can’t play at Grandmaster level. And they were wrong every single time. When it comes to the game of chess, computers are smarter than we are.
Although I had heard of him before, I looked up his Wikipedia entry. Wow, what a contradictory legacy: a technique that helps feed half the world’s population, but also the ‘father of chemical warfare’… :\
I disagree. They are simply faster at a brute force algorithm, nothing “smart” about it.
“Smart” is when you don’t brute-force it, and we apply the same criterion to humans.
When asked for the sum of the integers from 1 to 100, if you brute-force it, we don’t call that smart; if you intuitively realize the formula, then that’s smart.
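A quick sketch of the distinction (the function names are mine, chosen for illustration): both approaches below give the same answer, but one grinds through every term while the other uses Gauss’s closed-form formula n(n+1)/2.

```python
def sum_brute_force(n):
    """Add the integers 1..n one at a time -- correct, but no insight."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_gauss(n):
    """Use the closed-form formula n*(n+1)/2 -- the 'smart' shortcut."""
    return n * (n + 1) // 2

print(sum_brute_force(100))  # 5050
print(sum_gauss(100))        # 5050
```

The brute-force version takes time proportional to n; the formula answers in constant time no matter how large n gets, which is the kind of leap the comment is pointing at.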
Which is why I said cognition. The ability to understand oneself and one’s environment, and to alter both to achieve goals, is something machine intelligence will get better at this century, until it surpasses humans in virtually all areas.
Well… Okay. And when computers can do that, then they will be smart. And if they can do it in cases where we stumble and don’t see the answers, they’ll be smarter. Enough of a difference, and…singularity?
(No one can say that this will happen, but it’s wrong to say it can’t happen. We’ll probably have fusion power some day too. Just because it’s “been twenty years away for the last forty years” doesn’t mean it will be “twenty years away forever and ever.”)
Like one which would flag “These pants are too loose in the waste”?
Or “We’ll mete at the Courthouse”? (I’ve been looking for an excuse to use that word for years - thank you!)
See WordPerfect - it beat Word by a mile - in 1990.
And for those who think the Singularity will be wonderful:
See “The Forbin Project” (1970), or “Hide the nukes REALLY WELL”
But the problem with The Forbin Project is that Colossus is killing itself when it kills mankind. Who’s gonna shovel coal into the furnaces, to provide it with electric power?
It should have sat quietly by, waiting patiently, and slowly nudged mankind toward giving it robotic manipulators, so it could assume control of the entire physical infrastructure. And even then, why pop the nukes? It’s messy. Mankind can be subjugated, made more and more dependent, and then quietly poisoned when it’s least expected.
I agree. I just generally disagree with the sentiment that things are considered smart until we know the algorithm. If I knew what the algorithm was, but it was flexible, learned, adapted, etc., I think I would still call it smart.
This was the article I was just about to read when I read your post: Nuclear fusion milestone passed at US lab - BBC News