Just using symmetry to prove the equality of the base angles of an isosceles triangle is at least as old as the third century, and I’d actually argue that Euclid’s proof is a lot more creative, albeit also a lot less elegant. Indeed, I’d be kinda blown away if a computer tasked with proving the equality of the two angles came up with the fairly convoluted “Bridge of Fools” (which I learned as the “Asses Bridge”; your geometry teacher was apparently a lot more proper than mine), or if it missed the far more obvious and simpler proof.
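For anyone who doesn’t remember it, the simpler proof I mean is the symmetry argument usually attributed to Pappus; a rough sketch in my own paraphrase (not a quote from any edition of the Elements):

    % Pappus-style symmetry proof that the base angles are equal.
    % Given: \triangle ABC with AB = AC.
    % Compare the triangle with its own mirror image, matching
    % A \to A, B \to C, C \to B:
    %   AB = AC, \quad AC = AB, \quad \angle BAC = \angle CAB,
    % so by side-angle-side,
    \triangle ABC \cong \triangle ACB
    \quad\Rightarrow\quad \angle ABC = \angle ACB .

Euclid, by contrast, extends the two equal sides, drops in two auxiliary points, and grinds through two pairs of congruent triangles, which is the part that earned the theorem its nickname.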
The Elements are actually a pretty good example of something that a human would come up with that a machine intelligence wouldn’t. Despite its reputation, it’s not particularly rigorous, and a modern reader can spot half a dozen unstated assumptions in the first theorem alone. A computer would probably come up with something a lot closer to Hilbert’s modern axiomatization of geometry, and wouldn’t come close to Euclid’s combination of rigorous logic, reliance on spatial intuition, and proofs chosen for aesthetic concerns or a sense of plausibility rather than simply elegance or simplicity.
Maybe, but I’m not sure it matters. Creating an AI that could be creative would be neat, but it’s not really obvious it would spur ever greater innovation. After all, the planet is currently host to some 6 billion creative intelligences.
My point was that the problem with AI has nothing at all to do with the availability of powerful enough hardware. There are some things, like simulations, which scale nicely, but AI will take some sort of conceptual breakthrough.
No one working in the field uses nanotechnology to describe this, but be my guest.
The technological singularity doesn’t specifically refer to nanotech or AI or any specific form of technology IIRC. It refers to a point in the future after which it is impossible to extrapolate from our existing knowledge.
Take, for example, the perspective of futurists from a little over 300 years ago, before the practical widespread use of the steam engine. They might be able to extrapolate the future of sailing based on gradually increasing sail sizes or modifying hull shapes. But at some point, their imaginations would have reached the point beyond which further advancement in naval technology would be impossible without the steam engine and more advanced materials. Their singularity would be some sort of theoretical machine that could operate without wind and with much greater power.
My opinion is that the first AI per se will come from a simulation of a brain, which should be possible with just a bit more development of networked multi-processors and understanding of brain structure. But we can do that without understanding or actually creating intelligence. Nothing in these links, at least the parts I glanced through, looked like something that will provide AI.
Perhaps you don’t understand how 20 nanometer process technology works. I’m not disputing the existence of nanotech, or that it will be very useful - just that what we are doing now isn’t it. I assure you that the 28 nm node involves neither nanotubes nor placing stuff - it is not that different from bigger nodes.
Look up and see how far biotechnology has advanced since the 1900s… I mean, we have rat brain cells in a petri dish learning to fly a flight simulator. Most of our fruits are genetically modified. We are only at the early stages of biotech.
The first chapter of On Intelligence by Jeff Hawkins refers to AI and the reasons why we failed to make progress in the past. The reason I take the new theories seriously is that on both the simulation front and the natural front his theories are showing results; programs based on the theories are giving researchers uncanny results.
Uncanny results? We’ve had uncanny results for 50 years now, but are no closer to a theory. Now, the article on the brain is interesting, and, being a pure physicalist, I have no doubt we will understand the brain well enough to simulate it - and thus create an intelligence resident in it - within some number of years. I don’t think strong AI is impossible - just a hell of a lot harder than people think it is. I think we are at the stage the alchemists were at when they were doing their transmutation experiments. Their goal was achievable, but they had no clue how to achieve it. The first paper mentions neural nets at the end - that was yet another way of doing AI stuff, which is somewhat useful today (in data mining, for instance) but not the salvation of the field.
No one in the field calls 28 nm fabrication and design processes nanotechnology. If you want to say they are all wrong, be my guest. The research you linked to is nanotechnology, but has nothing to do with silicon processes today. And I mentioned MEMS, which, if not strictly nanotech, are a step towards it. They are certainly being sold today, in your airbag and in your iPhone.
Uh, Jeff Hawkins’ main point is his theory of the brain.
IIRC Hawkins also makes the point that data mining was indeed a “dead end” in the field.
OK, I have to say that I was confused; indeed, the process you are mentioning is not usually referred to as nanotech, but the definition tells me it is not off base to call it that. As for the last quote, you were saying at first that what I linked to was not nanotech. No biggie…
It would be a good point, if we lived in the Asimov universe. Unfortunately (or fortunately, YMMV), the laws are mostly just talked about; it seems that most of the current efforts are still concentrated on creating AIs that are specific to a situation, so the three laws are not applicable… yet.
On the consumer front I see no choice but to find a way to encode the laws into future robots (in this litigious society I foresee that the nanosecond a robot is programmed to commit a crime, new federal rules will be made to make sure that the laws of robotics are enforced), but the military will not have that inconvenience.
I can well believe it. The data mining book I use has a chapter on it, and an analysis tool has a neural net option, but I’ve never made use of it. And so it goes with AI methods - for the most part, non-AI ways of doing things work better. My old boss was enamored of expert systems, until he tried to write one for mushrooms (he was an amateur but published mycologist). He quickly became disenchanted.
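If you’ve never seen one up close, an expert system is basically a big pile of hand-written if/then rules plus a small inference loop, which is exactly why writing one for something as messy as mushrooms is so disenchanting. A toy sketch in Python (the rules below are invented purely for illustration, not real mycology):

    # Toy forward-chaining expert system. The rule base is made up for
    # illustration -- do NOT use it to identify real mushrooms.
    RULES = [
        # (facts that must all be present, conclusion to add)
        ({"white_gills", "ring_on_stem", "volva"}, "possible_amanita"),
        ({"possible_amanita"}, "do_not_eat"),
        ({"spore_print_brown", "grows_on_wood"}, "probably_not_amanita"),
    ]

    def infer(facts):
        """Keep applying rules until no new conclusions appear."""
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(infer({"white_gills", "ring_on_stem", "volva"}))
    # -> includes "possible_amanita" and "do_not_eat"

The inference engine is the easy part; the grief is that every ambiguous specimen needs yet another hand-written rule, which is presumably where the disenchantment sets in.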
Given that Asimov wrote two books’ worth of stories and several novels about the nuances of the 3 laws, I suspect that if encoded somehow they would be easy to get around. Plus there would be a major performance hit, as every action a robot took would have to be sent through a 3 laws filter.
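Just to make the “filter” idea concrete, here is a deliberately simplistic Python sketch; everything in it (the Action fields, the checks) is hypothetical, and the real difficulty Asimov mined for stories is hidden inside deciding how those flags get set:

    # Hypothetical per-action "3 laws filter". The hard part -- deciding
    # whether an action actually harms a human, disobeys an order, or
    # endangers the robot -- is waved away into three boolean flags.
    from dataclasses import dataclass

    @dataclass
    class Action:
        description: str
        harms_human: bool = False      # First Law
        disobeys_order: bool = False   # Second Law
        harms_self: bool = False       # Third Law

    def permitted(action: Action) -> bool:
        return not (action.harms_human
                    or action.disobeys_order
                    or action.harms_self)

    def act(action: Action) -> None:
        if not permitted(action):
            raise PermissionError(f"Blocked by the three laws: {action.description}")
        print(f"Doing: {action.description}")

    act(Action("pick up the teacup"))

Even in this cartoon version, every single action pays for the check, and anyone who controls how those flags are computed controls what the filter actually blocks.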
Asimov either said or implied (I would have to check references) that the three laws were somehow intrinsically embedded in the logic structures that made a robot sentient - that the AI simply wouldn’t work in a manner that would contradict them. Of course Asimov handwaved the whole subject of how you program an artificial intelligence to begin with; he simply presumed that some extension of mathematical logic, carried out by machine and scaled up, would somehow yield intelligence. Quite a few pioneers in computer science evidently thought so too, until it became clear that general intelligence (or “common sense”) wasn’t simply a matter of giving a computer an extensive enough list of rules to follow. A few Platonists like Roger Penrose claim that intelligence isn’t even computable in the algorithmic sense of the word.
My understanding is that it starts when we are able to emulate and replicate human cognition (problem solving, g factor, fluid intelligence, pattern recognition, etc.) in a machine and take those talents to the nth level.
As to why that wouldn’t happen, I don’t know. The advances may be slow, but I don’t see why we wouldn’t get there.
Even if the best artificial intelligence can hope for is to be as smart as the smartest humans who ever lived, that is still going to result in endless advances. A world with trillions of AIs, each with the cognitive capabilities of Nobel Prize winners, Fields Medalists, MIT professors, world-renowned inventors, etc., is going to result in massive scientific advances.
But creativity and intelligence likely extend far beyond what humans are innately capable of.
What I take into account is that there are other common-sense ways to apply the laws; as even Asimov noticed, the laws actually apply to most tools humans use.
This is why robots for consumer use will not become taller than the average human, and it is very unlikely that they would be made too strong or capable of using weapons. Thanks to pattern recognition, this is not impossible to do.
And this is why I wonder why so many people who are interested in this subject are not aware of Jeff Hawkins.
If one does not want to watch the video, one can read the transcript by clicking on “Open interactive transcript »”.
In that talk Hawkins does mention how wrong early AI researchers were (including why trying to use data mining and other brute-force tools was a dead end).
What is notable to me is that even before he showed up, I had proposed that AI research is or was affected by the prejudices present in many researchers.
But that is exactly my point. By assuming that AI won’t have creativity, you are making baseless guesses from an emotional position that human sentience is somehow ‘special’. Unless you believe in ‘spirits’ or ‘souls’ or any of that tosh, it’s a position that cannot be justified.
That doesn’t eliminate it as a possibility, though. I like to think that the AI was, if not fucking retarded, at least bloody mental. Of course that’s a dumb energy source. They didn’t do it to harvest humanity; they did it to fuck with people, because they hated people. But they couldn’t admit that to themselves, so they came up with the FR “energy source” idea.
A couple of days ago I heard a story on the radio about how Syria (I think) has blocked the use of BlackBerrys, a fact people discovered when service was interrupted, and that it was done because the encryption of BlackBerrys impaired anti-terrorist surveillance. And I got to thinking how a 19th-century person would be terrified by hearing that news story, how it would sound like some Russian absurdist playwright’s dystopian vision come to life.
I was describing the difficulty of actually implementing the three laws in a real AI. Asimov was writing only about five years after Shannon invented digital logic, and he wasn’t a computer scientist or logician, so he can be excused for getting it wrong. What he got right was looking at the implications of robots. That’s why we read the stories today, and almost no one reads the Adam Link stories. BTW, IIRC Campbell actually wrote down the three laws first, but he said that they were implicit in Asimov’s story.
Still, let’s not confuse technology that works in fiction with technology that works in the real world.