One should also be very clear on just what, exactly, our goals are. If you had asked a science fiction writer from the 1950s what an AI should be like, they might have told you they wanted something you could carry on a conversation with like a human. But that’s no good: we already have as many of those as we could ever desire. If you pressed the author, they could probably come up with a list of things a hypothetical AI could do that a human can’t, which would make it practical. But if you just look at that practical list, it’s pretty close to something like Siri, even though you can’t really carry on a conversation with Siri. We’ve gotten what we wanted; we just didn’t get what we thought we wanted.
I don’t. I also don’t think a manned landing on Mars is possible in 1 year even if the available budget were $1 quintillion. Money helps, but it’s a classic fallacy to believe the speed at which a task can be completed scales easily with available resources, especially money.
That’s a hardware solution to a hardware problem that may not exist. The issue may be software. The latest graphics chips are vastly superior in every way to the general CPUs of 30 years ago. But all that hardware can’t be controlled by software from 30 years ago. You get a degradation, not an improvement, in function by trying to run a modern GPU with DOS 3.3.
Let’s say you personally suddenly had a processing upgrade that allowed you to think twice as fast. You still have a problem. Your sensory inputs aren’t any faster. Your motor functions aren’t any better. Everything seems slow. Your body doesn’t move the way you expect. Your brain will have a heavy task simply learning to control your body again, and all that relearning overhead may swamp any extra ‘processor cycles’ you’ve gained.
But I don’t say the task is impossible, just that the inevitability of either thing - failure or success - is not proven.
As it happens, real neurons don’t have a single processing rate. Different neurons may process faster or slower depending on conditions. Even if you could define an aggregate “processing speed”, increasing it does not guarantee a human-type brain will operate any more efficiently or any faster overall.
As noted, our real brains are tied intricately to our sensory inputs and motor functions. An artificial brain doesn’t necessarily have those things, but that raises the question: how does a human-type brain, one designed with other human hardware in mind, function in the absence of typical human sensory input?
Again, none of this says the task is impossible (which you are still implying is the claim), but nobody has shown it is inevitable yet, either. Technically speaking, nobody has shown it to be possible yet, either. A simple reproduction of a normal human brain is possible - we can have children - but there’s been no absolute proof you can make something fundamentally the same but faster.
Fundamentally different but faster?
Sure, go nuts. Me, I’m in the camp that thinks if we do come up with general AI, we’re not going to make it significantly faster or smarter if it’s anything like human intelligence. We’ll need something fundamentally different. From studying our own manufactured humans, i.e. children, we know native intelligence is rarely fully expressed. It depends on the experiences and development of each brain, which includes physical factors like exercise, nutrition, nurture, etc. A human-type general AI is most likely going to be the same. Hence my belief that any general AI that’s fundamentally better than humans is going to have to be built on a different underlying platform.
I’m not sure about that. Do we really have something today that could successfully pass the Turing Test, even just using typed text via terminals as Turing envisioned? We certainly have impressive AI capabilities in that sphere – I thought the IBM Jeopardy-playing machine was damned impressive – but they all have that one telltale fingerprint of artificial rather than human intelligence: they work only in their own specialized domains. And even as limited as it was, the Jeopardy machine was pushing the state of the art in both hardware and software, and needed a roomful of servers to run on.
As per my earlier comment, I’m basically an optimist about the future of AI, and I have no doubt that AI machines will eventually exceed human intelligence, and will probably do it in ways that we would find very surprising. But I also think we’re still quite a few decades away from that capability. I realize you’re not claiming we have that capability now, I’m just making a parenthetical comment.
No. Do we want it?
Only in the sense that it’s a benchmark for the state of AI. If we can build a system that passes the Turing Test, we can say that we’ve achieved at least one aspect of apparent human-level intelligence.
If your goal is to create a machine which has intelligence, insight, and creative problem-solving ability but you DON’T want it to ever get bored or careless or upset or fatigued, then you’re asking for a contradiction. The ability to come up with new ideas REQUIRES the ability to make mistakes. You might as well try to invent a bicycle which has no wheels. If it doesn’t have wheels, then it’s not a bicycle.
What we do have, with things like Siri and Google Maps, is complex algorithms which take huge amounts of data and act on them in very predictable ways, with no boredom, no fatigue, no carelessness. But that’s not true intelligence, because the algorithms completely lack the ability to approach new situations and improvise solutions.
If we ever do create true AI, it will be moody just like human beings are. The AI will say “I know you wanted me to grow up and become a computer programmer… but I’d rather major in Art History. But first, I want to take a semester off to backpack around Europe.” Then we’ll look at each other and say “Why did we spend 40 billion dollars for this?”.
A trillion is enough to try several times to land on Mars. I did make it 10 years to give enough time to design and manufacture the rockets and wait on the orbital mechanics.
You’re one of “those” fellas. Sorry, but there’s no point in arguing with you because you believe, falsely, that anything you can’t see right now doesn’t exist.
I mean, inevitable means “within the next 10,000+ years”, you know. I’m not trying to take Kurzweil’s untenable position. I’m saying we’ll get hard AI that is functional as a human in every possible definition of the word, inevitably within 10k years, barring:
- Physical laws prevent it (this can only mean the brain uses magic)
- Humanity descends into a dark age and becomes extinct
Sure, but none of those things are really “artificial intelligence”, and I suspect that the explosive growth in computing power since 1975 is more responsible for being able to do those tasks in a reasonable amount of time, rather than any amazing change in AI technology.
No, you’re conflating unrelated human biological attributes with the property of intelligence. Machines can and do exhibit intelligence without having those attributes. And they have been making mistakes and learning from them since the earliest days of AI. Your argument is much like saying that until we build flying machines that flap their wings and have an urge to eat worms and fly south in the winter, we will never have successful airplanes.
While that may or may not be true for Siri or Google Maps (I have no idea) the general statement about “algorithms acting on data in predictable ways” is absolutely false. In a sense we all rely on “algorithms” in the most general interpretation of the word, but AI systems rely on a lot more than algorithms in the technical sense of well-defined formal procedures – for example, the use of ad hoc heuristics where appropriate, or self-modifying learning systems. Any system with a sufficiently high level of complexity transcends the simplistic characterization of mechanistic predictability, whether machine or human. Marvin Minsky once observed that just because we understand how AI systems work we tend to underrate behaviors that would otherwise be clearly regarded as intelligent; “when you explain, you explain away,” he said.
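To make the point concrete, here is a toy sketch of my own (not anything Siri, Google Maps, or any production system actually uses): a one-neuron perceptron learner. The names and numbers are invented for illustration. The point is that its behavior is determined by the examples it has seen rather than by a rule the programmer wrote down in advance, so the same code ends up doing different things depending on its experience.

```python
# Toy sketch (mine, not any production system): a one-neuron perceptron
# learner. Its behavior comes from the examples it has seen, not from a
# fixed rule written down in advance.
def train_perceptron(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {-1, +1}."""
    n = len(examples[0][0])
    weights, bias = [0.0] * n, 0.0
    for _ in range(epochs):
        for features, label in examples:
            activation = sum(w * x for w, x in zip(weights, features)) + bias
            prediction = 1 if activation >= 0 else -1
            if prediction != label:              # a mistake...
                for i in range(n):               # ...which modifies the system itself
                    weights[i] += lr * label * features[i]
                bias += lr * label
    return weights, bias

# The same code learns different behaviors from different experience:
and_data = [([0, 0], -1), ([0, 1], -1), ([1, 0], -1), ([1, 1], 1)]
or_data  = [([0, 0], -1), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
print(train_perceptron(and_data))
print(train_perceptron(or_data))
```

A trivial example, obviously, but even here the interesting behavior lives in the learned weights, not in the lines of code you can read off the page.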
Again, attributing false significance to irrelevant human biological traits that have nothing to do with the actual isolated property of intelligence.
Well, the leading professors at the MIT AI Lab called them AI, so I think there is an argument that they are what passed for AI at the time. However, they’re not what I’d call true AI, so I don’t disagree with you. I also think advances in heuristics had as much to do with the successes as advances in hardware.
I hadn’t heard that one. I love it!

You’re one of “those” fellas. Sorry, but there’s no point in arguing with you because you believe, falsely, that anything you can’t see right now doesn’t exist.
Um…sure. I’ll admit that.
While not by trade, I’m a mathematician by education. I’ll gladly admit to being picky about drawing definite conclusions in the absence of definitive evidence.
I still see only the possibility. If it makes you happy to see that as inevitable, I’m not stopping you. Just that it’s not a safe bet to make projections based on it, just as it’s not safe to make projections based on any revolutionary technology popping up.

Well, the leading professors at the MIT AI Lab called them AI, so I think there is an argument that they are what passed for AI at the time. However, they’re not what I’d call true AI, so I don’t disagree with you. I also think advances in heuristics had as much to do with the successes as advances in hardware.
I don’t know… how is finding directions between two points much different than just either brute-forcing Dijkstra’s algorithm (for a small enough graph) or having some way to short-circuit it to make the graph smaller?

Do you think a manned landing on Mars is possible within 10 years from right now if the available budget were 1 trillion dollars? Also, just for the sake of making it easy, you can lose up to 10 crews before you have to give up.
This is analogous to how, if you build a top-tier model of the brain and it does result in seizures, you have the option of adjusting some coefficients and restarting.
If you’ve ever studied, or suffered from, project management, you’ll know that throwing resources at a problem does not always make it get solved faster. 9 women cannot have a baby in 1 month. Nor 18.
I have no idea if throwing a trillion dollars at a Mars project could get it done in 10 years. I rather doubt it, since you wouldn’t be able to hire enough staff for a couple of years. And then we have Brooks’ Law as well.
OTOH, landing a person on Mars is certainly possible.
Dealing with brain simulations can run into some very interesting ethical questions. If you manage to boot up a personality, with seizures, is it ethical to wipe it out by adjusting parameters?

The idea that all we need to do to build an AI is slice up someone’s brain and simulate it neuron by neuron is pretty naive. I mean, yeah, we can slice up someone’s brain. We could map all the neurons. But how does that help us emulate the brain? I mean, the neurons don’t just sit there. They’re all in motion.
If we get the behavior of each neuron even a little bit wrong, we don’t get a simulated human brain, we get a really detailed static map of a human brain.
Also, note the postulated pathway. We build this gigantic network of processors to simulate the brain, and it can simulate the brain thousands of times faster than the actual human brain. Well, why can’t we simulate the brain at 1/1000th the speed, with 1/1000th of the processing power?
Of course if we can simulate a brain even at 1/1000th of the speed of a real human brain, then we just need to speed it up 1000 times to match human speed, and by some further factor to get superhuman speeds. But we can’t even do the slow version yet.
This is not an engineering problem, where all we need to do is smoosh together a giant pile of money and computers and we’ll get results. It’s a question of not understanding what the problems even are.
I assure you that the first simulations of a brain (and we won’t start with a human brain) will run glacially slowly. Sure the neurons don’t just sit there, but neither do the gates making up a microprocessor, and we simulate that all the time. Of course even throwing a thousand processors at the job still means it takes a long time to simulate even a second of a run. But we do it.
We are also inevitably going to get things wrong, of course.
The real problem is being able to map a brain reasonably non-invasively, and to figure out at what threshold each neuron fires. That is probably holding us back more than the hardware.
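To illustrate what I mean by simulating things that “don’t just sit there”, and why that firing threshold matters, here is a toy leaky integrate-and-fire neuron stepped through time. This is my own illustrative sketch, not how any real whole-brain simulation project does it; the parameter names and values are made up. The threshold is an explicit per-neuron number, exactly the sort of thing that is hard to recover from a non-invasive map.

```python
# Toy sketch: a leaky integrate-and-fire neuron stepped through time.
# The threshold is an explicit parameter -- the kind of per-neuron value
# that is hard to recover from a non-invasive brain map.
def simulate_neuron(input_current, threshold=1.0, leak=0.1, dt=0.001, steps=2000):
    """Return the time steps at which the model neuron fires."""
    v = 0.0                                    # membrane potential (arbitrary units)
    spikes = []
    for t in range(steps):
        v += dt * (input_current - leak * v)   # integrate input, leak charge
        if v >= threshold:                     # crossing the threshold...
            spikes.append(t)
            v = 0.0                            # ...fires and resets
    return spikes

print(len(simulate_neuron(input_current=2.0)))   # stronger input -> more spikes
print(len(simulate_neuron(input_current=0.5)))
```

Get the threshold or the leak wrong for enough neurons and the simulated dynamics drift away from the real thing, which is the whole point about mapping being the bottleneck.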

No, you’re conflating unrelated human biological attributes with the property of intelligence.
No, I’m saying that creativity and making mistakes are the same thing. If you are incapable of doing one you are incapable of doing the other. It has nothing to do with biology.
When you look at a cloud and you think Hmm that resembles a dog, you are making a mistake. Your recognition circuits are malfunctioning. It’s not a dog. But the ability to make this mistake is precisely what allows an intelligence to form analogies. Show me a creature who never makes a recognition error and I’ll show you a creature who can’t form analogies, and therefore isn’t intelligent.
I’ve heard belief in the singularity derided as religion for the IQ-140 crowd. And in a lot of ways it is. It has the hallmarks of an end-of-the-world scenario: a possible dark period followed by utopia under the guidance of a pro-social, all-knowing, all-powerful entity.
Kurzweil predicted that the 2010-2019 period would see massive advances in biotechnology that would be available to consumers. We are halfway through the decade and I’m not seeing it at all. There is a world of difference between ‘a scientist in lab X just accomplished Y’ and ‘Y is safe, affordable and available to the masses on the marketplace’. Aside from stem cells, I don’t know if a lot is being done this decade that is truly groundbreaking in biology. And stem cells still need another decade or two to become mainstream medicines on a large scale from what I am seeing.
A complaint about people like Kurzweil is that they time the singularity (and radical life extension) to happen right before they would die of old age. I have no idea when machines smart enough to make themselves more intelligent will arrive, but I’m assuming somewhere in the 2020-2100 window; sometime this century seems like a good bet.
Even so, is it possible that we will have superintelligent AI and it won’t change life ‘that’ much? Or would it totally change the fabric of life, because technological advances would occur so rapidly after that point that we can’t really use the past as a yardstick?

No, I’m saying that creativity and making mistakes are the same thing. If you are incapable of doing one you are incapable of doing the other. It has nothing to do with biology.
When you look at a cloud and you think Hmm that resembles a dog, you are making a mistake. Your recognition circuits are malfunctioning. It’s not a dog. But the ability to make this mistake is precisely what allows an intelligence to form analogies. Show me a creature who never makes a recognition error and I’ll show you a creature who can’t form analogies, and therefore isn’t intelligent.
You’ve just completely changed your argument. You said before that no system can be said to be intelligent unless it exhibits attributes like (your words) “bored or careless or upset or fatigued … or moody”. Those are biological human traits. They have nothing to do with a measure of intelligence.
And I’ve already pointed out that AI systems can and do “make mistakes” – your other criterion – and learn from them.
So now you claim that the ability to form analogies is a measure of intelligence. Actually, mistaking a cloud for something else isn’t an analogy, but you could trivially create a pattern-recognition program that made similar inferences about cloud appearance. And IIRC, AI systems have successfully performed intelligence-testing tasks requiring them to recognize commonalities between otherwise different shapes, and then identify another shape from a group that exhibits the same commonality, a common part of an IQ assessment.
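For what it’s worth, the “trivially” is literal. Here is a toy nearest-template matcher with made-up feature vectors (my own sketch, nothing like a real vision system) that will cheerfully report that a fuzzy blob “resembles a dog” whenever its features land closest to the stored dog template:

```python
# Toy sketch with made-up feature vectors: a nearest-template matcher will
# happily report that a blobby cloud "resembles a dog" -- a loose match,
# not a malfunction, and nothing mysterious.
import math

templates = {
    "dog":   [0.9, 0.8, 0.2],   # pretend features: four-legged-ness, snout-ness, ...
    "tree":  [0.1, 0.2, 0.9],
    "house": [0.3, 0.1, 0.4],
}

def resembles(features):
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda name: dist(templates[name], features))

cloud = [0.7, 0.6, 0.3]          # a blobby cloud that loosely matches "dog"
print(resembles(cloud))          # -> "dog"
```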
You seem to have a very limited, human-oriented view of what intelligence is. Intelligence can be pragmatically defined as the ability to perform a certain task, where it’s agreed beforehand that the ability to perform such a task is indeed a successful test of intelligence. AI systems have been challenged with such tasks again and again, and successfully passed them.
Your arguments remind me of the challenge that the philosopher Hubert Dreyfus made to the MIT AI Lab back in the 60s (at that time under the auspices of MIT Project MAC). Like you, he simply didn’t believe that computers could have “the ability to approach new situations and improvise solutions”, and his specific challenge was that there was no way that their new chess program could possibly beat him. This may sound ridiculous to us today, just because of how far we’ve advanced, but at that time chess-playing programs were extremely weak, and many thought – for the same reasons you elucidate – that they would remain so forever. But even then – the year was 1967 – a Project MAC system programmer named Richard Greenblatt had written an updated program called MacHack. They accepted the challenge and, long story short, MacHack beat Dreyfus decisively – much as it has beaten me many many times – and sent him back to rethink his hypotheses about how “stupid” computers were.
Today we’re simply continuing on that same arc of advancement in AI.

I don’t know… how is finding directions between two points much different than just either brute-forcing Dijkstra’s algorithm (for a small enough graph) or having some way to short-circuit it to make the graph smaller?
The problem is to do what getting directions on Google Maps does today, taking into account modes of transportation, traffic, road size, etc. The actual problem was something like “Give me directions from my house to a hotel in London.” I suspect this could be done now if they tied in with travel sites, but most people would rather partition the problem themselves. (And I took the class before airline deregulation, when getting airline tickets was a lot more sane than today.)
I suspect the graph traversal part was the easiest part.
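To be clear about the easy part: textbook Dijkstra over a toy road graph really is only a few lines. This is a generic sketch with an invented graph, not what Google Maps actually runs; the hard problems are building and weighting the real-world graph, traffic, transit modes, and so on.

```python
# Textbook Dijkstra over a toy road graph (edge weights = minutes).
# The traversal is the easy part; building and weighting the graph is not.
import heapq

def shortest_path(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]}. Returns (cost, path) or None."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, edge_cost in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + edge_cost, neighbor, path + [neighbor]))
    return None

roads = {
    "home":     [("highway", 5), ("main_st", 2)],
    "main_st":  [("downtown", 4)],
    "highway":  [("downtown", 3)],
    "downtown": [("hotel", 1)],
}
print(shortest_path(roads, "home", "hotel"))
# -> (7, ['home', 'main_st', 'downtown', 'hotel'])
```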

Your arguments remind me of the challenge that the philosopher Hubert Dreyfus made to the MIT AI Lab back in the 60s (at that time under the auspices of MIT Project MAC). Like you, he simply didn’t believe that computers could have “the ability to approach new situations and improvise solutions”, and his specific challenge was that there was no way that their new chess program could possibly beat him. This may sound ridiculous to us today, just because of how far we’ve advanced, but at that time chess-playing programs were extremely weak, and many thought – for the same reasons you elucidate – that they would remain so forever. But even then – the year was 1967 – a Project MAC system programmer named Richard Greenblatt had written an updated program called MacHack. They accepted the challenge and, long story short, MacHack beat Dreyfus decisively – much as it has beaten me many many times – and sent him back to rethink his hypotheses about how “stupid” computers were.
Today we’re simply continuing on that same arc of advancement in AI.
Damn - I meant Dreyfus, not Penrose, when I talked about Pat Winston chortling.
Dreyfus’ big mistake was in seeing chess as a creative activity, as opposed to a search strategy using some simple rules, rules which he probably didn’t apply very well himself. It’s kind of like John Henry and the hammer, with less sweat.
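That “search strategy using some simple rules” is essentially minimax: enumerate the legal moves, score the resulting positions with a crude evaluation function, and pick the move that is best assuming the opponent does likewise. Here is the bare skeleton for an abstract two-player game; it is my sketch, not Greenblatt’s code, and a real chess program like MacHack adds move generation, a material-based evaluation, and pruning on top of it.

```python
# Skeleton of the "search plus simple rules" idea: plain minimax over an
# abstract game tree. The "simple rules" live entirely in evaluate();
# the rest is brute search.
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """moves(state) -> legal moves; apply_move(state, m) -> new state;
    evaluate(state) -> heuristic score from the maximizer's point of view."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state), None
    best_move = None
    if maximizing:
        best_score = float("-inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, False,
                               moves, apply_move, evaluate)
            if score > best_score:
                best_score, best_move = score, m
    else:
        best_score = float("inf")
        for m in legal:
            score, _ = minimax(apply_move(state, m), depth - 1, True,
                               moves, apply_move, evaluate)
            if score < best_score:
                best_score, best_move = score, m
    return best_score, best_move

# Tiny demo on a hand-made game tree; leaf scores are just a lookup table.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
scores = {"a1": 3, "a2": -2, "b1": 1, "b2": 4}
print(minimax("root", 2, True,
              moves=lambda s: tree.get(s, []),
              apply_move=lambda s, m: m,
              evaluate=lambda s: scores.get(s, 0)))   # -> (1, 'b')
```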
A true AI will be able to do something for which there are no algorithms or heuristics yet, or find new ones.
General theorem provers were hot for a while. Has there been real progress? Certain types of verification with certain types of equivalence checkers get used all the time, but that’s not the general case.