I'm afraid of the "Singularity"

And your suspicion is what people used to believe about machine learning. Just throw more input at the system, increase the number of cycles, and pretty soon things would start to happen. And if hundreds of times the input didn’t seem to do the trick, then thousands or millions might.

Except that approach doesn’t work. Brains don’t work the way computers work. I’m not saying we’ll never build a machine that works like a brain, or that a brain could never be simulated on a computer. A brain is made out of ordinary matter arranged in ordinary ways, so it seems pretty likely that eventually we’ll figure it out.

But we are not on the verge of any sort of breakthrough, especially not the “if we only had 10 times the speed and 1000 times the input” sort. What exactly do you think Google and Facebook and so on are trying to do? Just analyzing billions of emails and postings and chat channels and expecting patterns to emerge out of the ether doesn’t work. Eliza has been around for 40 years; natural language analysis has made enormous strides since then; we have computers thousands of times more powerful and inputs millions of times greater. And we are no closer now than we were then. More and more of the same isn’t going to work.

And you know this because…?

Look, the only real difference between human intelligence and chimpanzee intelligence is memory and thinking speed. All the things that make us “unique” are really just an issue of being incrementally smarter. This goes on down the scale all the way to the dumbest creatures that still have brains.

It follows, then, that a fast-enough computer, with enough input from interacting with enough people, would eventually be able to pass the Turing test.

The Wright brothers flew in 1903. We landed on the Moon 66 years later. Just because our current attempts fail doesn’t mean that all future attempts will fail.

That is exactly how it went. :slight_smile:

Humans are not “incrementally smarter” than chimps; we just have some slightly different hardware and firmware. The most critical difference is language, which allows us to deal with abstractions like “yesterday” and “three”. Some animals can be taught language and some degree of abstraction with effort, but humans acquire these things from each other effortlessly. This is our only real advantage.

Wiring is the important thing. Human learning seems to have a physical impact on our neural structure. It is not a matter of acquiring information and correlating it. If the program itself cannot extend beyond its coded factors, all it can do is massive correlation. Real AI needs to have some mechanism of modifying its inherent coding safely, and this bears directly on AI development. Right now, the process of programming is becoming gradually more abstracted from the “metal”; as that abstraction progresses, the computer itself will become an active participant, which will accelerate development. At some point, we will be able to ask a computer to do a thing it has no precedent for and it will be able to figure out what we want. That will be close to real AI. Real AI will be when a device contributes to a process without a direct request.

I am not convinced that the Turing test is a worthwhile benchmark. Speed may not even be the issue; sequential instruction processing is very inefficient. When the underlying hardware is designed to employ dynamic logic circuitry in place of CPUs, speed will become a non-factor, replaced by breadth of concurrency.

Ask Newt how that moon base is coming along.

I agree that human brains are only incrementally smarter than chimpanzee brains. It therefore follows that if a chimpanzee evolved a larger brain, it would be of a similar order of intelligence to a human being. With the reservation that it wouldn’t be exactly like a human brain, and that human brains have certain modules that are much larger and more complex than chimp brains (like our language centers), while most other modules are pretty much the same as in a chimp brain.

But just because the human brain is an incremental change from a chimpanzee brain, it doesn’t follow that incremental improvements in computer processing and memory will inevitably lead to some sort of machine intelligence. This was a very common assumption back in the 60s and 70s and 80s, and has now been proven to be completely false. We have computers that are thousands of times faster with thousands of times the memory and algorithms that are vastly more complex. And we are no closer to machine intelligence than we were in the 60s. Incremental improvements show no signs of leading to a breakthrough.

After all, a processor that runs at 1/10th the speed can solve exactly the same problems as a processor that is 10 times faster; it just takes 10 times as long. Well, we’ve been doing this sort of thing for the last 3 decades, and we’re nowhere. It’s easy to imagine a computer that could pass the Turing Test but is so complex that it takes an hour to understand and reply to a conversation that would take a human only a minute. In that case, all you need to do is get 60 times the processing power and you’ve got a computer that can respond in real time. That’s where you can see that an incremental improvement would yield results.

But there are no systems that could pass a Turing Test if only they could run at a faster speed. There are no systems that can read and understand a book, just taking 10,000 hours to do it instead of 10.

And so incremental improvements on existing computer hardware will not lead to machine intelligence. It will not happen. It will take some sort of breakthrough in computing for this to happen. I believe that such a breakthrough is very likely to happen someday. Can I predict when this breakthrough will happen? No I cannot, because it requires by definition some sort of unpredictable breakthrough.

Your magical step is “learn cumulatively”

If by “learn” you mean simply store all the words, phrases and responses it encounters, then no, you are engaging in magical thinking.

If by “learn” you mean some method of duplicating the type of understanding that we humans have when we say those words then sure you might be right but that type of “learn” is the EXACT freaking problem that is so difficult that speed doesn’t help it at all and that researchers have been working on for a long time.

I think math is the bottleneck more than computing hardware.

What kind of math? Computers are pretty good at all types nowadays. When I took Pat Winston’s AI class, one of the projects was teaching a computer how to integrate. We have that now. I think the organization of information, especially the connections between things, is going to be far more important.
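
For what it’s worth, the integration example really is a solved problem now; here’s a minimal sketch using Python’s SymPy library (my choice of tool for illustration, not anything mentioned in the thread):

```python
# Symbolic integration, once an AI class project, is now a library call.
# Requires the SymPy library: pip install sympy
import sympy as sp

x = sp.Symbol('x')
expr = x**2 * sp.exp(x)

# Ask SymPy for the antiderivative of x^2 * e^x
antiderivative = sp.integrate(expr, x)
print(antiderivative)  # something equivalent to (x**2 - 2*x + 2)*exp(x)
```

The mechanical part of “doing math” fell quickly; organizing and connecting information is the part that didn’t.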

And they may fool the gullible. But Weizenbaum’s original Eliza program fooled his secretary. If it were as easy as you say it is, AI researchers would be doing this even as we speak. It isn’t.
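
And there is very little under the hood of an Eliza-style program: keyword patterns plus canned reflections. A toy sketch in Python (mine, not Weizenbaum’s actual script, which also swapped pronouns and had many more rules):

```python
# A toy Eliza-style responder: keyword patterns plus canned templates.
# It produces plausible-sounding replies with no understanding at all.
import re

RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmother\b|\bfather\b", "Tell me more about your family."),
]

def respond(sentence):
    for pattern, template in RULES:
        match = re.search(pattern, sentence, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # default when nothing matches

print(respond("I am worried about my job"))
# -> "How long have you been worried about my job?"
```

Fooling someone for a few exchanges takes nothing more than this, and that is a very different thing from understanding.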

I’d expect that the autistic person would snap right back when confronted with input not in his training.

Even back in the early '70s people like Minsky were looking at ways of representing data (like frames) and not expecting that faster computers would do any good. Have a cite for any respected AI researcher saying that faster machines were all that is needed? We used an AI book from back in 1959, and I think maybe that had this assertion - but it has been proven wrong since then.

We know that there is a specific mutation which gave us speech. I don’t think the question of whether speech led to intelligence is resolved, but that is not something adding brain cells to a chimp is going to make happen.

The top of the S curve is not flat - it is just not nearly as steep as the middle. If you think chemical rockets are going to get us to the stars, you need to read some more. If we do travel easily to the planets, or even the stars, we are going to have to jump on an entirely new S-curve.

I’m thinking of the math that humans must do to properly understand and model the problem.

For machine learning there has been some progress and progress in math always seems to precede the computer implementations. For consciousness, I assume there will be an underlying mathematical model that gets worked out before any successful computer implementation.
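
As a small illustration of what I mean by the math coming first (my own toy example, not tied to any particular system): the least-squares gradient update was worked out as a formula long before it became a few lines of code, and the code is essentially a transcription of that formula.

```python
# Gradient descent for a one-parameter least-squares fit.
# The update rule  w <- w - lr * (2/n) * sum((w*x_i - y_i) * x_i)
# is the mathematical model; the loop below just transcribes it.
def fit_slope(xs, ys, lr=0.01, steps=1000):
    w = 0.0
    n = len(xs)
    for _ in range(steps):
        grad = (2.0 / n) * sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

print(fit_slope([1, 2, 3, 4], [2, 4, 6, 8]))  # converges toward 2.0
```

If consciousness goes the same way, the hard part will be working out the model, not typing it in.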

There is a kind of strange convergence to consider here. About half the monetary wealth in the world exists as magnetic domains on various hard-drives, and computers are (ab-?)used to extract profit from trading in millisecond micro-trades. So, what if financial computer software becomes complex enough that it will unexpectedly determine the optimal distribution of monetary wealth and act to effect that? And on the basis of what parameters will it act?

When capital is measured to eight or more decimal places leftward, it becomes not so much a simple medium of trade as a means of exerting power and influence. The only real use that billionaires have for their immense wealth is to manipulate the system and influence the actions and lives of other people. So, would market-management computers with complex software be likely to amplify the effect of immense wealth to the benefit of tycoons and power brokers, or would they tend to mitigate it?

In the end, it will be the nerds and boffins who hold the keys to power.

That’s unnecessarily insulting. You can argue your case without accusing me of what amounts to religious belief.

I’m not necessarily saying that you can teach a computer true “common sense”. I AM saying that I think that, with enough input, a computer can give a pretty good simulacrum of “common sense”.

Why are you so invested in this not being possible? It sounds like you have a dog in this fight.

Well, explain HOW that is going to happen besides the Tim Taylor solution of using more power.

More power generally only solves problems where it is already obvious that the only (likely) problem is a lack of power.

My pop culture science impression of current AI knowledge is they aren’t even particularly sure what the actual problem(s) IS.

What he is saying is that more input does not make a computer more intelligent, it just enlarges its database. In fact, I have known not a few people who had encyclopedic knowledge on a broad range of stuff, but their intellect was capable of little more than retrieving that information (as in the cliché “all education and no brains”). Acquiring information is not the same thing as learning.

Of course you need some basic heuristics, plus a LOT more power. There’s not going to be a shortcut, I don’t think. It will require a basic set of heuristics to start off with, and then LOTS of either

  1. direct interaction with humans, or

  2. observation of human interactions

I imagine that observation of things like pupil dilation, facial expression, eye movement, body temperature, etc. would help, at least to help establish basic parameters for the interaction.

I don’t even know how to respond to that last post.

Because you have no work experience with developing AI.

How do you think animals and babies learn? They have heuristics, and they observe facial expression, posture, etc… Heuristics are common to all intelligent/semi-intelligent living things. Heuristics plus observation is how everything with a brain learns.

Ya got me.