I'm afraid of the "Singularity"

I know.

OK, you start with a newborn baby, give it human interaction, and the newborn baby learns to talk. That is absolutely correct.

Do the same thing with a '57 Chevy, and the '57 Chevy will not learn to talk.

That is because a newborn human baby’s brain already has a speech center that already knows how to learn a language. A baby can learn a language even if the baby is deaf or blind or both.

But we do not have the ability to create the computer equivalent of a human baby’s brain, ready to soak up language and social interaction and input about the physical world. A computer of today is not like a human baby, where all it needs is a lot of input. It is a lot more like the '57 Chevy. A '57 Chevy can do a lot of things that a human baby will never be able to do. The baby will never metabolize octane. It will never scream down the highway at 100 miles an hour. But the '57 Chevy does not have a module installed at the factory that allows it to learn a natural language merely by observing how human beings use natural language. And neither does any computer or computer system or software algorithm in any research lab anywhere on planet Earth.

I do not believe it will be impossible for computers to learn to process natural language. It is obviously possible for machines to do that, because the human brain is an instance of such a machine. But we don’t have the first clue how to build such a system, and all our attempts have merely illustrated our ignorance about the problem. This is not a bad thing: before we can figure out how to proceed, we have to demolish our old, incorrect ideas.

But your ideas are straight out of how people used to think about AI back in the '50s and '60s. Computers then were slow and clunky, but the assumption was that if you could build ones that were fast and efficient and expose them to the panoply of human knowledge, they’d pretty quickly come to understand it. After literally decades of research exactly along the lines you suggest, researchers nowadays realize that this idea is incorrect. Humans don’t learn language through general intelligence. They have specific structures in their brains to do that. A stroke can damage that part of a person's brain, and they can still walk around and do everything else a human being can do, except understand human language, or produce it, or both.

First you have to build a computer with an inbuilt module, with algorithms and processes and architecture that allow it to learn human language, the way a human baby’s brain already can. We don’t know how to do this, and we now realize we don’t even know where to start figuring it out. This is not a problem; this is good. Research that confirms what you already thought to be true is boring and pointless. Research into machine intelligence has revealed that what we used to believe about how human and animal brains work was completely wrong. That’s the good news, because now we can start over.

But it also means that we’re not on any sort of verge of figuring it out. It means it is utterly wrong to believe that all we need is more of the same, but faster and harder. We need a new approach, and a lot more fundamental research into how human and animal brains really work. There is plenty of such research, and the more research we do into brains and computers, the more we realize how unlike animal brains computers are.

I apologize for coming off as insulting with my style, but the points made are valid.

Unless you can describe how volumes of data will be transformed into computational capability, I do think you are leaving out the exact step that everyone is working on.

I think you can teach a computer “common sense”.

You just can’t do it with volume of data alone.

“Common sense” is a computation, and you need to be deliberate in your methods to arrive at that computation. You cannot get there by throwing tons of data at the wall and hoping the proper computation results.

Even if you are evolving solutions through random tweaks to a neural network or even a program, you are still making deliberate steps towards a specific goal with a specific method.
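To make that concrete, here is a toy sketch (entirely my own example, in Python, not anyone's actual research code) of "random tweaks to a neural network" aimed at a goal. Notice how much of it is deliberate: the goal, the fitness measure, the mutation size, and the keep-or-discard rule all come from the programmer, not from the data.

```python
# A minimal toy: "evolve" a 3-unit neural net to approximate sin(x)
# by random weight tweaks, keeping only tweaks that reduce the error.
import math, random

def net(weights, x):
    # one hidden layer of 3 tanh units, then a linear output
    w1, b1, w2, b2 = weights
    hidden = [math.tanh(w1[i] * x + b1[i]) for i in range(3)]
    return sum(w2[i] * hidden[i] for i in range(3)) + b2

def error(weights, xs):
    # the goal we chose: match sin(x) on a handful of sample points
    return sum((net(weights, x) - math.sin(x)) ** 2 for x in xs)

def mutate(weights, step=0.1):
    # the mutation scheme we chose: small Gaussian jiggles to every weight
    jiggle = lambda v: v + random.gauss(0, step)
    w1, b1, w2, b2 = weights
    return ([jiggle(v) for v in w1], [jiggle(v) for v in b1],
            [jiggle(v) for v in w2], jiggle(b2))

xs = [i / 10.0 for i in range(-30, 31)]
best = ([0.0] * 3, [0.0] * 3, [0.0] * 3, 0.0)
for _ in range(20000):
    candidate = mutate(best)
    if error(candidate, xs) < error(best, xs):   # the deliberate selection step
        best = candidate
print("final squared error:", error(best, xs))
```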

Why are you so insistent that throwing together lots of data will result in the same capabilities/functions that our brains are able to perform?

Same boat as you; it’s a very interesting topic.

I’m curious, do you have experience with developing AI?

What AI has done really well is develop heuristics. What it has done badly is turn those heuristics into anything resembling intelligence. The prevailing view when I studied AI was that with enough heuristics you’d get something that thought - exactly what you wrote above. That program has failed miserably.

There seems to have been a lot more interesting work in building world models that let a program “understand” a story, in the sense of giving reasonable answers to questions about the larger context of the story. However, this work seems to have sunk out of sight, so maybe it didn’t scale.

I don’t, thank Og. I failed the Lisp test at the MIT AI Lab and didn’t go down that rathole. I have been very near lots of AI research, though, and every bit of it crashed and burned or turned into an application with no hint of real AI.
Popularizations of AI miss all the stuff that people have been talking about here, since the idea of AI is very exciting to reporters.
Note that no one here has brought up the various (I think spurious) reasons why AI is supposedly impossible. We are talking about practicality and difficulty and how close we are. Which is not very.

Which clearly shows (IMO, at least) that the model for AI development should not resemble the model for natural intelligence development. I hear too much (and a little is too much) about making computers think just like humans: where is the value in that? Computers should think like computers and be able to communicate with humans in a way that humans can deal with, but they should not be carbon (silicon?) copies of us. We stand to learn and gain more from exploring and exploiting the differences.

Right. If we had an algorithm that could predict the behavior of C. elegans, a nematode worm with exactly 959 cells, then we’d be getting somewhere. But we cannot simulate the nervous system of an animal whose nervous system is so simple that if it were any simpler it wouldn’t be fair to call it a nervous system.
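Just to show what the basic computation even looks like, here is a cartoon "nervous system" of five made-up leaky integrate-and-fire neurons. It is a textbook toy with invented parameters, nothing like a real C. elegans model, where the hard part is getting the wiring and parameters of all 302 neurons right, which is exactly what we cannot do yet.

```python
# A toy "nervous system": five leaky integrate-and-fire neurons with
# made-up random connections. Not a model of any real animal.
import random

N = 5                 # number of toy neurons
THRESHOLD = 1.0       # membrane potential at which a neuron spikes
LEAK = 0.9            # fraction of potential retained each time step
random.seed(1)
# random synaptic weights; in a real model these would come from the connectome
weights = [[random.uniform(-0.3, 0.5) for _ in range(N)] for _ in range(N)]
potential = [0.0] * N

for t in range(30):
    fired = [v >= THRESHOLD for v in potential]
    for i in range(N):
        if fired[i]:
            potential[i] = 0.0                       # reset after a spike
    for i in range(N):
        synaptic = sum(weights[j][i] for j in range(N) if fired[j])
        external = 0.3 if i == 0 else 0.0            # steady input to neuron 0
        potential[i] = LEAK * potential[i] + synaptic + external
    print(t, "".join("*" if f else "." for f in fired))
```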

What I’m talking about is a little bit akin to the “if 100 monkeys typed for millions of years randomly on typewriters, eventually one of them would produce the entire works of Shakespeare” thing. Not exactly, because guided heuristics would be necessary at plenty of steps along the way. However, it’s something like that.
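A toy version of what I mean (the target phrase and the scoring rule are just my own example): purely random typing gets nowhere, but add one guiding heuristic and the "millions of years" collapses to a few thousand tries.

```python
# Monkeys-at-typewriters, plus one guiding heuristic. The target and the
# scoring rule are arbitrary illustrative choices; the guidance is the point.
import random, string

TARGET = "TO BE OR NOT TO BE"
ALPHABET = string.ascii_uppercase + " "

def score(text):
    # count positions that already match the target
    return sum(a == b for a, b in zip(text, TARGET))

current = "".join(random.choice(ALPHABET) for _ in TARGET)
tries = 0
while current != TARGET:
    tries += 1
    pos = random.randrange(len(TARGET))
    candidate = current[:pos] + random.choice(ALPHABET) + current[pos + 1:]
    if score(candidate) >= score(current):    # the deliberate, guided step
        current = candidate
print(tries, current)    # typically a few thousand tries, not millions of years
```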

I’m not saying that we’ll have a perfect simulacrum of a human brain by 2030, although maybe we will. In fact, I definitely think we will at some point, although it’s tough to say when.

What I definitely think we will have are computers that are capable of farming/driving/piloting/controlling entire factories/etc…and maybe ones that can even perform surgery totally unassisted. IOW, we will be replacing most of our workforce with technology, and I think it will probably happen (assuming Moore’s Law doesn’t poop out before then) around 2030 or so.

The issue isn’t computers that can socialize with us, although that’s an awesome application. It’s computers that can make us obsolete as a work force. THAT’S a game-changer.

Will AI be any good at moving goal posts?

Anyway…

I definitely think it can be done; I just don’t think that being able to pass any possible Turing test is all that important.

Some people talk about our use of idioms and metaphors as being difficult for computers to understand. Well, not really. There are only a finite number of metaphors/idioms that we use, and it’s not THAT difficult for computers to learn how to deal with those in restricted scenarios. Witness how Watson beat Ken Jennings at Jeopardy.
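In a restricted scenario, the crudest possible version is just a lookup table, something like the sketch below (the phrases and paraphrases are my own examples; Watson obviously does something far more sophisticated):

```python
# The crudest possible idiom handler for a restricted scenario: substitute
# known figurative phrases with literal paraphrases before further processing.
IDIOMS = {
    "kick the bucket": "die",
    "piece of cake": "something easy",
    "under the weather": "slightly ill",
}

def literalize(sentence):
    text = sentence.lower()
    for idiom, meaning in IDIOMS.items():
        text = text.replace(idiom, meaning)
    return text

print(literalize("Fixing that bug was a piece of cake"))
# -> fixing that bug was something easy
```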

Metaphors/idioms in completely context-free interactions would be much more difficult, but I still think that, with enough realtime biometric data to check possible meanings against, computers can get reasonably close, given time.

Sarcasm is also an issue, but I think simply teaching the computer about tone of voice would mainly fix that. In text-only interactions it would be much more difficult, I admit that…however, as time passes we will probably be typing less, and we are also, separately, more likely to be so wired in that a large amount of our biometric info, up to and including brain-wave activity, will be available, interaction by interaction, if we choose to make it available to our computers.
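By "teaching the computer about tone of voice" I mean something like this in its crudest form (the features and thresholds here are invented purely for illustration; detecting sarcasm from audio for real is an open problem):

```python
# A hand-wavy prosody check: compare an utterance's pitch swing to the
# speaker's usual range. Exaggerated sing-song or a dead-flat monotone on a
# positive phrase ("Oh. Great.") are both common sarcasm cues.
def pitch_swing(pitch_hz):
    return max(pitch_hz) - min(pitch_hz)

def looks_sarcastic(pitch_hz, baseline_swing):
    swing = pitch_swing(pitch_hz)
    return swing > 2.0 * baseline_swing or swing < 0.3 * baseline_swing

flat_reading = [118, 119, 118, 120, 119, 118]   # pitch contour for "Oh. Great."
print(looks_sarcastic(flat_reading, baseline_swing=40))   # True
```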

You aren’t talking about the singularity now. You are talking about the robotic revolution (the next step of the industrial revolution).

The singularity is generally understood to be when AI is actual intelligence that can build on itself at an ever increasing rate. And if/when that happens it will be amazing/scary/a fucking disaster for humans…depending on how it goes down.

You’ve now moved the goal posts to “computers are good at doing mundane human stuff, so now we are free to starve/learn French/start rock bands.”

And I don’t know how you think biometric data is going to give you great insight on what’s going on in the human mind and how it works. Measuring boners and pupil dilation ain’t gonna tell you how Stephen Hawking figures shit out.

So do you want me to ignore the rude tone in your post, or what?

But getting to the meat of the matter, Stephen Hawking has brain waves, which aren’t that hard to measure. He also has a pulse rate, breathing rate, and blood pressure. Even now it’s starting to be possible to “read” people’s minds by looking at brain activity. However, that’s not really AI, and is irrelevant. Also, your example is a corner case, and you know that.

And I was never talking about AI that is completely independent of humanity in the sense of being completely autonomous and writing its own programming code. While technically that might be possible,

  1. I don’t think it will come up. Humans will be more and more integrated with computers, to the point that it will start to be a little difficult to distinguish them from each other.

  2. It’s irrelevant. There’s very little difference between telling an AI “figure out the Grand Unified Theory” or “figure out how to stop the aging process” or “figure out how to cure cancer” versus the AI simply deciding to do those things itself. The amount of human input required is pretty much the same.

And I don’t think there’s anything left for us after figuring out the Grand Unified Theory.

Evolving a solution is a perfectly legit way to arrive at a solution. But it requires deliberate action and knowledge to guide the solution towards the goal.

I can tell you from experience that it’s not a simple process. And given the complexity of the human brain, evolving the number of capabilities required would still be an enormous amount of complex and clever work.

There just is no free lunch with this problem. Whether you expend energy creating all of the supporting math that can describe consciousness, or you spend an equal amount of energy trying to intelligently guide an evolved solution, you still have to do the hard work.

I’m going to go out on a limb and say “no”.

Oh, I’m aware. Assuming Moore’s Law holds, the work will get done, though, even if Ray Kurzweil gets hit by a bus tomorrow. It’s too broadly useful, and too interesting to computer programmers, for it to not get done.

And you do?

Nope. I’m also not the one talking like I’m an expert 'cause I read some article on the subject while at the same time showing my woeful lack of knowledge.

Actually, I got my information from a discussion with a professor at Rutgers whose area of research is AI. He’s a friend of the family, and also happens to be a Kurzweil detractor. I disagree with him about Kurzweil being wrong though, for reasons I’ve stated above.

Right. So you “have no work experience with developing AI”, which is exactly the line you used to try to discredit someone else’s opinion.