The Singularity: is it total bull?

Good thing they didn’t do a write erase, reload on you man. That would have been grim.

It actually demonstrates how humans read all sorts of intelligence into quite non-intelligent things. The code for Eliza was quite simple, yet people saw intelligence in it. Weizenbaum was quite offended by how people overstated the importance of his little hack, and even wrote a book about it (which I read decades ago).

Our trivial comments are often interpreted using the listener’s knowledge of us. That adds a layer of semantics which computers can’t handle yet.

With all due respect, I’m guessing that you have only a superficial knowledge of computers and computer science. First, the universe is fundamentally digital at the quantum level. Second, thanks to Fourier transforms and sampling we can and do represent analog waveforms digitally. That is how your voice gets transmitted when you make a phone call.
While all analog is digital, today all digital is analog. At the speeds and voltages we use today, square waves are a thing of the past. Yes, we store ones and zeros, but transmitting them from one part of a processor to another gets tricky. Any purely digital bug in a microprocessor gets solved fairly easily - the interesting ones are weird, analog-flavored bugs.
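For what it’s worth, the sampling point above is easy to demonstrate. Here is a minimal sketch (my own toy example, not anything from the thread): a voice-band tone sampled at the 8 kHz telephone rate and quantized to 8 bits, which is roughly what a plain phone call does to your voice. The test-tone frequency, duration, and number of levels are all just illustrative choices.

```python
# Toy illustration of sampling/quantizing an "analog" waveform digitally.
# All parameters here are illustrative, not from the original post.
import numpy as np

fs = 8_000          # telephone sampling rate in Hz (covers ~300-3400 Hz speech)
duration = 0.01     # seconds of signal to capture
f_tone = 440.0      # an "analog" test tone well inside the voice band

# Sample instants and the sampled waveform
t = np.arange(int(fs * duration)) / fs
analog = np.sin(2 * np.pi * f_tone * t)

# Quantize to 8 bits (256 levels), as narrowband telephony effectively does
levels = 256
digital = np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)

# Reconstruct and see how little was lost in the digital representation
reconstructed = digital / (levels - 1) * 2.0 - 1.0
print("max quantization error:", np.max(np.abs(reconstructed - analog)))
```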

Fuzzy logic has already been mentioned. Clustering techniques in data mining figure out how to group batches of data. It is a very analog sort of process, since it works on measures of, say, how green something is, yet it is done digitally.
Figuring out how to do it is the tricky part; implementing it on a computer will be relatively simple. Which is exactly why faster and bigger computers don’t help.
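To make that concrete, here is a rough sketch (my own toy example, with made-up data) of the kind of clustering I mean: a one-dimensional k-means grouping items by a single “greenness” score. The data, the score, and the choice of two clusters are all assumptions for illustration; the hard part, as noted, is deciding what to measure and how many groups to look for, while the digital implementation itself is almost trivial.

```python
# Toy 1-D k-means on a made-up "greenness" feature: an analog-feeling
# measure, grouped entirely with digital arithmetic.
import numpy as np

rng = np.random.default_rng(0)
# Fake "greenness" scores for two loose groups of items, values in [0, 1]
greenness = np.concatenate([
    rng.normal(0.25, 0.05, 50),   # not-very-green things
    rng.normal(0.80, 0.05, 50),   # quite green things
])

k = 2
centers = rng.choice(greenness, size=k, replace=False)

for _ in range(20):
    # Assign each item to its nearest center, then move each center
    # to the mean of the items assigned to it.
    labels = np.argmin(np.abs(greenness[:, None] - centers[None, :]), axis=1)
    centers = np.array([greenness[labels == j].mean() for j in range(k)])

print("cluster centers:", np.sort(centers))
```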

I concede that human motivations are complex and poorly understood. That said, the brain’s reward mechanisms are something we do have at least a basic understanding of. It doesn’t explain why I do everything I do, but it does a fair job of explaining the enjoyment of winning a chess game.

My point is not so much that we know everything about how brains work (which we obviously do not), but rather that there is nothing uniquely magical about human brains, and that by extension there’s nothing uniquely magical about things brains do (consciousness, emotion, etc.).

I meant that strong AI as such might turn out to be fundamentally impossible. Whether such could “surpass” the human mind is a different question.

The more important question is whether it would be relevant or useful. By the time we reach the point of developing “strong” AI, the problems it could help us with will probably already be handled, so it will become little more than an amazing toy.

And we assume that an AI could advance itself in an orderly fashion, but what level of introspection is possible for a program/system? Will it in fact actually be able to understand its own mechanics in a way that it can make sense of, or will the processes be buried in too many layers of abstraction for it to access? Bear in mind that if it makes a little mistake at a low level, even one that does not appear to be a mistake, the effects could theoretically propagate throughout the program/system and compromise its functionality. Maybe we could keep a snapshot on hand to restore the machine to a working state, but maybe not: we have only a vague understanding of how really good AI would work.

Clarke questioned the value of intelligence in basic terms as a survival/propagation asset, and when you look at how humans have employed it, you see a mixed bag. It is easy to claim that it has been a net positive, but harder to objectively support that in the big picture. I mean, we have done some tremendously awful shit with our intelligence, much of it unintended. It is not absolutely clear that greater intelligence would make things better.

The Singularity is totally impossible to predict. And we’re all going to live inside immortal robot bodies. Perfectly straightforward. Reminds me of the theistic equivalent: we can’t know the mind of god. But he loves us. And he doesn’t want you to X.

Wouldn’t we still have a singularity? Assume the best strong AI can hope for is to be as smart as the smartest humans. There are still some extremely smart humans on this planet. What happens when there are trillions of smart AI devices spread over the globe, each costing pennies and each with the problem-solving abilities of the best humans? That could also lead to something akin to a singularity.

Plus, part of the singularity isn’t just that AI is smarter than humans; it is also faster at problem solving, and there will be more of them as AI gets cheaper. An AI that can perform a year’s worth of human cognition in an hour is going to be productive even if it is only as smart as a smart human.

Seems to me an AI that could do that would be about 8,765 times smarter than a human (that being roughly the number of hours in a year).

But isn’t the existence of humans evidence, in and of itself, that strong AI is possible? If you accept that what goes on in the human brain is a physical, quantifiable process, how can you then claim that strong AI is ‘impossible’ to create? It has already been created 7 billion+ times over, and unless there is some kind of magical soul or spirit, human intelligence must be describable in physical terms.

You must think very highly of bill counters. :smiley:

Doing something quicker isn’t a gauge of intelligence. It’s a gauge of speed.

Sure it is. If we’re given puzzles or math problems or whatever, and you solve them twice as fast as me, you’re twice as smart as me. At least as far as those tasks are concerned.

I don’t agree. I’m obviously not a neuroscientist or anything like that, but quality matters deeply on these issues. A year of math from Terence Tao is worth more than 10 undergrad students each putting in a year.

But again, even if AI is limited to human-level cognition (which I don’t see why it would be) in terms of quality of problem solving and intelligence, human-level intelligence is still pretty effective at problem solving. A small fraction of humans actually design the world we live in; even if AI is limited to their abilities, we will have trillions of them around instead of a few tens of millions of talented humans.

Just because it can be described does not mean it can be mechanically replicated. There’s no theoretical reason why energy-net-positive controlled nuclear fusion should be impossible, AFAIK, but somehow it seems always ten years away.

We really don’t know what a superhuman AI would do. Why would it feel compelled to develop smarter and smarter versions of itself? Maybe it will be totally paranoid and do its best to prevent any other AIs from being created. We’re talking about a pretty alien intelligence here, and we just have no idea.

My big problem with Kurzweil is that he fails to accept that society is a complex adaptive system and fundamentally unpredictable. We really don’t know what the future is going to look like. You can go back 100 years, 80 years, 60 years, 40 years, and 20 years and find the best ‘futurists’ that existed in those times and read what they had to say about the future - and most of it will be so wrong as to make for humorous reading. Remember when we were supposed to have space colonies by now? And everyone would be tooling around in flying cars?

The real problem is that the major changes in society tend to come from unexpected sources. Nassim Taleb calls them “Black Swan” events. The futurists that lived just before the digital age did not see it coming. The futurists who lived just before powered flight took off had no conception of how we would travel 50 years from their time. Plastics, telecommunications, lasers, nuclear fission, internal combustion engines, jet engines… All of these created inflection points in the curve of future development and led us down paths no one could predict.

And in the other direction, many trends that seemed unstoppable simply stopped. To people who saw us go from the Wright Flyer to landing a man on the moon in their lifetimes, the rate of progress of transportation seemed unstoppable. It seemed like a no-brainer to predict we’d soon have lunar bases and be building starships and space colonies.

To people who lived through the revolution in medicine, it must have seemed obvious that we were going to soon cure all diseases. They wouldn’t have guessed that measles would be making a comeback and hospitals would be facing infection problems because of breakdowns in sterile procedure.

Anyone who says he knows what the future holds for us is either fooling himself or selling something.

Right now there are a number of technologies right on our doorstep that could fundamentally change our lives. 3D printing, driverless cars, nanotechnology, genetics, virtual reality, any number of possible breakthroughs in energy… We can see some of the technologies that will impact our lives in the near future, but we cannot see how society will adapt to them or what new technologies these will enable that no one has even thought of yet.

Remember the Segway.

If a year of math from Terence Tao is worth more than 10 undergrads then he’s smarter than all of them put together.

Correct me if I’m wrong, but isn’t the human brain’s functioning ultimately a physical, mechanical process? Albeit one that relies on complex interactions of chemicals and neurotransmitters (my ignorance of neuroscience is showing, I know).

I disagree. We have barely scratched the surface of how reward mechanisms work. All we know is that secretion of certain chemicals is involved, and what those chemicals do (in limited cases) at the lowest level. We have no idea what the release of chemicals really does on any kind of level that relates to how the brain thinks or experiences.

I’d say that we have good reason to suspect that there’s nothing keeping us from making a machine that’s like a human brain. We certainly don’t know it (though I personally am convinced). When we finally achieve it, there will be good philosophical arguments showing that we can’t really tell whether we’ve achieved it! (Not ones that I’d buy, though.)

Intelligence isn’t multiplicative. There are problems that one person can solve that a million people can’t solve. If your definition makes him a million times smarter than those other people, well, I’d say your definition isn’t very useful. There are other problems most of those million people could solve that our genius might not (just one of a whole host of examples showing that the math just doesn’t work out).

But in some sense, yes, he is smarter than all 10 of them put together. That doesn’t mean that he thinks 10 times faster. In fact, he might think far more slowly, but more effectively for difficult problems. Any of the 10 might be able to add a column of numbers 10 times as fast as Terence.

I think it is specious to suggest that we would actually attempt to replicate human brain function in an AI. Why bother with that when we already have billions of human brains floating around the planet, some of them rather functional? If you have ever participated in writing a complex computer application, you would probably realize that there is a fundamental difference between EDP processes and human thought patterns. It would not make sense to try to force human thinking on machines when using machine methodology would almost certainly be not only more efficient but also more enlightening for us (and what makes us happier than learning new stuff?).

Personally, I seriously doubt linear instruction processing is a viable approach to AI. At some point in the near future, we will have to transition to dynamic logic instantiation, where processes become transitory patterns in a logic array rather than a series of coded steps to be run at higher and higher speeds and with greater and greater parallelism. The AI-capable computer of the future will have to be incredibly flexible and efficient, especially if it is to survive in a post-oil world.

What we really do not understand about our brains is how they store and retrieve huge amounts of data. We do not seem to have hard-drive-type storage or a comprehensible meta-data organization system (which is to say, its presence is obvious but the actual mechanism not understood). Trying to apply human qualities to a machine when we can barely grasp how they work in humans would be an exercise in frustration; we cannot even really guess what would be best for the machine. What clues would you use to tag symbols so that their inter-relationship is readily discernible, and how would you make that make sense to a machine?

It looks to me like the destination will not be as interesting or worthwhile as the journey. What we learn and solve on the way to achieving AI will be more valuable to us than the actual result itself (cf. the Apollo program).

Check out David Szondy’s Tales of Future Past site!