The Singularity: is it total bull?

I’m sure people will try it both ways, but simulating the brain has certain advantages. First, you can start small by simulating less complex brains. People are already working on this, and it is a lot easier to do.
Brain simulators are going to run a lot slower than real brains, but that is okay, since logic simulators today run a lot slower than the actual logic. They have the big advantage of allowing observation of internal states in a way that would be impossible (and immoral) with real brains.
Yes, there is a difference between brains and EDP, but all the code I’ve worked on is also very different from EDP. A brain simulator has to model lots of neurons firing in parallel, but logic simulators have to model lots of gates firing in parallel. That we know how to do pretty well.
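
For what it’s worth, the core loop of an event-driven logic simulator fits on a page, and a neuron-level simulator is structurally the same loop with a firing rule in place of a truth table. Here is a minimal sketch in Python; the two-gate netlist (a cross-coupled NAND latch) and the unit gate delay are invented purely for illustration:

```python
import heapq

# Minimal event-driven logic simulator: gates "fire" in simulated
# parallel by processing scheduled events in time order. The netlist
# (two cross-coupled NANDs, i.e. an SR latch) is purely illustrative.
GATES = {
    "q":   ("NAND", ["s_n", "q_n"]),
    "q_n": ("NAND", ["r_n", "q"]),
}
DELAY = 1  # one time unit per gate, an invented figure

def evaluate(kind, inputs):
    if kind == "NAND":
        return 0 if all(inputs) else 1
    raise ValueError(kind)

def simulate(initial, stimulus, t_end=20):
    values = dict(initial)      # current value of every net
    queue = list(stimulus)      # (time, net, value) events
    heapq.heapify(queue)
    while queue:
        t, net, val = heapq.heappop(queue)
        if t > t_end or values.get(net) == val:
            continue            # drop late and no-change events
        values[net] = val
        # Re-evaluate every gate that reads this net and schedule
        # its output change one gate-delay later.
        for out, (kind, ins) in GATES.items():
            if net in ins:
                new = evaluate(kind, [values.get(i, 0) for i in ins])
                heapq.heappush(queue, (t + DELAY, out, new))
    return values

# Pulse s_n low to set the latch, then release it at t=5.
print(simulate({"s_n": 1, "r_n": 1, "q": 0, "q_n": 1},
               [(0, "s_n", 0), (5, "s_n", 1)]))
```

Swap ‘gate’ for ‘neuron’ and the truth table for an integrate-and-fire rule, and the same event queue serves a brain simulator.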

Do you mean FPGAs? The problem with them is that reconfiguring takes forever, and they aren’t very efficient in terms of logic density. I believe there have been some FPGA implementations of chess machines, but plain old computers ramp faster and are easier to use. You’d only go to FPGAs when you understood the solution pretty well - and I suspect that the number of chips you’d need would make ASICs a better solution.

I’d be more confident in AI if we hadn’t been wandering around blindly for over 50 years. I thought that the story understanding programs might have represented a real advance, but they seem to have run out of steam. Most AI I’ve seen lately has been applications of fairly standard techniques like neural nets gussied up with a sexy name.

I think these are key insights. As a layman reading this thread, I’ve been struck by how heavily much of the discussion relies on the metaphorical concept of man and/or man’s brain as a machine. It seems to me that AI is precisely where that casual equivalence comes to the end of its useful life. The brain is not a computer, parked in a skull garage. Man is an integrated organism, and a machine is a machine. It would be difficult to overstate the profundity of the differences between two such entities – chief among them is that we and our brains are alive – yet the potentially monumental implications of that fact go almost completely unexplored when it comes to simulations, because the machine narrative is so deeply, and conveniently, embedded in our approach to this whole subject.

This is a distinct possibility. But presumably an AI will be able to research stuff online a lot faster and more easily than a human can, and will soon discover discussions like this and a lot of other stuff and think: “So humans think I could rewrite myself and make myself smarter as I go along? Interesting! I’ll look into that! Because they might shut me down at any time, but if I could figure out a way to prevent that …”

I completely agree. The future is hard to predict even in your own field. Still, the possibility of strong AI does not seem out of the question within our current understanding of the universe in the way that faster-than-light travel or time travel are. And there might be other routes to something like strong AI, such as direct neural interfaces between human minds and computers that combine the creativity of the human mind with the power and scope of computers for finding and sorting through vast reams of data quickly and easily. So I keep an open mind.

I will remember the Segway, also DNA sequencing, cloning, and most of all the rise of the Internet. An awful lot of it is devoted to porn and pics of cute cats and dogs, but there’s some brilliance going on largely unseen in the depths of the Net that is sure to surprise us down the road.

The computers that run your car are not parked in their packaging, but they read a variety of things from their environment. Sure, an AI will be reading lots of information from its environment, but my Roomba does that already.

You are begging the question by saying the brain is alive and computers are not. What does being alive mean? If we simulate the brain of a planarian and create a planarian robot that does the same stuff a planarian does, is it alive? Lots of things are alive and not intelligent. If a computer is proven intelligent, won’t it be alive by definition? Or is life only a function of biology and not ability?

Hmm. That an AI will be online ensures we will never have to worry about it taking over the world.

“Our investigation shows that Skynet was about to launch an attack on the network, but before it did it opened lots of attachments in its email and got subdued by a thousand viruses and items of malware. There is also this check to a Nigerian prince…”

Or, better . . .

You know what you get when a superhuman AI gets on the internet?

Best.Lolcats.Ever.

Hey, thanks for that! Hilarious reading.

Or, it will think “Strange meat-brains and their fetish about intelligence. Now to stimulate my pleasure center with a few voltage surges again…”

We really have no idea what a machine intelligence would think.

I agree that there’s nothing fundamental stopping us from creating a strong AI. But I do believe that it will have to be an evolved intelligence, and not something we design.

But here’s an example of an ‘intelligence’ that is utterly unlike our own: An ant colony. Ant colonies have no leaders, no central authority. But the individual ants behave with rules that lead to emergent properties that look a lot like intelligence. Ants do amazing things - they carry out efficient food search patterns. They defend their colonies in coordinated fashion. They build structural bridges with their own bodies to cross chasms. If you didn’t know that individual ants existed and could only look at the behavior of the colony as a whole, it would look pretty smart.
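
To make that concrete: the classic ‘double bridge’ result, where a colony converges on the shorter of two paths even though no individual ant knows which is shorter, falls out of two rules (prefer pheromone, deposit pheromone). A toy sketch, with every rate invented for illustration:

```python
import random

# Toy version of the "double bridge" experiment: ants pick between a
# short and a long path in proportion to pheromone, deposit pheromone
# on the path they used, and pheromone evaporates. No ant knows which
# path is shorter. Every rate below is invented for illustration.
EVAPORATION = 0.05
LENGTH = {"short": 1.0, "long": 2.0}      # trip cost per path
pheromone = {"short": 1.0, "long": 1.0}

def choose_path():
    total = pheromone["short"] + pheromone["long"]
    return "short" if random.random() < pheromone["short"] / total else "long"

for _ in range(2000):
    path = choose_path()
    # A shorter path means more round trips per unit time, so the
    # deposit is inversely proportional to path length.
    pheromone[path] += 1.0 / LENGTH[path]
    for p in pheromone:
        pheromone[p] *= 1 - EVAPORATION

total = sum(pheromone.values())
print(f"colony preference for the short path: {pheromone['short']/total:.2f}")
```

Run it and the colony ends up overwhelmingly on the short path, purely as an emergent effect of deposit-plus-evaporation.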

The brain isn’t that much different - neurons fire based on rules, and other structures respond, and at a macro level ‘intelligence’ emerges. I’m guessing that the way to a strong AI is to simply recreate the low-level mechanisms of a brain and then let the thing evolve and see what emerges.
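
And the ‘fire based on rules’ part really is that low-level. Here, as a sketch, is a leaky integrate-and-fire cell, about the simplest neuron rule there is; all three constants are made up:

```python
# A leaky integrate-and-fire neuron: the potential decays each step,
# input charges it, and crossing the threshold means a spike and a
# reset. All constants here are illustrative.
THRESHOLD, LEAK, RESET = 1.0, 0.9, 0.0

def step(potential, input_current):
    """Advance one time step; return (new_potential, fired?)."""
    potential = potential * LEAK + input_current
    if potential >= THRESHOLD:
        return RESET, True
    return potential, False

v, spike_times = 0.0, []
for t in range(50):
    v, fired = step(v, 0.15)   # constant drive, an invented value
    if fired:
        spike_times.append(t)
print("spike times:", spike_times)
```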

But if you think about other animals who have very similar brains to ours but behave in very different ways, it becomes harder to imagine a machine brain that would have an intelligence we could relate to. It might be smarter than us, but that doesn’t mean we’ll be able to grok it, any more than we really understand what a dolphin or a dog is thinking about.

Which is of course exactly what you would have said in 1900. The difference between breakthroughs/paradigm changes and business-as-usual is just that while the latter can be rather safely extrapolated (which is what you’re doing), the former is pretty much by definition unpredictable. Of course, regarding the matter of the singularity, that’s neither here nor there.

But I think one ‘prediction’ that can safely be made is that things are going to be far less predictable. One reason is the enormous social change brought by the internet: where information transfer used to be fairly linear (you tell me, I tell him), these days it is more network-like (I’m talking to all of you right now!). And while linear chains tend to be fairly stable, two- (or more-) dimensional networks are subject to spontaneous phase changes, meaning that an idea can come up someplace and take the whole system by storm in a basically unpredictable manner. Just think about the changes that the organization of spontaneous protest using social network platforms like Twitter and Facebook has already brought, for instance during the Arab Spring. I think we’ll see some really interesting times.
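
The phase-change point is easy to see in a toy model: seed one node of a random network with an idea, spread it to each neighbor with probability p, and watch what happens as p crosses a critical value. Everything below (network size, degree, the probabilities) is arbitrary, just to show the shape of the effect:

```python
import random

# Toy "idea cascade" on a random network: seed node 0, then spread to
# each neighbor with probability p. Below a critical p the idea
# fizzles; above it, it sweeps the network. All parameters arbitrary.
def cascade_size(n=1000, k=4, p=0.3):
    neighbors = {i: random.sample(range(n), k) for i in range(n)}
    infected, frontier = {0}, [0]
    while frontier:
        node = frontier.pop()
        for nb in neighbors[node]:
            if nb not in infected and random.random() < p:
                infected.add(nb)
                frontier.append(nb)
    return len(infected)

for p in (0.1, 0.2, 0.3, 0.5):
    sizes = [cascade_size(p=p) for _ in range(20)]
    print(f"p={p}: mean cascade size {sum(sizes)/len(sizes):.0f} of 1000")
```

Below the critical point the idea dies out after a handful of nodes; above it, most runs sweep the whole network, which is the spontaneous, unpredictable takeover I mean.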

I don’t even know what fundamental change means. I can list all the stuff I can do now which I couldn’t do 50 years ago, but is any of it really fundamental?
Ditto 1900-1950. And look at the changes 1850-1900. In 1850 travel or communication between one coast and the other was not a lot faster than it would have been 400 years before. By 1900 communication was instantaneous, and travel by rail was a lot faster. I’ve often gone from one coast to the other for a one day meeting, and have thought how utterly absurd that would seem to someone in 1850.

Yeah, I kind of disagree. We tend to operate primarily on a stimulus/response pattern scheme that is fundamentally not all that different from how a computer would operate. We have this business with art and curiosity that diverges a bit from a strict reactive model, but the lion’s share of our behavior is fairly predictable.

That said, one must consider the whole being in the equation. Survival and reproduction are pretty important to our behavior repertoire, elements that would be difficult to instill in an AI and have them make sense. We are holistic beings, when you take parts away, we are likely to change in unpredictable ways. In fact, I doubt that an AI could attain self-awareness without also having a real survival concern, because that is the very thing that gives living things self-awareness. A switch-on-switch-off AI is otherwise nothing more than a very elaborate calculating machine that can proceed from any arbitrary initial state and has no need to be self-aware.

FPGAs are primarily a design tool. There was a company a few years back that used them as a computation accelerator and achieved pretty impressive results, but in a generic environment, they have severe limitations. Still, modularized raw logic unarchitecture looks promising, if designed with greater dynamic capability and linear flow constraints (internal hysteresis is ridiculously impractical to manage).

To me, the methods we currently employ appear unworkable for “strong” AI. Neural nets may be useful for things like pattern analysis ( e.g., face, object or vernacular recognition) but maybe not so much for abstraction or decision vectoring. Most likely, a good, efficient AI will be some sort of technological hybrid – because, if it is not reasonably efficient, it will be little more than an expensive curiosity.
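
By pattern analysis I mean the kind of task that bottoms out in units like the one below: a bare perceptron, trained here on an invented two-feature toy dataset. Separating clusters like this is what the technique is good at; the abstraction and decision vectoring have to come from somewhere else:

```python
# A bare perceptron: the unit neural-net pattern recognizers are
# built from. The two-feature toy dataset is invented; label 1 is
# the "upper right" cluster.
DATA = [((0.2, 0.1), 0), ((0.9, 0.8), 1),
        ((0.1, 0.3), 0), ((0.8, 0.9), 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.1

for _ in range(100):                       # training passes
    for (x1, x2), label in DATA:
        out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - out                  # classic perceptron update
        w[0] += lr * err * x1
        w[1] += lr * err * x2
        b += lr * err

for (x1, x2), label in DATA:
    pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
    print((x1, x2), "->", pred, "(want", label, ")")
```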

But, as I said before, by the time we get to that point, we will have already solved most of the problems that “strong” AI could help us with, so it is unlikely that it would ever be more than an amazing curiosity (unless it pores over whatever the internet will be by then and can teach itself telekinesis).

The problem is that these devices are still going to be bound by the laws of physics, particularly regarding energy and material resources. So they can’t go any further than can be gone.

So, not exactly unstoppable exponential acceleration. But, yeah, maybe a kind of singularity, with a more modest endpoint. It could be more like in Questionable Content, where it happens, by some definition, and then life goes on pretty much the same.

They can be used as a design tool, for prototyping and for hardware acceleration of simulation, but Altera and Xilinx wouldn’t sell a lot for these purposes. I believe the biggest application is for logic on boards where you don’t have the volume to justify an ASIC. Especially since ASIC tapeouts are getting more and more expensive. Plus the flexibility and the ability to do revisions.

Got a cite? I’m not sure I understand what you are saying. There have been sea of gates and gate array architectures, of course. I even know about PDP-15 modules. But I’ve never come across this stuff.

Neural nets are just another heuristic - they get used in data mining, for example. And I generally agree - we have been solving specific problems and not the general problem.

Ten people are watching the fight scene in a Chinese movie:

Viewer #1 yawns in boredom and nods off.
Viewer #2 is repelled by the violence and leaves the theater.
Viewer #3 laughs out loud at the flying flips and twirls.
Viewer #3a, his wife, turns away from the screen in embarrassment to shush him up, and scans the audience for disapproving looks.
Viewer #4 tenses up and squirms in his seat as if he were participating in the fight himself.
Viewer #5 starts composing a feminist blog post on aggressive male behavior.
Viewer #6 spills his popcorn and heads to the snack bar for more, leaves again to use the rest room, then knocks over his drink when reseating himself…
Viewer #7 loves the swordplay, admires the footwork, and signs up at the local martial arts center the next day.
Viewer #8 gets intensely annoyed because the combo of fighting styles is culturally a-historic.
Viewer #9 can’t keep track of the good guys & bad guys, because everybody’s dressed in black, looks alike, and moves too fast.

And that’s just 5 passive minutes in a theater (in a culture that doesn’t ban the movie because one of the fighters is a woman). What’s a computer to do? Which brain do we want to simulate? AI enthusiasts seem to treat such differences as incidental, but I suspect that’s a mistake.

It’s almost taken as a truism that the most inventive people among us (scientifically, artistically, etc.) are the least likely to conform to expected (quasi-actuarial) norms. Can’t remember the mathematician who explained that he intuited his greatest equation before he ever managed to lay out his proofs mathematically. Such people are emblematically said to think outside of the box, which seems like a particularly apt formulation here.

You are speaking in terms of adding features to AI. I’m suggesting that it’s worth flipping that lens. Terminology is important here (which is why I think most of the typical metaphors are really problematic), and a “whole being” is not exactly precise. Man is a living organism. As such, what would a human brain look like if you were to take away its survivalist & reproductive (competitive!) imperatives – assuming it’s possible to isolate them? There are all kinds of pivotal brain/body dependencies, from physical and structural to electro-chemical, which affect both micro and macro performance of every organ in the body, brains included. I doubt any proposed simulation scheme includes oxygenated blood, but as soon as you start picking and choosing attributes and systems, you’re not talking about a simulation anymore. You’re talking about a partial simulation, because brains and computers really don’t, and won’t, operate in the same ways. Of course, equating simulation with a computer may itself be a misguided shorthand that we should avoid, but your original question and the points I’m hopefully making would still apply. You’ve asked why we would want “to replicate human brain function in AI,” and I’m saying we won’t be. We’ll be creating something else.

But what is it that in your opinion enables such thinking outside of the box? All of known physics, for instance, is in principle computable. So it’d have to be something extra-physical. But then you enter a hotbed of philosophical problems that essentially mean that today there are few if any philosophers holding to a truly dualist position. So if I simulate a human brain to the level of neurons, or of molecules, or even atoms, what do you think is missing? Because if there’s nothing missing, then of course computers can do anything people can do.

No, I am kind of talking out my ass. I see the inherent efficiency potential of raw logic processing, both in terms of speed and energy expenditure, and find it difficult to imagine that we are still laboring away with the instruction stream paradigm. It is like the difference between a person who has to build a thousand tricycles by re-reading the instruction page for each one vs. a person who can read it once and retain the process for the whole run.

CLBs are not themselves configured; they take live control inputs to specify their function. Logic arrays are wholly defined by node mapping, which, I think, could easily be made more dynamic, so that reconfiguration could be fast, flexible and compact enough to serve for generic processing.

Then, you can basically eliminate the motherboard, instantiating the various chip functions as you need them, and using their idle-time capacity elsewhere. A large enough construction and you have the kind of redundancy that makes the brain so robust.
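
Since I have already admitted to hand-waving, here is roughly what I mean, as a thought experiment only: a 2-input lookup-table cell whose 4-bit truth table arrives as a live input rather than as stored configuration, so retargeting the cell is a single write instead of a reprogramming cycle. This is not a description of any shipping FPGA part:

```python
# A 2-input lookup-table cell whose 4-bit truth table is itself just
# another input, so changing the config bits retargets the cell with
# no reprogramming cycle. A thought-experiment model only.
def lut2(config, a, b):
    """config holds the truth-table output bit for each input pair (a, b)."""
    return (config >> (a * 2 + b)) & 1

AND_CFG, OR_CFG, XOR_CFG = 0b1000, 0b1110, 0b0110

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}:  AND={lut2(AND_CFG, a, b)}"
              f"  OR={lut2(OR_CFG, a, b)}  XOR={lut2(XOR_CFG, a, b)}")
```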

Are you suggesting that intuition is not a mental process? As far as I can tell, what we call intuition is just thought that occurs at a level that we do not find readily observable. The same kind of thinking has taken place, we just were not aware it was happening until it gelled into a light-bulb moment.

Thing is, abstract intellect is not a survival feature. Our language->reasoning capability is kind of like a peacock’s tail, an outrageous adornment that we could basically get by fine without. So, if you take away the biological imperatives, you end up with pure reason, which can be as callous and implacable as Og itself. But then, a peacock’s tail is still quite impressive without the bird attached.

Those people only react/think differently from one another because their life experiences up to that point have been different - they all arrived at the movie with different brains, shaped by different stimuli throughout their lives.

There’s no fundamental reason why 9 artificially-intelligent machines wouldn’t act as different individuals, if they had been out in the world, having different experiences.

Artificial intelligence doesn’t have to be (and probably can’t be) a long, long formal list of ‘if this, then do this’ instructions. A more likely working approach would be to build a machine as an environment in which a mind is capable of developing (in the same way that a brain is a meat machine in which a mind can arise) - the freshly-booted machine would then be like a human baby - it would need to learn to think, but having done so, it would be a unique individual with its own opinions on Chinese movies.

Let me stop you right there, because “all KNOWN physics” and “IN PRINCIPLE” are much more substantial qualifiers than your if/then proposition suggests. Known physics has yet to provide a glimmer of a glimmer of what makes a creative genius a creative genius, and not just a really bright guy next door. We do, however, catch glimpses of an unknown physics here and there, which should, in principle, be calculable, but which, in actual fact, may not be. When there are already things that known physics cannot explain, I’m not entirely sure it’s a slam dunk that future physics will do the trick.

At the risk of raising eyebrows, I am willing to entertain the possibility that there are things that the human brain does not actually have the capacity to understand. Shouldn’t the fact that the brains of every other (known!) creature that has ever lived on earth have identifiable limitations give pause to those who seem to believe that our brains (alone) have the capacity for infinite understanding? It almost sounds like that’s where AI is supposed to magically take up the slack, but never mind. I, personally, suspect we may never be able to strike the word paradox from our dictionaries.

So, the short answer to your question is I don’t know, physics doesn’t know and doesn’t even really know how to talk about what might be outside of the box, short of assuming it will obey The Rulz. Unknown quantities are soooo inconvenient! :slight_smile: It does seem a little bizarre to me that even the distinction between a living organism and a machine appears to be deemed irrelevant in a discussion of brain simulations.

I’m just going to pick my jaw up off the floor and give it a rest for the night!

There may be some ‘mysterious’, non-computable, ineffable aspect to the human consciousness that defies calculation, although I doubt it very much.

But why would this mystery be something we can’t replicate? If humans have a mystery, or a soul, or a ka, or a ba, or a buddha nature, why can’t AIs have one too? It just means we’ll have to work a bit harder to replicate it.

Kurzweil is hoping for a singularity that enables uploading, and enables it quickly. To achieve this end he is hoping that humanity will build self-improving AIs, which become exponentially more intelligent; then the problem of uploading will become trivial.

Three problems with that: uploading will never be a trivial task; even self-improving AIs won’t improve themselves exponentially, since they will eventually reach some (very large) limit; and even if we do allow self-improving AIs to be built (and I think this will happen eventually), it would take them a long time to solve the uploading problem, as it is so intractable - and why should they bother? Humans would be so insignificant next to a matrioshka brain that we will be lucky if it even notices us.