Well, how long does it take you to do a numerical integration to 1% precision?
No, it doesn’t. Faster computers give you faster results. Better programming gives you better results.
But computers can do anything neurons can do, i.e. they can simulate them explicitly; thus, on a sufficiently powerful computer, you can simulate a whole brain. Whatever a human brain can do, that simulated brain should be able to do as well (though there are dissenting points of view here, mostly thanks to John Searle’s Chinese Room argument; this basically boils down to accepting that, for some reason, the functional characteristics of the brain are not sufficient for the je-ne-sais-quoi of conscious experience).
There’s also an often-heard, but fallacious, argument that a simulated breeze can’t knock over a real house of cards, or that simulated rain won’t get you wet; but this hinges on either a level confusion (if you were part of the simulation, you would get wet), or assumes its conclusion (that wetness is some subjective part of experience that can’t be computed).
And the digital model is not the only one when it comes to computation. For instance, neural networks are explicitly based on the (albeit simplified) principles of biological neurons: inputs from other neurons are tallied according to the weights assigned to them, and if a certain threshold is exceeded, an output is produced.
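A threshold unit of the kind described fits in a few lines of code. This is only a sketch of the idea; the particular inputs, weights, and threshold below are made up for illustration:

```python
def neuron_output(inputs, weights, threshold):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > threshold else 0

# Made-up example: two active inputs whose combined weight clears the threshold
print(neuron_output([1, 0, 1], [0.6, 0.4, 0.5], 1.0))  # weighted sum 1.1 > 1.0, so it fires: 1
print(neuron_output([0, 1, 0], [0.6, 0.4, 0.5], 1.0))  # weighted sum 0.4 stays below 1.0: 0
```

Real artificial neural networks replace the hard threshold with smooth functions so the weights can be trained, but the tally-and-threshold core is the same.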
(Nevertheless, when it comes to AI, if we go with the historical record, betting against it has been the safe choice since the 70s.)
Better software is useless without the necessary processing power. You can try to run Windows 8 on an old 486 machine…but it will choke on it.
And we’ve been updating our graphics software ever since there’s been graphics software, so your argument is beside the point.
I’m asking you to explain how computers have been faster than a human brain for the last 50 years. Can you back that up?
Computers are faster than a human brain.
The computer on my desk can calculate the cube root of an arbitrary number in less than a second. It would take me a lot longer than that, especially since I’d have to look up the algorithm first.
Even 50 years ago in 1963 a computer could calculate a cube root faster than a human being.
Therefore, computers have been faster than humans for 50 years.
Oh, calculating a cube root isn’t what you’re talking about?
Thing is, faster computers don’t magically make difficult problems solvable; they make solvable problems solvable faster. So if we could simulate a human brain given enough computational cycles, we could simulate it on hardware running at 1/10th the speed; the simulation would just take ten times as long to run.
If you show me a computer system that could pass a Turing Test except for the problem that it takes a week to generate a response, then I’d agree that faster computers would solve the problem, we just need to turn a week into a minute. And all we need for that is more processors running faster. That turns “technically possible but infeasible” into “shit that works”.
Except that’s not where we are with AI. It used to be a staple of AI discussion that all we needed was faster computers with more resources and AI would be trivial. Except that turns out not to be the case. What we call “AI” nowadays isn’t HAL, it’s expert systems that can drive a car, or play chess, or answer Jeopardy questions. We’re nowhere close to strong AI, and we aren’t any closer today than we were in 1983, except now we have a lot of our optimistic assumptions debunked.
Yeah, good luck with the naysaying. ROFL
Sure. But if you run Windows 1.0 on a modern computer, it looks exactly like it would on an old one; it’s just faster. Processing power alone gets you the same answer you would get from a slower computer, only sooner.
More processing power allows you to run more complicated programs that might be impractical on a slower computer, but the result will be exactly the same. Your idea that a faster computer somehow calculates things differently simply by virtue of being faster makes no sense.
Provided we actually know exactly how a neuron works and can mechanistically emulate it. Even a neural-net model is probably an oversimplification; if neurons are chaotic in the mathematical sense, with highly nonlinear sensitivity to initial conditions and chaotic feedback loops, then modeling them is going to be a bitch. It might take a molecular-level simulation to faithfully duplicate how a single neuron works.
We already have extremely good models of neurons. We have had them for years. What we don’t currently have, and probably won’t have, is the ability to model 50 billion neurons working together anywhere near real time.
He is absolutely correct. First, if he’s speaking as an engineer or scientist, saying “zero percent” implies a rounding error of up to 0.5%, so you could have as much as a half-percent chance of living forever. But no, you don’t. The odds of you living forever are exactly zero, even if the universe and the conditions for life existed forever.
That doesn’t mean it’s impossible, though, and since you’re willing to grasp at unlikely straws, I’ll explain.
The probability is zero because every year, you have some fixed probability of dying (due to accident or whatever, regardless of medical technology). Equivalently, you have a fixed probability of surviving the year; let’s call that Pl, for probability of living. The probability of living 2 years is Pl squared; for three years it’s Pl cubed. Since Pl is always less than 100% (that is, less than 1), the product gets smaller every time it’s multiplied by itself again. So Pl to the infinity power is exactly zero.
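You can watch that shrinkage happen numerically. The 99.9% annual survival probability here is purely illustrative, not actuarial data:

```python
# Assumed annual survival probability Pl (illustrative, not from any life table)
p_live = 0.999

# P(surviving n years) = Pl ** n, which shrinks toward zero as n grows
for years in (1, 10, 100, 1000, 10000):
    print(f"P(surviving {years:>5} years) = {p_live ** years:.6f}")
```

Even with only a 0.1% annual chance of dying, surviving ten thousand years is already a worse-than-one-in-ten-thousand shot, and the limit as the years go to infinity is exactly zero.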
The reason that doesn’t mean it’s impossible is that there are, oddly enough, two kinds of zero (though they’re equal in value). The probability of rolling a seven with one standard die is zero: it has never happened and never will. That’s one kind of zero. The other kind of zero is an “infinitesimal”. Example: one kind of electron cloud has a roughly hourglass shape, with the skinny part at the center of the atom. The cross-sectional area of the cloud at the center is zero, which means the probability density of finding the electron at that point is zero. Yet the electron can be found on either side of the center; it passes through the center, so being there isn’t impossible. However, you could measure its position an infinite number of times and, with probability 100%, never once find it there. Isn’t math fun?
However, conditions for life in the universe are unlikely to exist forever, thanks to entropy. Google “entropy death of the universe” and I bet you’ll find lots of fun stuff to read, for someone who has dreams of living forever.
Again, he’s absolutely correct. Now, to how many decimal places did he mean that? That would be hard to calculate, but someone with more skills than I could calculate an upper bound for a much smaller age, 122. That is, we could say with certainty that the probability is less than some number X, based on comparing the number of people whose ages are documented with the number of those whose ages exceeded 122, and the odds of reaching 5,000 would be somewhat less than that.
As an off-the-cuff guess, there are probably more than a billion death certificates on record, and only one with an age of 122, so your odds of reaching 122 are less than one in a billion. Of course, you like long odds, so you’re OK with that. It fully corroborates what your friend said: zero percent.
Minor quibble: it has a lot to do with speed, and speed does tend to track density. But your point still stands, and you’re right that Moore’s law does not speak directly to speed. (It didn’t mention disk drive space and speed either, but those have also risen exponentially at a similar rate, despite a much more tenuous connection to transistor density.)
Or even non-real-time, as graphics are done for movies. Lately we’re doing quite well at modeling one cortical column, and even that runs nowhere near real time and takes an extraordinary amount of resources.
The increase in processing power and data storage will have an enormous impact on our ability to solve problems, but it will not solve the problems for us. It won’t go much faster than our ability to understand what we’re learning and write new programs. AI holds great promise but so far, after over 50 years of research, it’s shown results that to laypeople would seem pretty small – that is, trivial compared to human intelligence, or even rat intelligence. Increased processing power will help dramatically, but we still have no idea how actual “intelligence” works, and just throwing processing power at it won’t provide that understanding.
Even if we model a brain based on detailed anatomical study and it works, we still won’t know how it works! We’d get a great tool for studying it, but a tool provides insights, not answers. Plus we’d need a psychologist to keep it from getting depressed and ruining our studies.
The programming model of “neural nets” was inspired by biological neural networks, but it has little to do with how they actually work.
Wait. I’ve got it. All we have to do is reverse the phase polarity of the photon feed through the duotronic transtators and then couple it to the quantum field generators!
Presto! Strong AI and eternal life at the same time.
You’re welcome, royalty payments may be submitted to this address.
No. We’ll have computers with an amount of storage, measured in bytes, about equal to the number of neurons in a brain.
That’s nowhere near the same processing power; it’s off by orders of magnitude.
Strong AI isn’t likely in our lifetimes. Sorry.
Of course, prediction is always difficult, especially about the future. But we’re not that far from processing power on the level of the brain’s; wiki states that it takes about 4*10[sup]16[/sup] operations per second to simulate the human brain, which is close to what is currently possible—much closer than I would have thought, in fact. The advent of computing power equivalent to the brain’s has been fairly consistently predicted for the 2020s (at least since 1997); if wiki’s numbers are right, it might be even sooner. (It might help that there are several large-scale projects—such as the Human Brain Project, one of the two EU ‘flagship’ projects to be awarded grants on the billion-Euro scale this year—with brain simulation as their explicit goal right now.)
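Taking that wiki figure at face value, the arithmetic is easy to play with. The machine speed below is my own rough assumption (a ~10-petaflop machine, about what a 2013-era top supercomputer delivers), not anything from the projects mentioned:

```python
BRAIN_OPS_PER_SEC = 4e16        # the figure quoted above for simulating a human brain
MACHINE_OPS_PER_SEC = 1e16      # assumed: ~10 petaflops, roughly a 2013-era top supercomputer

# How much slower than real time the simulation would run on that machine
slowdown = BRAIN_OPS_PER_SEC / MACHINE_OPS_PER_SEC
print(f"One second of brain time would take about {slowdown:g} seconds to simulate")
```

On those assumptions you’re only a small constant factor away from real time, which is why the 2020s predictions don’t look crazy; of course, having the operations per second is not the same as knowing what operations to perform.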
Whether or not that’ll give us strong AI I won’t speculate however.
Even if that works, though, all it would get you would be something that could, in decades, learn to think as well as a human. We can already produce those, and the process is a lot more fun than programming is.
Yes, but show me a parent who’s never even once wished for a pause button…
Or mute…
Actually, on thinking about it, my previous post was far too optimistic. A person with Down’s syndrome, for instance, has just as many neurons as a normal person. And the same can be said of a vast number of other forms of mental retardation. There are a great many things that can go wrong, so it’s vastly likely that our first brain simulation will be a simulation of a profound moron.
Further, even if we do get to the stage where we can simulate the brain of a supergenius, that brain will still have to learn. And doubling the speed of the computers won’t double the speed that the supergenius brain learns at, because a great deal of learning comes from interactions with other humans, and hence is limited by the speed of those other humans.
It took a billion years of evolution to get from Archaea to humans; we can’t expect to get a working human mind overnight. But we will get much more interesting non-human AIs long before we can replicate the human mind.