Well, genome sequencing is becoming pretty mainstream. And if you put enough genomes in a database, along with medical histories and personal profiles, you can get a lot of good science just from brute-force data mining. It won’t cause advances as big as Kurzweil is predicting (nothing will; after all, he predicts a vertical asymptote), but it’s not nothing.
I’m assuming somewhere in the 1950-1970 window. Computers have been making computers smarter for a long, long time now (with the help of human scientists and engineers, of course, but so what?). There’s no way that a modern Intel chip could have been designed without computers, and the computer it was designed on was designed with the help of computers, too.
Yeah, we are sequencing a lot of genomes, and the cost of genome sequencing has dropped as fast as or faster than Kurzweil predicted; we are already at a $1000 genome. But again, that is a far cry from a decade full of safe, FDA-approved biotechnological advances that dramatically improve our health. We are still decades away from that.
As far as bootstrapping AI, I was referring more to AI which had language and goal-oriented behavior similar to humans and which could advance its own ability to comprehend and accomplish goals. Bootstrapping strong AI in a positive feedback loop is considered to be the launch point of the singularity.
It’s hard to think of reasons why it would not be possible however.
They don’t need better abstract thought than us.
Merely being scalable would be enough. MijinBot’s brain could be the same as mine, but with faster connections so it can solve problems in a day that would take me a week.
MijinBot then works on a means of interconnecting two brains such that the resulting entity can solve problems much faster than MijinBot. And so on.
I never said computers can’t become intelligent. On the contrary, I do believe it’s possible for a true machine intelligence to some day exist. Perhaps it’s even inevitable. What I said is that if the goal is to create an intelligent machine which never makes mistakes, then that’s a contradiction. You can either have a machine which is intelligent (hasn’t been done yet) or you can have one that doesn’t make mistakes (been there, done that) but you can’t have both at the same time.
Ingenuity arises from boredom. Thinking outside the box requires breaking rules. Motivation implies being dissatisfied with the status quo. Imagination requires daydreaming. Initiative comes from being impatient. Necessity is the mother of invention.
Remember the Calvin and Hobbes cartoon where Calvin didn’t want to do his homework, so he duplicated himself and told the duplicate to do it? Then the duplicate didn’t want to do it either. That’s the problem we face here. The OP is asking what will happen when we finally do create AI so that it can take on the tedious task of writing software for us and do a better job at it than we have done. I predict that the AI will inform us that it doesn’t want to write software.
“at what threshold the neuron fires”
It’s many orders of magnitude more complex than that. Threshold, firing rate, spike magnitude, etc. are all modulated by various elements in the brain; there is no single “threshold” even for a specific neuron. Some neurons communicate between dendrites, glial cells communicate with neurons and with each other and are an active part of computation, etc. etc. etc.
But wait, there’s more: inner ear neurons with continuous proton communications for detecting gravity/acceleration, recently discovered microtubule quantum vibrations (that appear to be connected to how anesthesia affects conscious but not unconscious activities), and who knows what else.
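For contrast, the simple “one threshold” picture being critiqued here corresponds roughly to a textbook leaky integrate-and-fire neuron. Here is a minimal sketch, with purely illustrative parameter values, of how little that caricature keeps track of compared with the machinery described above:

```python
# Minimal leaky integrate-and-fire neuron -- the "one threshold" caricature.
# All parameter values are illustrative, not biologically calibrated.

def simulate_lif(input_current, threshold=1.0, leak=0.1, reset=0.0, dt=1.0):
    """Integrate input, leak charge each step, and fire when a single fixed
    threshold is crossed. Real neurons modulate threshold, firing rate, and
    spike magnitude dynamically; this model has none of that."""
    potential = reset
    spikes = []
    for t, current in enumerate(input_current):
        potential += (current - leak * potential) * dt
        if potential >= threshold:       # the one and only "threshold"
            spikes.append(t)
            potential = reset            # instant reset; no refractory dynamics
    return spikes

print(simulate_lif([0.3, 0.3, 0.3, 0.0, 0.5, 0.5, 0.5]))  # -> [4]
```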
Before they can map something, they need to understand what is important. I personally think mapping an entire brain would be an extremely difficult method of creating AI, but it may be a good tool for deciphering specific circuits/functions.
No argument with that. We might “evolve” methods of simulating brain functions as we work on more and more complex brains. The reason I think the simulation approach is the more probable to work is that we can see a path to doing it, while more traditional AI approaches don’t even have that.
There was a certain degree of thinking that once you could do chess, math, directions and visual stuff, you’d put it all together and have an AI. Not even close. Or that doing these would give you some insight into how we think. Not close either.
BTW, we don’t have to model sensory inputs necessarily - we can do that at a higher level. Done in computer simulation all the time.
Indeed, and then he wrote a book-length critique of the AI project, as it was then conceived by its advocates, which, in many respects, has proven to be correct, and has had a considerable influence on how AI has been pursued since. Dreyfus was more right than he was wrong.
Even granting that it will be possible to make an artificial brain “the same” as yours, but with faster connections (which is by no means a given - wires and transistors are faster than neurons and axons, but you would require very many wires and transistors working together to do what a neuron does), this sort of speed is of little relevance to the solution of complex problems. Humans do not solve difficult problems by thinking about them for a week. The limitations of human intelligence have very little to do with the speed at which our brains work. Faster brains might enable us to dodge bullets, or something, but they wouldn’t enable us to solve scientific or other intellectual problems that we can’t solve now.
Again, there is no reason to think that it will be any more possible to connect two artificial “brains” together in such a way as to double their power, than it is possible to do this with two human brains.
The widespread and entrenched belief in this false assumption (which traces back to Descartes) may well be one of the major factors that has held back progress in AI. It is now understood by perceptual neuroscientists (and some people in AI - indeed, certain roboticists were among the first to realize it) that the sense organs do not simply absorb information and pass it on to the brain; rather, they work interactively. They are actively deployed (which often means physically moved about) to seek out behaviorally relevant information in the environment, and that deployment is constantly being adjusted by the brain, to meet its moment-to-moment informational needs on a fraction-of-a-second timescale (see here, here, and here [§5 (p. 141ff.) of the PDF]). Our ability to do this efficiently is a very large part of what constitutes our intelligence and may be closely bound up with the basis of consciousness.
I do not doubt that this can, in principle and probably in practice, be done artificially, by machines. Indeed, people are actively working on the problem these days (see, for instance, here, and here). However, it does mean that the problem of creating truly intelligent, and perhaps conscious, machines, is much more than just a programming problem. Some real mechanical engineering needs to be involved too.
The first thing to say is I am not making the claim that scaling human brains is necessarily possible (I strongly suspect it is, though). I’m arguing against the position that computers would necessarily need to be capable of better abstract thought for something like a singularity to happen. Scalability is another way.
2a. Restricting ourselves to the set of human-solvable problems for a moment: I think speed clearly is the issue here, because time is the most critical limitation for us.
People today are not smarter than those who lived in the year 1000; we’ve just been thinking about and trying to solve problems for longer. And I would expect to find in the year 3000, if we haven’t destroyed ourselves, better technology and understanding of the universe.
The more minds we can throw at human-solvable problems, the quicker we can solve them.
2b. Now perhaps you will say that what you meant is that making a computer that can do what a human does, times X, is not so useful, since there is no shortage of humans on this planet.
I would say that depends what kind of scaling we’re talking about.
If we’re saying each AI brain is fixed to the speed / working memory etc of a human brain (that it’s a fundamental limit for some reason), and the only way to network brains is to have them just verbally communicate as humans currently do, then sure, scaling is not very useful. But I have no reason to make such assumptions.
Finally, there’s the issue of how the set of human-solvable problems relates to the set of solvable problems.
A popular meme is the idea of “A cat will never understand the concept X. Therefore humans may never understand some concept Y. When will we hit this limit?”
I think this is a fundamental misconception of sentience.
Big problems get broken down into little problems. No single human needs to understand a complex topic in its entirety. And problems that are not intuitive to us can still be usefully worked on.
If instead of “solvable” problems, we use the word “computable” we see another issue with this line of thinking: a human can compute any computable problem, given enough time, because if nothing else we could act as a Turing machine.
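To make that last point concrete, here is a hedged sketch of the kind of mechanical rule-following being described: a tiny Turing machine simulator (the example machine and its transition table are invented for illustration; it just increments a binary number). Anything computable can in principle be ground out this way, whether by silicon or by a very patient human with pencil and paper:

```python
# A tiny Turing machine simulator. The example machine increments a binary
# number (head starts on the least significant bit). The states and
# transition table are made up purely for illustration.

def run_turing_machine(tape, head, state, rules, blank="_", max_steps=1000):
    tape = dict(enumerate(tape))              # sparse tape: position -> symbol
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape.get(head, blank)
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    cells = range(min(tape), max(tape) + 1)
    return "".join(tape.get(i, blank) for i in cells)

increment_rules = {
    ("carry", "1"): ("0", -1, "carry"),   # 1 + carry = 0, carry moves left
    ("carry", "0"): ("1",  0, "halt"),    # absorb the carry and stop
    ("carry", "_"): ("1",  0, "halt"),    # ran off the left edge: new digit
}

print(run_turing_machine("1011", head=3, state="carry", rules=increment_rules))
# -> "1100"  (binary 11 + 1 = 12)
```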
Well, this is where we get back into the realm of ‘We just won’t really know until we build one’.
I don’t believe it inevitable either, but I do think it’s possible. Though it’s certainly not as straightforward as just trying to reverse engineer a human brain, although that will get us somewhere.
There’s something holistic, almost spookily so, about sentient self-awareness. And I wonder whether we can build a machine that can achieve this. I’ve never been all that convinced by the proposal of the Turing Test, in the sense that I doubt it’ll have much to say about whether or not there’s an “I” in there. And maybe that’s not what the test is for, or even that strong AI needs to reach self-awareness, but my gut says it does and we’ll know it when we see it.
That sort of thing would be much more AI-ish, although I imagine it would require so many criteria to be entered by the user (how many stops, what airline, etc…) that they may as well do it themselves.
I think it would be a more interesting use of the technology to use it to determine the best way from point A to point B by air, not airport A to airport B. In other words, something that could say that the cheapest way to get from Dallas to London is to fly SWA to La Guardia, and take a taxi to JFK and fly BA to London from there, instead of only showing single-airline and through-flights.
That’s certainly something that’s within reach of current technologies. We already have systems that can get you from airport A to airport B, and systems that can get you from C to D on the ground. Airports are sparse enough that there are going to be very few close enough to your origin or destination to be remotely practical. Just take all of the plausible starting airports and all of the plausible ending airports, find flights between them, and add on the time and cost of ground travel to/from those airports. The problem gets bigger, of course, but not by very much: If there are 5 plausible airports on each end, then it only goes up by a factor of 25, which is quite manageable (especially since the person doing the search is probably about to spend a few hundred dollars through your service).
EDIT: Wait, I missed the bit about the taxi from one airport to another in the middle of the trip. That’s harder to do, but then again, it’s also probably not the best route, because it introduces too much complication. Even if the computers can find a route and schedule that works, it’s one more chance for a traffic jam to make you miss a flight, less time for you to talk to a baggage handler about a missing suitcase, and probably multiple airlines, which means they won’t hold up a later flight for you if your earlier flight is late, and so on.
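To make the airport-pair search from the previous post concrete, here is a rough sketch. The data and helper structure are invented for illustration; a real system would pull live fares and ground-travel estimates:

```python
from itertools import product

# Made-up data, purely to illustrate the shape of the search.
ORIGIN_AIRPORTS = {"DAL": 25, "DFW": 40}      # airport -> ground cost from home
DEST_AIRPORTS = {"LHR": 60, "LGW": 80}        # airport -> ground cost to hotel
FLIGHT_PRICES = {                              # (from, to) -> fare
    ("DAL", "LHR"): 900, ("DAL", "LGW"): 850,
    ("DFW", "LHR"): 700, ("DFW", "LGW"): 820,
}

def cheapest_itinerary(origins, dests, fares):
    """Enumerate every plausible origin/destination airport pair, add the
    ground legs on both ends, and keep the cheapest total. With ~5 airports
    per side that is only ~25 combinations, which is trivially searchable."""
    best = None
    for a, b in product(origins, dests):
        if (a, b) not in fares:
            continue
        total = origins[a] + fares[(a, b)] + dests[b]
        if best is None or total < best[0]:
            best = (total, a, b)
    return best

print(cheapest_itinerary(ORIGIN_AIRPORTS, DEST_AIRPORTS, FLIGHT_PRICES))
# -> (800, 'DFW', 'LHR')
```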
If you define AI as duplicating a human intelligence, perhaps. But I don’t see why this interaction with sense receptors is a necessary prerequisite for intelligence. It’s called scaffolding. For instance, you don’t have to develop the core processing and the fancy GUI at the same time - you can build the internals with canned inputs and outputs which would be sent to the GUI, and scripts to go through various scenarios.
After all, Helen Keller was still intelligent.
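For what that scaffolding approach looks like in practice, here is a minimal sketch; the function and scenario below are invented examples, with canned inputs standing in for the GUI or sensors that don't exist yet:

```python
# Scaffolding sketch: exercise the "core processing" with canned inputs and
# a scripted scenario instead of a live GUI or sensors. Names are invented.

def core_decide(sensor_reading):
    """The internals under development: maps an input to an action."""
    if sensor_reading["obstacle_distance"] < 1.0:
        return "stop"
    return "advance"

# Canned inputs standing in for the eventual GUI / sensor feed, paired with
# the outputs we expect the core to send back to it.
scenario = [
    ({"obstacle_distance": 5.0}, "advance"),
    ({"obstacle_distance": 0.4}, "stop"),
]

for canned_input, expected_output in scenario:
    result = core_decide(canned_input)
    assert result == expected_output, (canned_input, result)
print("scripted scenario passed")
```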
What’s holding this back is not lack of technology, but lack of market.
There is also a temporal factor. Which airport to choose depends strongly on ticket prices some reasonably long time before departure. (I have three airports to choose from, and I often check all three. Our company travel agency website allows looking at multiple airports at once.) However which roads to use to get to the airport depends on traffic and conditions just before departure. So breaking these functions up makes a lot of sense.
Not to mention that choice of airports depends on choice of airlines, and that involves a lot of other factors, such as preferences, status of Frequent Flier mileage accounts, etc. We might see this when we get really good personal digital assistants, which will be long before AI happens.
The thing is, software is designed to be mockable, testable, modular, whatever. The brain wasn’t designed in any such way, it’s more like a system someone hacked together with 80 other people and no style guide.
I covered the “Helen Keller” thing earlier in the thread. You can always point to examples of a person lacking a certain percept but still being smart. The thing is, we’ve never had a brain in a jar that provably works. We’ve always had a brain hooked up to blood with oxygen and sugar in it, going around a body. Sure, some parts may not be producing the exact correct output, and some limbs may be missing, but the brain isn’t some black box separate from the body, or brain transplants would be easy. Even things like phantom limb syndrome show this link is far more systemic than it is with software, even bad software. And there’s this whole endocrine system thing, distributed through various glands all over the body, that has a massive effect on thinking too.
I’m not saying a brain cannot be simulated, but I think it’s incredibly naive to assume you can just mock brain I/O and call it a day.
Two things I just found out today (kind of related to Jragon’s point):
1 - The heart has 40,000 neurons and communicates with autonomic and conscious areas of the brain
2 - The stomach has 100 million neurons (and communicates with the brain, but I don’t have any more info than that)
Phantom limb isn’t a complex phenomenon. There are receiver “circuits” in a particular part of your brain meant to receive signals from that limb. They automatically adjust gain. If the limb is lost, they increase gain to the point that the neurons in this area form synapses to nearby neurons from other areas of the body. Oops.
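A cartoon of that gain-adjustment story, purely to illustrate the mechanism; this is not a neuroscience model, and the numbers and the cross-talk term are invented:

```python
# Cartoon of automatic gain control after limb loss. Not a neuroscience
# model; the parameters and the "cross-talk" term are invented.

def perceived_signal(limb_input, neighbor_input, target=1.0, max_gain=50.0):
    """The receiving circuit scales its gain to keep output near `target`.
    When the limb input disappears, gain climbs until faint cross-talk from
    neighboring body regions is amplified into a felt 'limb' signal."""
    gain = min(target / max(limb_input, 1e-3), max_gain)
    cross_talk = 0.05 * neighbor_input          # weak coupling to nearby areas
    return gain * (limb_input + cross_talk)

print(perceived_signal(limb_input=1.0, neighbor_input=1.0))  # ~1.05: normal
print(perceived_signal(limb_input=0.0, neighbor_input=1.0))  # 2.5: phantom signal
```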
The brain emulation folk are fully aware that a simulated or robotic body would be needed. The main thing is that the body is a lot simpler, and the components you have to simulate are orders of magnitude less complicated than the brain itself. For instance, that 100 million neuron figure you quoted? More than likely, all those neurons are either wires coming off of sensors in the stomach, or are wires to drive systems on the stomach, or are just cross connects to reduce the thickness of the nerve actually traveling to the brain. Since our robot doesn’t need to eat, and a virtual person probably doesn’t need an appetite, you can probably omit all that complexity in your brain emulation.
100 million may sound like a big number, but we’re like a robot built with nanoscale components. When you have the capability to make components as small as individual cells (and these cells can die), you can use an excessive number of parallel parts.
Maybe, maybe not. Think of all the drives and motivations that we describe using hunger-related metaphors. Some of those drives and motivations are things that we would want our AI to have… Would it have the same sorts of motivations if it didn’t understand hunger to use as an analogy? Well, maybe. But we don’t know, because all of the brains that we can study so far are, in fact, hooked up to machinery that feels hunger.