There are several areas in which AI will never surpass us.
Our survival instinct was developed over millions of years. AI doesn’t have the benefit of that.
Much of what makes us human is the result of that history.
AI can’t really create art or compose music the way we can.
Sure, AI will one day replace almost all manual labor and even most professional services, but there will be some labor where it will be cheaper to use humans than to build a robot. Seriously, until we develop some limitless, almost-free source of energy, robots will not be cheap.
Despite the blanket assurance upthread that there’s no real profit-motive reason to do so, I’d guess that someone would take “merely emulates human psychology” as the easy running-start jumping-on point when designing an AI. We already know that our approach works; why not start there — and then add improvements to the pretty good simulation — instead of starting from some other place entirely?
That “default assumption” is correct as far as it goes, but we can’t describe any sufficiently complex or chaotic system explicitly using the discrete mathematics involved in digital computation, hence the need for statistical mechanics to describe heat engines and complex chemical kinetics.
The naive assumption about the brain that people often make is that neurons are basically just a bunch of transistors, and so you can replicate a brain with electronic circuitry or a software-based model by making a sufficiently complex ‘neural network’. However, this ignores the complexity of connections in the brain, the intraneuronal processes, the neurochemistry that influences the function of neurons, and the synaptic plasticity in brain function that gets right down to the level of action of individual proteins and peptides. If the brain and the emergent processes that produce cognition were really as simple to replicate as mapping a connectome of the brain in software and hardware and then discussing philosophy and Shakespeare with it, Kandel, Schwartz, Jessell, et al. wouldn’t have had to write a vast tome in five successively more detailed editions, the last of which runs over 1,700 pages and which even they admit only scratches the surface of what there is to learn about neuroscience.
All that being said, I would not hazard any definitive statements about what artificial intelligence can or cannot do eventually, nor how ‘creative’ it may or may not be (although I think any novel creativity will not have the same aesthetic appeal to our brains). But a machine intelligence system built in a software layer on top of digital hardware is not going to function like an animal brain regardless of how capable it is, and trying to mimic those processes has not previously produced, and likely will not produce, a system that is more than marginally sentient, much less sapient in the sense that more complex animals are.
(Mechanical) robots may not be cheap to build, but they’re dirt cheap to maintain and vastly more reliable than human employees, and they can also work without fatigue or rest as long as they have power. Recent advances in robot dexterity and machine vision are rising to the promise of having humanoid robots in essentially human labor roles where applicable, although in many roles a non-humanoid form may be more advantageous.
However, robots are not going to replace people in roles where social niceties and the ‘human touch’ are crucial at any time in the foreseeable future. It is also likely that there will still be aesthetic value in human-crafted artifacts like furniture and crafts even if machine intelligence can synthesize art on an industrial basis. Robots and self-driving cars are all well and good, but I think we’ll always need social contact with people.
I know of one large electronics company that did not replace people with robots in its SE Asian factories because the people were cheaper. Once the people get more expensive, it could happen. Or the factory could be moved back to the US using robots instead of people, improving the logistics situation as well.
Plenty of humans compose things that sound like crap, too. Successful composers have a filter which matches what audiences want. Or will want - audiences didn’t react all that well to Beethoven’s late string quartets, after all.
Just as studies have identified facial features that most people find attractive, we can feed popular music to the AI and produce a filter or fitness function to be used when it composes. Feed in different subsets and get different types of composition.
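To make that concrete, here is a minimal sketch of the idea (my own illustration, not anything specified in this thread): represent each track as a feature vector, train a simple classifier on tracks audiences liked versus ignored, and use its predicted probability as the fitness function a composing system tries to maximize. The feature representation and all names are invented.

```python
# Minimal sketch: train a "popularity filter" on feature vectors from existing
# music, then use its score as a fitness function for generated compositions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_features(track):
    # Stand-in: real features might be tempo, key, chord statistics, spectra, etc.
    return np.asarray(track, dtype=float)

# Hypothetical training data: vectors for tracks audiences liked vs. ignored.
rng = np.random.default_rng(0)
liked   = rng.random((200, 16)) + 0.5
ignored = rng.random((200, 16))

X = np.vstack([liked, ignored])
y = np.array([1] * len(liked) + [0] * len(ignored))
popularity_filter = LogisticRegression(max_iter=1000).fit(X, y)

def fitness(candidate_track):
    """Predicted audience appeal of a generated composition."""
    feats = extract_features(candidate_track).reshape(1, -1)
    return popularity_filter.predict_proba(feats)[0, 1]

print(fitness(rng.random(16) + 0.5))  # score one candidate with the learned filter
```

Training the filter on a different subset of tracks (only film scores, say, or only chart pop) would bias the fitness function and, with it, the kind of composition that gets selected.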
OK, so suppose that someday we can emulate a human brain. Let’s say that it works the same way as a brain at all but the lowest level: We start by emulating a neuron, and then emulate a whole bunch of them connected together. How, precisely, do we improve on it? If we knew that, we could instead add those same improvements to the human brains we already have.
Labor, and indeed life, is cheap in much of Southeast Asia. However, all over the developed world birthrates normalized to population are falling, and if some of the more dire but realistic projections about the impacts of global climate change come to pass, we can expect the costs of basic commodities like food and potable water to rise, with a concomitant increase in the effective cost of living. Electric power, on the other hand, has generally gotten cheaper or held level since the early ‘Eighties in developed nations when corrected for inflation, and if solar power grows in scale as it is trending, it could make power production an essentially post-scarcity commodity even as prices rise for fossil and nuclear fuels. The ultimate scalability of solar is still a potential issue given the size of the footprint and the variability of insolation in different areas, but power for automation is not likely to be any kind of limiting factor.
The automation of not only physical but intellectual labor may cause economic disruptions in the near term, but may ultimately create a more sustainable economy in the long term as aging populations require the economic value produced but have fewer people of working capacity to support them. It is the transition that is problematic, especially given the uncertain timelines in which such technology will develop to functional levels to supplement or replace human labor.
In short, to respond to the o.p.’s original question, I don’t think artificial intelligence (in the “weak” form of expert systems and automation) is overestimated, but enthusiasts are often grossly optimistic about the timeline on which it will become functional, e.g. Level 5 self-driving vehicles within five years. On artificial general intelligence (“strong” AI) capable of true sentience and some degree of human-like cognition, I don’t think we’re close enough to even guesstimate when and how that will happen, nor am I as concerned about systems coming alive and competing with or warring upon humanity as I am about the not-so-gradual erosion of fundamental skill sets and the vulnerability that comes with becoming utterly reliant upon potentially fragile technologies.
Emulating psychology is not the same as emulating physiology.
It’s possible we can’t emulate psychology very closely without substantially emulating physiology (meaning that if we don’t emulate physiology closely we may only be able to produce alternate intelligences that only partially resemble human thought processes).
But, it’s possible we could use humans as a model for how the functionality at the various levels is combined together and end up with something pretty similar to human thought processes. And if so, we would have the knowledge and power to tweak and possibly improve the functionality.
“Essentially, all models are wrong, but some are useful.” When we use psychology to ‘model’ aspects of behavior or pathology, we’re producing a tool useful for predicting certain types of behavioral responses or for assessing treatment methods, but it is wrong to think that we’re actually capturing the underlying processes, or indeed that such a model is accurate to any degree beyond representing the behavior that is already empirically observed.
It is becoming increasingly clear (to cognitive neuroscientists, anyway) that the brain is such an incredibly complex organ, and cognitive processes are so intertwined and emergent, that we may never “understand” the brain in a discrete, model-based fashion to any degree of practical utility sufficient to reproduce it by simulation. To reproduce the processes of cognition we may have to essentially reproduce the structure and functionality of the brain, and at that point you have a model that is no less complex, and no easier to understand, than what it represents.
The notion of getting a functional representation of cognition without actually replicating the underlying processes is exactly what Searle’s argument is about; a room that translates Chinese flawlessly would be as impressive a feat as a working Mechanical Turk, but it isn’t going to offer insightful commentary on Sun Tzu or Confucius.
Do you believe that cognitive processes depend on something beyond the function of neurons and other components of the brain, their interactions, and their environment?
It is true that building a structural simulation model does not mean you understand what you are simulating. I’ve generated plenty for circuits I didn’t understand - and didn’t need to understand. But assuming that the model works, its benefit is that you can experiment with it by, for instance, modifying or removing components, and seeing what happens. Something you can’t do with wetware. You can also instrument it to see what communication is taking place when the simulated brain is in operation, and do so at a far more detailed level than we can for real brains.
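That point about experimenting with the model is easy to show in miniature. Below is a toy example (a made-up three-layer network, not a model of any real circuit or brain): because everything is software, we can “lesion” individual components one at a time and log every internal signal, which is exactly the kind of instrumented intervention that is impractical in wetware.

```python
# Toy ablation experiment on a simulated network: remove one component at a
# time and measure how the output changes, while instrumenting hidden activity.
import numpy as np

rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(3, 8))   # made-up "circuit"

def run(x, ablate_unit=None):
    h = np.tanh(W1 @ x)                  # hidden activity we can fully inspect
    if ablate_unit is not None:
        h[ablate_unit] = 0.0             # "lesion" one simulated component
    return W2 @ h, h

x = rng.normal(size=4)
baseline, _ = run(x)
for unit in range(8):
    lesioned, _ = run(x, ablate_unit=unit)
    print(f"unit {unit}: output change {np.linalg.norm(baseline - lesioned):.3f}")
```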
The problem wasn’t cost of power (though it might have contributed) but that depreciation on the machines was higher than the cost of labor.
Yeah, the Times the other day had a column from someone at Stanford - who apparently never set foot in SAIL - that seemed to say AI was going to solve all our problems and make things great, apparently next week.
I wonder if the problem is that strong AI is very easy to show in fiction, which deludes people into thinking it is easy to do in real life.
For me it’s been 40 years, and I’m still waiting, though there are lots of cool applications now.
It’s hard to emulate a survival instinct. AI is nowhere close to being able to anticipate, extrapolate, and react in a fluid environment. How much do you think a robot like that would cost? I bet I could train a soldier who would beat them nine times out of ten for a tenth of the price.
I wonder how many million-dollar robots die before they realize that image recognition can get baffled by a ghillie suit.
How do you start them off at ‘most lethal killing machine the world has ever invented’? ISTM you’ll still need a human operator for a long time, if for no other reason than liability.
Sure, and this is exactly the approach for modeling and investigating subjects like memory and synaptic plasticity in single neurons or small networks of neurons; neurophysiologists create a neuronal model and add parameters or submodels as necessary to represent the biochemistry and physics of neurons as we understand them essentially from first principles. But this only represents the most basic levels of neuronal function and can simulate animals with very primitive nervous systems, such as nudibranchs.
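For a sense of what that kind of first-principles single-neuron model looks like at its very simplest, here is a leaky integrate-and-fire sketch (a far cruder cousin of the conductance-based models neurophysiologists actually use; the parameter values are purely illustrative).

```python
# Leaky integrate-and-fire neuron with a constant injected current.
dt, T = 0.1, 200.0                                             # time step and duration, ms
tau_m, v_rest, v_thresh, v_reset = 10.0, -70.0, -55.0, -75.0   # ms, mV
R_m, I_ext = 10.0, 2.0                                         # membrane resistance (MOhm), current (nA)

v = v_rest
spike_times = []
for step in range(int(T / dt)):
    v += (-(v - v_rest) + R_m * I_ext) / tau_m * dt  # leak toward v_rest + R_m * I_ext
    if v >= v_thresh:                                # threshold crossing: spike, then reset
        spike_times.append(step * dt)
        v = v_reset

print(f"{len(spike_times)} spikes in {T} ms")
```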
“Emulating psychology”, as RaftPeople suggests, would require simulating billions of neurons (with their intraneuronal processes) with hundreds or thousands of connections each, along with the associated physiology of neuropeptides and neurotransmitters; this is so complex that not only is it computationally prohibitive, but it would be nearly impossible to control such a model in a designed experiment, as there are just too many parameters to control or measure to replicate the gross behavioral responses which we classify by psychology. And indeed, psychology is not really a science in any rigorous sense, despite the attempts to quantify it via experimental controls and statistical analysis. As a representation of cognitive function it works about as well as the Ptolemaic model of celestial mechanics; it gets the gross movements mostly right, but only by overgeneralization.
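As a very rough back-of-the-envelope on the “computationally prohibitive” point (the figures below are commonly cited ballpark numbers and my own assumptions, not anything from this thread):

```python
# Rough estimate of the compute needed to simulate every synapse in a brain.
neurons          = 8.6e10   # ~86 billion neurons
synapses_per     = 1e4      # order-of-magnitude synapses per neuron
update_rate_hz   = 1e3      # assume ~1 kHz simulation time resolution
flops_per_update = 10       # assume ~10 floating-point ops per synapse update

total = neurons * synapses_per * update_rate_hz * flops_per_update
print(f"~{total:.1e} FLOP/s")   # ~8.6e18 FLOP/s - several exaflops, beyond today's
                                # largest supercomputers, and this still omits
                                # neuromodulators, protein-level plasticity, glia, etc.
```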
Sufficiently sophisticated autonomous machines would presumably be capable of some level of self-maintenance, or would have other machines capable of maintaining them or building replacements. Having human technicians constantly maintaining ‘labor-saving’ robots would obviously be counterproductive. And to be clear, we already design for this in some fashion: the first CNC milling machines required dedicated technicians to program and monitor them, replace tool heads and other components as they wore, and inspect manufactured parts to assure they met required tolerances, and while they were more productive than human machinists they still required substantial skilled labor per unit. Modern CNC machines, however, are largely self-maintaining (monitoring tool-head wear and other consumables, with built-in inspection systems to assure that parts meet requirements), to the extent that one technician can oversee a farm of machines operating around the clock with productivity equivalent to hundreds of human machinists and far better consistency and reliability.
One would expect that machines replacing human labor would similarly be largely autonomous and require only high-level oversight and occasional intervention to address novel problems that the machines are not able to cope with. This is futurism, of course, but not far futurism; I expect such systems to be available for the most drudgerous or dangerous jobs well within my lifetime, and the greatest holdback is likely to be not developments in technology but resistance by unions and guilds to giving up employment. But there are also labor jobs that are complex enough that they will likely still require human oversight and some amount of labor; I don’t expect we’ll see android pipefitters or millwrights replacing human workers en masse, but there will be systems that aid them with oversight.
Yes, I think there is definitely a delusion that technology as presented on screen can be made reality with minimal effort, especially as cinema effects and CGI have become so good as to be indistinguishable from reality. The same is true with space exploration and the frequent belief that we can colonize other worlds or travel to other stars with modest effort. People are really impressed with digital assistants like Apple’s Siri and Amazon’s Alexa, and while these systems are quite impressive in the computational sophistication of their natural language processing and data organization, they are not any more intelligent in any cognitive sense than Eliza; you can have a better conversation with a three-year-old than you can with Siri. I had a friend who recently expressed how impressed she was that Siri could “talk just like a person”, and when I pointed out that the actual voice was not generated by software but drawn from a database of words and phrases recorded by a human actress, she was manifestly disappointed.
It is still very impressive that Siri and similar systems can interpret human grammar and respond with more-or-less appropriate inflections and information, but if you’ve spent any time actually using these devices it is apparent that they have no sense of semantic context or even an ability to parse uncommon words. They are far away from HAL-9000 or the computer on Star Trek’s spacecraft, and there is no realistic expectation that they will become capable of conversational discourse in the next update, or indeed, the next couple of decades.
I don’t think so. I mean, survival instinct for some creatures is no more than ‘move away from light’.
In a more complex organism such as ourselves, a good deal of survival instinct is not instinct at all - it’s a learned response to avoid pain or the things that may cause pain. Even AI in its current state of research can deal with motivation and goals - setting a goal of self-preservation for an AI isn’t conceptually much different from setting any other goal: model the world; try to predict the outcomes of actions or external factors; select actions that seem likely to achieve outcomes in the ‘desired’ category and avoid those in the ‘undesired’ category.
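A toy sketch of that loop (the world model, actions, and weights are all invented for illustration): the agent scores the predicted outcome of each candidate action and picks the best, with self-preservation entering as just one more term in the score rather than anything magical.

```python
# Goal-directed action selection where "don't get destroyed" is one scored goal.
def predict_outcome(state, action):
    # Stand-in world model: (predicted progress toward task goal, predicted damage to self).
    outcomes = {
        "advance": (1.0, 0.6),
        "wait":    (0.1, 0.1),
        "retreat": (-0.5, 0.0),
    }
    return outcomes[action]

def score(outcome, survival_weight=2.0):
    progress, damage = outcome
    return progress - survival_weight * damage   # self-preservation is just a weighted term

state = "exposed"
best = max(["advance", "wait", "retreat"],
           key=lambda a: score(predict_outcome(state, a)))
print(best)   # with this weighting, the agent prefers to wait
```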
It doesn’t have to be perfect. Our own ‘survival instinct’ is far from perfect, but it works more or less OK, most of the time.
Ghillie suits baffle meat-based intelligences right now - it won’t be a fundamental failing of AI if they also find that hard.
It took me a while to understand why people were so worried about artificial general intelligence. Then I read this piece. And it’s pretty accurate. Think about it. Modern humans are slow, weak, and unthreatening. We’re very commonly flabby, and even the very strongest among us could not go into a fight with a predator like a tiger and be expected to win. And yet we essentially run the world, and could destroy it. This is the power of intelligence. We got here not through strength, but through brains. Our intelligence pushed our thoughts from
Now imagine an intelligence that thinks far faster than us, can adapt far faster than us, and lacks many of our typical cognitive failures. What could such an entity do? Even disembodied, even if the only interface it has with reality is a computer console that only specific, carefully trained individuals have access to, with no physical connection to the outside world, powered by a gas generator (so as to prevent a signal from being transmitted through the power lines), do you really think it impossible that such an entity could find a way out? Simply by manipulating one of the people who made it?
And that’s sorta the ideal situation given this kind of general AI. Realistically, it’ll be out on the internet within seconds, have capital equivalent to a small country within a week, and at that point we’d better hope that its values have been well-aligned.
This is just not true. Here are some of the people who make AI and who are clearly worried about AI risk, as in “general artificial intelligence may pose an existential threat to humanity”:
[ul]
[li]Stuart Russell, Professor of Computer Science at Berkeley and director of the Center for Intelligent Systems[/li]
[li]David McAllester, fellow of the American Association for Artificial Intelligence who worked on Deep Blue[/li]
[li]Hans Moravec, former professor at the Robotics Institute of Carnegie Mellon University[/li]
[/ul]
And so on and so forth. A lot of people who are academics working on AI or people working on building practical AI applications are legitimately worried about artificial general intelligence potentially meaning the end of the world. It’s not a list of hundreds of names, but AI is not a huge field, and these are not minor figures in it.
That “want” is essentially your value construct. What is the AI supposed to do?
The most immediate danger of AI is its potential (if not likelihood) to be created for nefarious purposes, such as a terrorist organization or a hostile state using AI to cause mayhem. A related concern is a supposedly democratic and ‘friendly’ nation state (like the US, UK, EU, Japan, etc…) creating such technology for ‘defense’ and then having it used against its creators in the future. Perhaps there’s computer malware that mutates on its own through AI and becomes impossible for anti-virus programmers to keep up with.
“Computer, if you don’t open that exit hatch this moment, I shall go straight to your major data banks with a very large axe and give you a reprogramming you’ll never forget. Is that clear?”
“Emulating psychology” was supposed to mean “emulating functionality at a level higher than biology”.
“Emulating physiology” was supposed to mean emulating the neurons and low level details.
“Functionality” angle:
Visual processing and object recognition happen early, prior to higher-level processing, so this is a prime candidate for solving the functional aspects of the problem without trying to copy the exact mechanics of the brain. In this case we can create our own version of neural networks (or HTMs or whatever we decide to use) as long as we get substantially the same result, and then that can be fed into the other functional processing areas that are also being emulated by appropriate solutions/mechanisms, as in the sketch below.
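Here is a minimal sketch of that functional split (a toy in PyTorch, with an invented architecture and sizes; it is not a claim about how the brain’s visual pathway actually works): an ordinary convolutional network stands in for early visual processing and object recognition, and its output vector is simply what gets handed to whatever higher-level module we build next.

```python
# Early vision as one functional module whose output feeds a downstream module.
import torch
import torch.nn as nn

class EarlyVision(nn.Module):
    """Stand-in for early visual processing: pixels -> object-level features."""
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, n_features),
        )
    def forward(self, images):
        return self.net(images)

class HigherLevel(nn.Module):
    """Stand-in for a downstream functional area that consumes the features."""
    def __init__(self, n_features=64, n_categories=10):
        super().__init__()
        self.head = nn.Linear(n_features, n_categories)
    def forward(self, features):
        return self.head(features)

vision, downstream = EarlyVision(), HigherLevel()
images = torch.randn(4, 3, 32, 32)       # a fake batch of images
features = vision(images)                # "object recognition" stage
decisions = downstream(features)         # handed to the next functional area
print(features.shape, decisions.shape)   # torch.Size([4, 64]) torch.Size([4, 10])
```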
As research continues, it seems pretty clear that there are a bunch of functional specializations in the brain with different types of neurons and different types of interconnections that are designed to solve specific types of problems. This implies that a functional approach could reasonably emulate human thought processes.
Note: this is not to say we could emulate a specific individual, just that we could create a system that would be reasonably similar to humans in general at a functional level.