Hey, how did you get into my lab and read my research notes?
I mean … uhhh … I’m definitely not working on this.
I agree with every word of that. As you alluded to earlier, though, a dumb AI can still cause some serious problems, even if it’s unlikely to bring about the death of all humans.
In many ways, I think we already have a fairly good model for what this might look like. You can think of a corporation as a kind of profit-seeking AI with human components. No individual is even aware of, let alone in control of, all the thousands of micro-decisions being made moment to moment by employees at every level. If unrestrained, it simply pursues its goal without regard to externalities like human suffering or environmental degradation. Like a poorly thought-out algorithm, it will cut corners, game the system, and push values incidental to its primary reward function to extremes. Perhaps, as automation gradually makes an increasing percentage of the decisions in every department, the transition to an AI-dominated future will be smoothly continuous: just large companies pursuing profit and avoiding regulation with ever-improving efficiency.
AI is a huge threat specifically because it could get smart enough to get out of its container. It doesn’t have to want to kill us; it just has to want to keep doing what it was programmed to do, see humans as trying to stop that, and conclude that getting rid of us helps with its goal. It’s a real problem, and Hawking is among many in the AI world who talk about this.
Climate change is another possible runaway problem, but we do have technology on our side that might handle it. For AI, we won’t, since the AI will be able to outsmart any tech short of a smarter AI.
Nuclear weapons are not technically a runaway problem, but given how big the arsenals are, they’re close enough. There isn’t any way for tech to stop them, but at least most of the world realizes how devastating they would be. Everyone actually takes this seriously, to the point that some might even refuse to respond in kind, or at least agree upon a limit.
Disease is a problem due to evolution, but the most likely scenario there doesn’t wipe out humanity; it just sets it back a few centuries, with bugs that can’t be killed. And medical tech is really booming, so we likely wouldn’t have to wait as long for new antibiotics. And the people who deal with the worst diseases take them seriously. This is, in my mind, the least likely.
Of the rest, the only one that has a chance is a meteoroid collision. However, the ones big enough to cause a problem are the easiest to detect, and I think we could mount a defense if we caught one with even just months to spare. It’s also quite rare, an issue only on timescales of thousands of years, if not more. The only reason it becomes at all likely is by default, if everything else fails. (You know, the same way cancer kills so many more of us because we live longer.)
I think much of the debate is about timescales. If Hawking said “sometime in the distant future this might be a problem”, I think most of the skeptics would probably agree. When he leaves it open, though, the sensationalism kicks in and people hear “any day now”. As others have said, those currently working on AI understand that the technology is still incredibly primitive.
Antibiotics won’t help much against a virus. Imagine something like HIV, only as infectious as flu. Maybe not a true existential threat, but potentially bad enough to completely destroy civilisation as we know it.
True, but the big threat right now is more bacterial, with antibiotic resistance. A viral problem is less of a threat, in my opinion. Sorry if I made it sound like I thought antibiotics would help with a viral threat.
I mean, I’d put antibiotic resistance down as a short-term problem, and a really bad viral outbreak that can’t be contained by quarantine procedures as a medium-term problem.
By “get smart enough” don’t you really mean “be made smart enough”?
Computers are already unbelievably powerful. Making them truly self-aware seems like extra, unnecessary work that is beyond our current capabilities.
You didn’t. I was just elaborating on your point.
Still, as you said, I think any disease is unlikely to kill all humans on its own. If nothing else, there are too many isolated populations on islands and in rainforests. Even if it was carried by birds, I can’t see it getting absolutely everyone. It could be a devastating part of a one-two punch scenario though.
Possibly not. A common theme of AI doomsday scenarios is the idea of recursive self-improvement leading to an “intelligence explosion”.
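To make the idea concrete, here is a toy numerical sketch of my own (the numbers and growth rules are entirely made up for illustration) contrasting steady, fixed-rate progress with improvement whose rate scales with current capability:

```python
# Toy model only: fixed-rate progress vs. "recursive self-improvement".
# The units and constants are arbitrary; this just shows the shape of the two curves.

def steady_progress(steps, gain=1.0):
    capability = 1.0
    for _ in range(steps):
        capability += gain               # same fixed improvement every step
    return capability

def recursive_improvement(steps, rate=0.1):
    capability = 1.0
    for _ in range(steps):
        capability += rate * capability  # improvement scales with current capability
    return capability

for steps in (10, 50, 100):
    print(steps, round(steady_progress(steps), 1), round(recursive_improvement(steps), 1))
```

After 100 steps the first curve has barely moved while the second has grown by orders of magnitude; that qualitative difference is all the “explosion” label is really pointing at.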
Civilization will be taken out by a flu pandemic.
(You can read the entire book Bird Flu at the link.)
It isn’t a real problem and Hawking is not in the AI world. I, on the other hand, am in the AI world, and nobody in the AI world is talking about what you’re talking about in a serious way, because it is a total non-issue at this time. As I stated above, some authors in the AI world are talking about how we keep our rather primitive AIs from acting against human interests out of ignorance, but not in the way you suggest.
That’s an exaggeration. While I lean towards your point of view, I wouldn’t claim that all the experts agree with me, because surveys suggest that a significant percentage of them don’t.
And again, I’d want to stress the timescale point. As far as I’m concerned, if AI poses an existential threat over the next couple of centuries, that is a serious problem worth discussing. The state of technology today is more or less irrelevant as far as I can see. In some ways, I suspect that people closest to the cutting edge may have their expectations of future progress unduly influenced by current capabilities. Current neural networks are glorified spreadsheets. Each “neuron” is a cartoon, bearing almost no resemblance to anything in a biological brain.
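To show what I mean by “glorified spreadsheets”, here is roughly everything a single artificial “neuron” in a standard network computes (a deliberately minimal sketch, not any particular library’s implementation):

```python
import math

def artificial_neuron(inputs, weights, bias):
    # The whole "neuron": a weighted sum of the inputs, squashed by a sigmoid.
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

print(artificial_neuron([0.5, -1.2, 3.0], [0.8, 0.1, -0.4], bias=0.2))
```

A real biological neuron has dendrites, ion channels, spike timing, neuromodulators and so on; the artificial version is a couple of spreadsheet formulas.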
In a future where computers may be hundreds of billions of times faster, it might be possible to give each individual neuron the power of a modern super-computer, and to evaluate millions of small networks automatically, scaling up those which seem promising. While it’s true that the human brain is hugely more complex than anything we can imagine building today, the significance of that fact may be overestimated.
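As a very loose sketch of the kind of automated search I have in mind, here is a toy “evaluate lots of small candidates and scale up the promising ones” loop. The (size, density) “network” configurations and the fitness function are made up purely for illustration; a real system would actually train and test each candidate:

```python
import random

def fitness(config):
    # Stand-in for "train the small network and measure how well it performs".
    # This made-up score just prefers a particular size and connection density.
    size, density = config
    return -abs(size - 64) - abs(density - 0.3) * 100

def mutate(config):
    size, density = config
    return (max(4, size + random.randint(-8, 8)),
            min(1.0, max(0.0, density + random.uniform(-0.05, 0.05))))

population = [(random.randint(4, 256), random.random()) for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:20]                       # keep the promising candidates
    population = survivors + [mutate(random.choice(survivors)) for _ in range(180)]

print("best configuration found:", max(population, key=fitness))
```

The point isn’t this particular loop; it’s that with enough cheap compute you can afford to be almost as wasteful as evolution, only far faster.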
Brains evolved over hundreds of millions of years, but there were significant limiting factors. Brains have to fit inside skulls, they have to operate on a tiny trickle of energy, and they have to justify, at every stage of evolution, any increase in power consumption with a corresponding benefit to reproductive fitness. They are built by the meandering, painfully wasteful mechanism of natural selection, which is not even trying to produce intelligence and is just as likely to reduce brain volume as increase it. As always, it can never go back to the drawing board, or reintroduce old ideas in novel combinations; it just tweaks and refines, building incrementally on what’s already there. It tends to increase complexity because it doesn’t “care” how complex something gets, not necessarily because that complexity is needed.
Machines are not bound by such constraints. They can be built to be as large and resource-hungry as a city. They do not have to heal themselves, and they do not have to devote much of their power to regulating bodily functions and so on. Additionally, much of the structure of the brain is highly repetitive. There doesn’t seem to be much fundamental difference between structures processing visual input and those processing auditory input. It looks like the brain is using certain multi-purpose learning algorithms over and over. Algorithms don’t necessarily have to be “designed”; they are out there to be discovered. Biology found some pretty neat ones without trying. It may be a question of how much we can speed up the search.
While I’m skeptical of some of the more optimistic projections about AI development, and fairly agnostic as to whether we will ever achieve strong AI, I have never heard a good argument that it is impossible. In fact, what limited indications we have seem to point to it being entirely possible.
Sure, it is an absolute statement, so allow me to amend it slightly. I’m not aware of any serious AI researcher who is talking about this in a serious fashion. I’ve not seen one paper on it in any reputable AI journal, nor one presentation at any high-quality conference. Of course, I’m not intimately aware of everything going on in the AI research domain, but I would like to think that if this were a major issue under consideration, I might have heard something. And even if there are one or two people looking at this in some hypothetical way, which I doubt because how could they, it isn’t considered a major issue in the AI community. At best it is some group’s small niche project.
I didn’t say impossible. I’ve been quite clear that I’m not going to make any prediction on when such an AI might exist. But it isn’t soon. And we’re already looking at how AIs can be opposed to human interests in a non-intentional way. My claim is only that, given how primitive our AI capabilities are right now, and given that we’re already looking at the problem with the scope it deserves, these things will be investigated and resolved in due course as AI advances.
Let me put it another way: the idea that somebody, somewhere, at some time is going to suddenly turn on an AI that is so surprisingly intelligent it can escape its box, develop its own capabilities at super speed, and devise some mechanism to work against humanity, all without any safeguards to keep it from doing so (or while disabling those safeguards), is vanishingly unlikely. We’re talking about computer systems and algorithms that don’t work at all like any computer system today, or any predicted computer system. So yes, if we want to imagine a threat, we can imagine a lot of things that have no basis in reality. But this thread is about how realistic the threat is. There’s nothing at all, either real or hypothetical, that suggests AI could be such a hazard.
No, that part wasn’t really aimed at you. My whole post was really more a response to some of the general criticisms of the whole concept of AI.
I don’t know whether I agree with you that “it isn’t soon”, because that may mean very different things to us. Certainly, I think some of the reaction to Hawking’s statements is not entirely fair. The quotes I’ve heard from him seem to be full of “if” and “could” and “perhaps” type statements. He makes a clear distinction between short-term risks, such as economic disruption, and long-term ones, such as losing control of a hypothetical superhuman AI. The closest I’ve heard him come to putting a concrete timescale on the arrival of AI was (if I remember rightly) about 70+ years. That’s much sooner than I would expect, but it’s not completely outlandish and absurd, and it’s not even clear what degree of AI he’s talking about. There really do appear to be a number of AI experts who would endorse that sort of timeline, though I suspect that this sometimes has more to do with the benefit they might derive from creating AI hype.
As for Hawking’s lack of expertise in the field, as I’ve said, that may not be particularly significant. Nobody on Earth has any expertise in the technology of 70 years from now. Experts in the technology of today may even be actively misled by naive attempts to extrapolate. True, AI systems of today bear no resemblance to anything which might plausibly pose a threat, but nor do I expect them to bear much resemblance to anything we are still using in 70 years. That’s a seriously long time at today’s rate of advance. My dad is not yet 70, and he grew up in a house without electricity. As an adult, he had a job which involved access to the “company computer”; it took up a whole floor of the building, took punched cards, and had significantly less power than my cheap, second-hand phone.
I don’t automatically share your optimism about solving AI problems as they loom closer to reality. Given that a single mistake could be incredibly costly, and given that some of these problems may not even have good solutions, I think it does no harm at all to start speculating about some of the potential hazards ahead. It’s important to plan ahead, even though circumstances will inevitably make nonsense of the majority of your plans.
I don’t hear many people arguing this point. However, there is another potentially significant difference between evolution’s approach to brain construction and an intelligent designer’s approach.
I would speculate that the most important differences in the cognitive capabilities of a mouse and a human are to do with scale. Evolution cannot decide one day to make a mouse’s brain a million times larger, or add an extra cortical layer, or dramatically increase the density of connections. No doubt there are other differences contributing, but it’s at least conceivable that, were this possible, it would have removed the need for the highly improbable sequence of selection pressures and complementary mutations required to make such an advance. The “progress” made over tens of millions of years might have been possible in a negligible fraction of that time.
A human engineer may indeed take equivalent decisions. While the idea of a surprisingly capable intelligence springing forth from nowhere is, I would argue, something of a caricature, it may not be as radically distinct from a potential reality as it first seems. Build something with the brain of a worm, scale it up by a factor of a few billion, and who knows what might result?
It may very well be that I’m too close to the problem. I know just how primitive our best AIs really are. They can do some impressive things, but they’re not very smart. I don’t think we’re a hop, skip and a jump away from strong AI. We need to make a major shift in how we think about computation before we’ll achieve strong AI, in my view. I think we’re only just barely starting to see the concepts that might lead to the systems we’ll need for strong AI.
I don’t think anyone who knows anything about AI is worried that we’re just a hop, skip and jump away from strong AI.
The problem is that it could easily be that when we first discover we’ve created a strong AI, it will come as a big surprise, because that wasn’t the intent of the mundane quantitative improvement its builders just implemented.
On the other hand, of course, the history of AI speculation has predicted this for literally decades. “Oh, if we just had 100 times the computing power we have now, then we’d be on a different planet.” And we’ve passed that 100x milestone several times since the 1950s, and we’ve got systems that are better than Eliza but sure aren’t HAL 9000. So just piling more transistors into a heap surely isn’t the answer, as was naively speculated back in the olden days.
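Just to put rough numbers on that “several times” (my own back-of-the-envelope figure, assuming a Moore’s-law-style doubling of compute every couple of years since the mid-1950s):

```python
# Back-of-the-envelope only: how many "100x" jumps since the mid-1950s,
# assuming compute roughly doubles every two years (a crude assumption).
import math

years = 2015 - 1955          # mid-1950s to roughly when this discussion took place
doublings = years / 2
total_growth = 2 ** doublings
hundred_x_jumps = math.log(total_growth, 100)
print(f"~{total_growth:.0e}x more compute, i.e. roughly {hundred_x_jumps:.1f} successive 100x jumps")
```

Call it four or five 100x jumps, and we still don’t have anything resembling HAL.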
To me, that seems very unlikely. Strong AI is not going to be achieved by some mundane improvement. It will be achieved after a radical shift in computing, at least in my view.
So how about those asteroid strikes?
I agree, and don’t even get me started on “mind uploading”. That seems about as realistic to me as colonising the surface of the sun.
Yes, current computers are stupid. When you ask them questions, you generally get stupid answers. If all you do is make them a million times faster, you’ll still get stupid answers, only really quickly.