When I said moving closer to the poles I didn’t mean anything that extreme. A six degree shift would render most of the tropics and subtropics effectively uninhabitable (see image below). That’s what I mean by shifting people towards the poles. All the people living in those areas, which is billions of people, would be moving North/South (mainly North, obviously, as there is more land mass to move to) towards the current temperate and frigid zones. The death toll would be catastrophic, but nowhere near existential.
Maybe I missed it, but I’m surprised nobody’s suggested that a large change in solar activity would do for us. We think we know about our star, but almost everything we know is based on our observations of how it has behaved during the extremely short time we’ve been looking. Admittedly we have countless other examples of stars to look at to reassure ourselves, and scientists seem to have a fairly good grip on the cycles of the sun, but if the sun dramatically entered a hyperactive period (such as a solar superstorm), or if its output were somehow reduced, that could be very problematic.
Talk about a huge moon-sized object coming in from outside the solar system to impact the Earth is interesting, but a large extrasolar object wouldn’t have to bullseye our planet. A large enough object could come close enough to eject Earth out of its orbit - or, more likely, disrupt the solar system, bringing about a heavy bombardment period where deadly asteroids are pinging about in all directions and numerous planets are getting flung out of their orbits and interacting with each other. Even if we don’t get thrown out of our orbit, we might not survive the sheer number of regular asteroid impacts.
A close enough gamma-ray burst aimed directly at Earth would also have devastating consequences. These come out of supernovae, and a long enough, close enough burst could strip the Earth of its ozone layer and combine large amounts of oxygen and nitrogen in the atmosphere into nitrogen dioxide. All of this is unlikely to be existential on its own - but it could be if it occurred on top of another crisis.
There’s been a corner of my head worried for quite a few years now that we’ll accidentally kill off something tiny in the ocean that we haven’t even noticed, let alone named; but that will turn out to have been crucial to the food chains, and whose disappearance will cause the phytoplankton population to collapse, screwing both the food chain and the oxygen production capacity all the way up. We might well die without ever having found out just what went wrong.
Healthy ecosystems generally have a lot of resilience so that the destruction of any one member of them doesn’t have that sort of effect. But that’s healthy systems. We don’t know how much we’ve already diminished that capacity.
You might be missing what I mean. AI can transcend programming because it can learn. We are already developing primitive ‘thinking machines’ which can learn. Stephen Hawking suggested AI could end mankind. So did Frank Herbert.
A few problems here.
No, they cannot. There are two rough, broad branches to Artificial Intelligence. The first is Artificial Intelligence (AI) itself and the other is Computational Intelligence (CI), with some arguing that CI is its own thing.
The goal of AI is to produce algorithms that exhibit reasoning, with a particular focus on creating machines that can do things humans can do. The goal of CI is more purely focused on solving problems using AI/CI algorithms (my own research falls more into this branch). Machine learning is a very popular subbranch of both areas, and it is where many people confuse AI/CI with thinking, because it creates “machines that learn”.

Machine learning does not really create a learning machine as humans think about learning, and it certainly does not transcend programming, since much of the intelligence comes from the human designer. The focus of machine learning is to have a program that can find solutions to problems without being told an explicit methodology for arriving at the solution. In other words, it learns a solution, but here “learn” has more in common with “search” than with traditional learning (it varies a bit depending on whether we’re talking about supervised or unsupervised learning, but this is already getting long enough). The main point here is that the end goal, the ways to measure that goal, and the ways of encoding the space all come from humans. The machine is simply exploring that space to find a solution. So again, this does not transcend programming.
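To make the “learning is closer to search” point concrete, here is a minimal, purely illustrative Python sketch of my own (a toy example, not anything from the literature): the data encoding, the model family, the loss function, and the search procedure are all supplied by the human designer, and the machine only searches the one-parameter space those choices define.

```python
# A toy sketch of "learning as search". Every ingredient below -- the data
# encoding, the model family, the loss, and the search procedure -- comes
# from the human designer. The machine merely searches the space they define.

import random

# Human choice #1: the encoding -- toy data where y = 2*x plus a little noise.
data = [(x, 2.0 * x + random.uniform(-0.1, 0.1)) for x in range(10)]

# Human choice #2: the model family -- a single-parameter line y = w*x.
def predict(w, x):
    return w * x

# Human choice #3: the goal -- mean squared error, defined by the designer.
def loss(w):
    return sum((predict(w, x) - y) ** 2 for x, y in data) / len(data)

# Human choice #4: the search procedure -- crude random search over w.
best_w, best_loss = 0.0, loss(0.0)
for _ in range(10_000):
    candidate = random.uniform(-10.0, 10.0)
    candidate_loss = loss(candidate)
    if candidate_loss < best_loss:
        best_w, best_loss = candidate, candidate_loss

print(f"found w ~ {best_w:.3f} (true value is 2.0)")
# Nothing here "transcends" its programming: the program can only ever
# return some value of w from the space the designer laid out.
```

Real machine learning methods use far smarter search procedures than this, but the division of labour is the same: the human frames the problem, the machine searches within that frame.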
No, we really aren’t. We are developing machines that can be trained to perform specific tasks. No thinking required.
Stephen Hawking was a very good physicist, but he was not an expert in artificial intelligence. Second, he was referring to the future of AI, not the state of AI as it exists. He was saying that the development of a general artificial superintelligence could be disastrous. As I’ve said, we are not currently on the path to this. There is no known algorithm that can do this. There’s no known approach to even get there (with some scholars recently arguing it might be impossible). There are innumerable scientific issues (and possibly other technical issues) with developing such an AI.
So, at this time, AI is not remotely an existential threat to humanity. If we want to imagine some hypothetical future threat, then that’s fine, but that really opens the door to innumerable flights of fancy.
Quantum machine learning is the integration of quantum algorithms within machine learning programs.[1][2][3][4][5][6][7] The most common use of the term refers to machine learning algorithms for the analysis of classical data executed on a quantum computer, i.e. quantum-enhanced machine learning.[8][9][10][11]
Copy and pasting something from Wikipedia that you don’t really understand is not a compelling argument. There is nothing in quantum machine learning that overturns anything I’ve posted above. You cannot simply add the term quantum in front of something and say “Aha, this will solve everything!”, even if it sounds cool and super hi-tech.
So, for one thing, we don’t really have capable quantum computers yet. Quantum-enhanced algorithms are still just the classical algorithms, generally made more efficient, but that doesn’t magically turn them into thinking machines.
Why don’t you think I understand this?
Follow some of the links on the wiki page. Quantum-enhanced ‘thinking machines’ might evolve beyond their initial programming. That’s what Stephen Hawking and Frank Herbert were concerned about.
If you understood it, then you wouldn’t have posted it, because you would know how wrong it is.
The main focus of quantum-enhanced machine learning is finding ways to get the classical AI algorithms to run in the unique environment of a quantum computer.
By the way, I don’t know how to break this to you, but Frank Herbert was a science-fiction author.
So again, if you want to invoke something like a hypothetical AI - one that might never exist, and that there is no certainty can exist - as a threat to humanity, well, that’s fine, but I wouldn’t call it much of an existential threat to humanity in the here and now, or even the near future.
You may as well say that Mother Nature may modify the wind to cause humans to commit suicide.
@EastUmpquaq, I believe @BeepKillBeep is an AI researcher. Not someone you’re going to win an argument against about AI.
My own semi-trite take on the matter is that for any foreseeable future, AI is no more and no less a threat than any other dependence on computers in business, commerce, and society. The key word being “dependence”. We are long past the point that computational decision-making can be “turned off” without jeopardizing our whole socioeconomic infrastructure. Anyone who has ever had a customer service agent tell them that “I’d like to give you this discount” or “I’d like to do {whatever thing} for you”, “but the computer won’t let me” is getting a micro-glimpse of this phenomenon. It’s not about AI suddenly becoming evil or domineering, it’s just about these systems becoming entrenched in everything we do.
I know. But Stephen Hawking isn’t.
I appreciate that, but it is definitely possible to win an argument with me about AI. The literature is vast and there is far more that I do not know than I know. Quantum machine learning is not my area of specialization, but I know bits and pieces of it from reading some of the literature. If someone can present a good paper that says I’m wrong, then, well frankly, I’d be thrilled, because that would mean I learnt something, which always makes for a good day.
(Technically, I’m more of an applications of AI researcher since my work rarely advances the theory of AI itself with the exception of one of my papers on automatic algorithm inference)
Stephen Hawking was a physicist, not a specialist in artificial intelligence. And again, and this will be the last time I say this because I don’t do circles online, Hawking was not talking about the current state of AI. He was talking about a hypothetical future artificial superintelligence. You will note that he never posited that such a thing would happen, was probable to happen, or was on the verge of happening; rather, he said IF it happens it could be calamitous.
I’m not really trying to win an argument, but it would be really cool to get some insight on the future of quantum computing from an AI researcher.
I have little to offer in terms of insight beyond the state of the literature as I understand it. My understanding is that the current focus is on producing stable, capable quantum computers. The state of quantum machine learning is determining how to adjust the classical algorithms to run in the unique environment of a quantum computer.
Many futurologists would argue that the exponential growth of processing power means that AI is closer to reality than we might imagine. Today it might seem like it’s in the distant future, and then one morning very soon along comes an AI that’s as smart as a five-year-old and we all think that’s really cool. By the afternoon it’s too smart for us to control or understand.
Most people don’t take AI seriously because they fail to get their head around the power of exponential growth.
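To put rough numbers on that (my own back-of-the-envelope assumptions, not anything official), here is a quick Python sketch of what sustained doubling does over a human timescale:

```python
# Back-of-the-envelope illustration of exponential growth. The 18-month
# doubling time and 30-year horizon are assumptions for the example only.

DOUBLING_PERIOD_YEARS = 1.5   # assumed doubling time
YEARS = 30

doublings = YEARS / DOUBLING_PERIOD_YEARS   # 20 doublings
growth_factor = 2 ** doublings              # 2**20, roughly a million
print(f"{doublings:.0f} doublings -> ~{growth_factor:,.0f}x more processing power")
```

Twenty doublings is about a millionfold increase, which is the kind of scale people routinely underestimate.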
@thorny_locust I’m not a marine biologist nor a climate change scientist, but my moderate-ass guess is that these food chains have already been studied in considerable detail, and we can see the end coming like a slow-motion train crash, but there is no way we can get enough people/countries to cooperate to reverse it, until it is too late.
Additionally, as brilliant as he was, Stephen Hawking was wrong about some important things in his own field, let alone things outside of it. His fame was at least partly due to his popularization of cosmology as much as to original research. And history is littered with the detritus of scientists making proclamations outside their areas of expertise - e.g., the late Freeman Dyson’s groundless critiques of climate modeling.
That scenario is only possible if we are throwing that ridiculous amount of processing power toward an AI that is good at designing better AI, if we actually have the raw processing power required for a robust AI, and if the limitations of the AI we’ve designed allow it to generalize itself. None of which are impossible, but they are unlikely to be achieved in a day.
Exponential growth of computing sounds great until you consider the issues surrounding general intelligence (let alone superintelligence). We have no known algorithm for producing general intelligence. Even from a theoretical point of view, it is not entirely clear how to formulate general problem-solving in a machine-readable form. And then there is the issue of the curse of dimensionality, i.e., the more that needs to be considered, the exponentially faster the problem space grows (see the rough sketch at the end of this post).

Here’s the thing: we don’t really understand why we are intelligent. Psychology and neuroscience have been peeling that back little by little to understand how human thinking processes work, but it isn’t very well understood. Emotional memory seems to be a factor, which might be an issue for an unfeeling machine. And before someone says “We can simulate emotions!”, well, that’s great, except it is unclear if that would be sufficient.

And look, I believe we will someday create a general artificial intelligence (I putter away at a process towards this in my spare time), but in a discussion on existential threats to humanity… I’d put my bets on giant space rocks or super viruses before super AI, at least for the foreseeable future. There are too many unknowns.
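As a rough sense of scale for the curse of dimensionality mentioned above, here is a small, purely illustrative Python sketch; the ten-points-per-dimension figure and the billion-evaluations-per-second rate are arbitrary assumptions of mine, not anything from the research.

```python
# Toy illustration of the curse of dimensionality: if each additional factor
# a system must "consider" adds one dimension, and we sample each dimension
# at just 10 points, the space to search grows exponentially with the number
# of factors.

POINTS_PER_DIMENSION = 10

for dimensions in (1, 2, 3, 6, 12, 24):
    space_size = POINTS_PER_DIMENSION ** dimensions
    print(f"{dimensions:>2} factors -> {space_size:.1e} combinations to consider")

# Even at an (assumed) billion evaluations per second, the 24-factor case
# would take about 10**24 / 10**9 = 10**15 seconds, on the order of tens of
# millions of years to search exhaustively.
```

Brute force doesn’t scale, which is exactly why raw processing power alone doesn’t get you general intelligence.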