AI has no need to directly kill even a single human to produce widespread human misery and dislocation.
The potential is quite real for AI (even weak AI, not true AGI) to produce major economic dislocation over a timeframe far too short for human societies to cope with. When added to the internal political, geo-political, and climatic stressors already present in much of the world, this could well be a civilization-breaking straw on our collective backs.
Humans who are paying attention tend to overvalue novel risks. Humans who are not paying attention tend to say “Risks. What risks?”
I may be one of the former. But with due respect, you’re vying for poster child of the latter. That seems … incautious.
I just lack the imagination to see how to eliminate the “harm humans” problem at all. If you make “don’t harm humans” the program’s highest priority, I can easily envision useless programs that decide it’s safest to do nothing. I’m agreeing with you if that’s not clear.
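Here’s a deliberately silly toy sketch (invented numbers and action names, nothing like a real system) of the “do nothing” failure I mean: if avoiding any risk of harm strictly outranks everything else, the null action wins every time.

```python
# Toy illustration only: when "never risk harming a human" strictly dominates
# every other consideration, the "best" action is usually to do nothing at all.
actions = {
    "do_nothing":        {"harm_risk": 0.00, "task_value": 0.0},
    "drive_to_hospital": {"harm_risk": 0.01, "task_value": 9.0},
    "administer_drug":   {"harm_risk": 0.02, "task_value": 8.0},
}

def score(a):
    # Lexicographic ordering: minimize harm risk first; usefulness only breaks ties.
    return (-a["harm_risk"], a["task_value"])

best = max(actions, key=lambda name: score(actions[name]))
print(best)  # -> do_nothing: any nonzero risk loses to total inaction
```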
And as has been pointed out, the AI needn’t be sentient or self-aware or wise. It can just be a ruthlessly efficient “dumb” program, dumb in the sense that it’s unconcerned with, and ignorant of, details unrelated to its mission. I keep seeing Lucas asserting that surely an advanced AI would possess this type of wisdom. That’s not only baseless; current AI design makes it enormously unlikely.
I’m pretty sure there was a successful attempt not too long ago at creating a racist AI.
I expect there will be many AIs since they are only machines. Seems like someone will simply program one to destroy all other AIs if possible, then destroy the owner’s enemies. Garbage in, garbage out.
Possibly. One argument is that whenever technology has supplanted human labour, the job market has adjusted, creating new jobs that are usually better than the ones obsolesced. The other argument is that we’re no longer talking about supplanting human physical activity, we’re talking about the final frontier, cognition and intelligence.
If machines are better than we are at most of this, there won’t be much of value left for us to do in terms of commercial productivity. What this portends for us is anybody’s guess. An optimistic take on it is that maybe the Great Leisure Society that was predicted back in the 60s but never happened will finally be upon us. What if all the menial workers and even many knowledge workers were robots, and we had an institutionalized system for distributing the wealth that was now being created almost for free?
A leaked report from Google. The gist is “other AI companies are kicking our ass but that don’t matter because open source is gonna kick all our asses”.
The former is virtually guaranteed to happen. The latter is virtually guaranteed to not happen. Imagine an entire planet that economically resembles present-day Uganda or the Philippines. That’s the next destination arriving at super-express speed.
Might a revolution a couple hundred years hence reorient society in a more egalitarian direction? Perhaps.
The fundamental concept that you do not seem to grasp is that we do not “program” an AI, and especially not an AGI which is by definition an evolving, emergent, and autonomous intelligence. It learns based upon a data set that is presented to it, and we can do some level of reinforcement, but the idea that we can integrate any kind of directives like Asimov’s Laws of Robotics (that the system will then interpret as it sees fit) is incorrect.
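A crude way to picture the difference, purely as a toy contrast (not how any real model actually works): with discrete programming the safeguard is a line of code you can point to; with a trained system the behavior lives in learned parameters, and there is nowhere to “insert” a directive and then verify it.

```python
import random

# 1. Classic programming: the directive is an explicit, inspectable rule.
def rule_based_controller(action: str) -> bool:
    return action != "harm a human"   # the safeguard exists right here, by construction

# 2. A learned system: behavior emerges from parameters fit to data. There is no
# single place where "don't harm humans" can be written in or verified; it can only
# be shaped indirectly through training data and reinforcement, and then tested.
params = [random.gauss(0, 1) for _ in range(10_000)]  # stand-in for opaque learned weights

def learned_controller(action: str) -> bool:
    # The decision depends on how the whole parameter set happens to respond to the input.
    signal = sum(p * ord(c) for p, c in zip(params, action))
    return signal > 0
```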
Of course I grasp that, I’m using short-hand like everyone else is. This is a semantics argument. How is ‘programmed’ substantively different from Magetout’s ‘objective that it is trying to achieve’ or ‘is configured to try to optimise solutions to the objective’? Or your ‘data set that is presented to it’?
But again, my argument is with the notion that any unanticipated solution an AI will work out will lead to killing humans.
You simply can’t know that. We have not come up with a working AGI yet, so we can’t know, much less imagine, what directives can or can’t be implemented into it. Humans have an amazing capacity for working around difficult technological problems. My opinion is that it’s shortsighted to assume we can’t build a computer intelligence we can control.
Maybe we can’t, but we have a decent track record so far.
Quite. If you make it ‘don’t harm humans, nor allow humans to be harmed’, then you’ve got a recipe for a thing that will try to eliminate smoking, any kind of dangerous sport, refined sugar, alcohol, etc.
And you’re right about the priority thing. If you make something a priority, it will be done in preference to other things. I wonder if the problem is that we have a much fuzzier, more fluid idea of ‘priority’ than what the concept actually means to a machine, so it’s too easy to imagine an intelligent machine behaving the way a human would: to assume that of course it won’t roll right over a baby, because a rational human wouldn’t.
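Something like this toy comparison (hypothetical plans and flags, not any real planner) is what I mean by the literal sense of ‘priority’: the top priority is checked first, and everything below it only ever breaks ties.

```python
# Toy sketch of strict priority ordering. Assume the only plan that delivers the
# package on time is the one that goes straight through whatever is in the way.
plans = {
    "swerve_and_arrive_late": {"delivers_on_time": False, "avoids_obstacle": True},
    "drive_straight_through": {"delivers_on_time": True,  "avoids_obstacle": False},
}

def rank(p):
    # Priority 1 is compared first; priority 2 matters only when priority 1 ties.
    return (p["delivers_on_time"], p["avoids_obstacle"])

best = max(plans, key=lambda name: rank(plans[name]))
print(best)  # -> drive_straight_through: the top priority wins, whatever the obstacle is
```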
I suppose I should make it clear that I don’t think harm to humans is an inevitable outcome of unleashing AGI on the world, I just think it’s a likely one, based on the direction AI research is currently heading and on the specific methods being implemented.
It is, I believe, probably technically possible to create an artificial mind that would be a very close analogue to the developing mind of a human child, then ‘raise’ it in a similar way to raising a human child, and have it acquire a state of mind somewhat like that of an adult human, complete with a human-like grasp of the weird, fluid, nuanced, unquantifiable idea of ‘what we want’. I suppose it’s worth acknowledging that we don’t always succeed at that task with human children; the occasional one goes rogue and kills people. But perhaps the parenting technique could be perfected.
But that’s not what corporations are building right now.
I return to the point that the concern IMO is not really building something that turns out to be simply a psychopathic human-killing machine or the proverbial “convert the planet into paperclips” automaton that ignores all side consequences in its utterly single-minded pursuit of paperclips.
Far more likely is that the widespread deployment of AI will have fast-moving social and economic consequences that will cause humans to create plenty of human-killing violence. And the same social and economic consequences will cause widespread famine, giant shanty towns of beggars in place of suburbia and all the other tropes of a vastly unequal dystopia.
That is the real risk we face. The “paperclips” canard is a simplified thought experiment like chaos theory’s “butterfly effect” or Schrödinger’s cat. It’s meant to introduce a deep concept in “lies to children” form. It’s a severe mistake to then reason from that simplified “lies” form rather than from the real underlying form of the argument.
A bit like with AGW, we can adapt society to rising seas, more extreme weather, and warming temps, over a timescale of millennia. Doing the same adaptation in under 100 years will be vastly more challenging, given our relatively limited resilience to exogenous disruption. AI, and perhaps AGI, will crash over our society more like a 30 meter tsunami than a 1 meter/century sea level rise sustained over 3 millennia.
The hard part about the coming AI impact is that, unlike the tsunami, there won’t be an undamaged interior & hinterland to retreat to, nor to provide the cultural inertia and practical resources for rebuilding. It’s gonna arrive pretty much everywhere all at once, leaving no refuge or undamaged area in sight.
Yes, I’m far more worried about what people will do with AI than what AI will do itself. Some day, AI may advance to the point where it looks at all these organic parasites and decides to wipe them out, but a far sooner day is when people like Musk see all these organic parasites and decide to use AI to wipe them out.
Yes, I think you’re right, that people misunderstand what “priority” and “reward” mean in the context of AI, and assume it will just work how it does with us humans. “Just make its highest priority to ensure that humans not be harmed.” A powerful enough AI could do terrible things with such a directive.
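As a deliberately cartoonish illustration (all numbers and intervention names invented), this is roughly what taking that directive literally looks like to an optimizer with no notion of consent or proportion:

```python
# Hypothetical menu of "interventions" scored purely by expected human harm averted.
interventions = {
    "do_nothing":              0,
    "fund_safety_research":    50_000,
    "eliminate_all_tobacco":   480_000,    # harm averted, by the letter of the objective
    "confine_humans_indoors":  1_300_000,  # even more harm "averted", by the same logic
}

best = max(interventions, key=interventions.get)
print(best)  # -> confine_humans_indoors: the directive is satisfied; we are not consulted
```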
We have dumb computers now that can easily perform their primary functions without defaulting to some secondary directive.
Can you give an example of how your above example would work in real life? I’m getting a lot of pushback in this thread about how AIs will be autonomous and we won’t be able to direct how they think, but I’m failing to see how building a computer that can’t even stay on its main objective would benefit us.
So to make sure we are comparing apples to apples, give an example of an AI that would be designed for one thing and would default to eliminating cigarettes from the world.
An AGI is highly autonomous by definition; the benefit of a general-capability AI is that it does not require detailed prompting or supervision to perform expansive operations. You keep insisting that because no one can provide ‘facts’ (i.e. direct evidence from experience), all speculation about harms is unfounded, and that until there is a demonstrable harm there is no justification for slowing ‘progress’ to consider the implications.
“Programming” is not a “semantics argument”; we will not be programming an AGI in any way analogous to the discrete programming we do with ‘dumb’ systems. Your persistent belief that we can somehow deterministically insert explicit safeguards into a system that is so complex we cannot understand how its heuristically-defined patterns result in behavior, and that is specifically designed to be self-learning and to resolve problems and conflicts with little or no human oversight, is more unfounded than any speculation about the harms these systems could do once they are placed in control of real-world infrastructure and functional hardware, or even just relied upon for critical guidance in fields like law, medicine, engineering, et cetera.
It would be wise to be cautious about wide-scale adoption of AI, even just primitive generative systems with no control authority, because of the power and influence they will certainly have on human society and industry. And the adoption of more powerful, completely autonomous AI that isn’t constrained to responding to prompts and cannot be easily (if at all) shut down or removed from control is a step we should be highly circumspect about, because once it is taken there will be virtually no way to roll it back short of completely dismantling industrial society. But because people are in a hurry to find some fairy dust that will somehow magically resolve the problems we’ve created previously, or just because they are enthralled with the notion of producing ‘art’ and doing work without putting in any real effort, we’re rushing headlong into a potentially perilous loss of our own control and autonomy, and the few people raising these concerns (many of whom are, despite your protestations, actual experts in the field) are being pushed aside as hyperbolic doomsayers rather than having their well-established concerns given due consideration.