Is Harari right about AI being mostly negative?

If I need money, and I decide that the most efficient way to get it is to shoot you and take your money, that’s evil. You don’t have to be malicious to do evil; more often than not, you just have to be logical.

I’ve never understood the difference between amorality and evil, if the end result is the same.

I don’t disagree, it’s just that we’re talking about a different kind of intelligence here - one that may be (in some sense) innocently amoral.

Consider:

  1. Evil is the absence of good.

  2. An AI can be neither intentionally evil nor intentionally good.

  3. However, as evil is defined solely on the basis of whether or not good is present, the fact that an AI cannot be intentionally evil is irrelevant; the only thing that matters is that it cannot be intentionally good.

  4. Therefore, AI is evil.

BTW: it’s “Harari”, not “Hurari.”

As an aside, while I don’t think I’ve met Yuval myself, his mom briefly worked for my dad as an office manager.

Either I’m misunderstanding, or that seems like an incredibly un-useful definition of “evil”.

Surely there are countless objects and actions for which the descriptor “morally good” is inappropriate.

Is the rock over there good? What about this morning, when I chose to wear one shirt instead of another? Was my choice morally good?

I think more concerning and more likely than a world run by some malevolent super-AI is a world run by thousands of relatively dumb AI that replace most human interaction and decision-making. Not that they would be particularly “evil” or “amoral”. But it would create a world that is very isolating and dehumanizing.

Well I think one of the problems is that to an AI there is no such thing as morally good or bad. Using a rock as a paperweight is no different from using the same rock to bash a baby seal and make a shirt out of it. And unlike a (non-psychotic) human, an AI doesn’t feel any sort of guilt or regret or social stigma or other consequences.

Which just reiterates my previous concerns about the isolating and dehumanizing aspects of AI. Technology makes things convenient and there are plenty of examples of work where there is no downside to automating it. But unlike a human, an AI doesn’t care about making my day a bit better or whether I hurt its “feelings”.

So what you get is a sort of “Idiocracy” society where most people just go about their day doing whatever it is they do “because computer said so”. Amusing themselves with meaningless entertainments. Not really knowing how to interact with other people because most of their day-to-day interactions are with bots of various kinds. Or their human-to-human interactions are very superficial and transactional because that’s how they are on apps.

Long term, such a society might become “Matrix-like”, where everyone lives in a world of AI-generated bullshit created to influence their behavior and spending patterns. It might even encounter “paperclip problems” or other apocalyptic thought experiments, where some emergent behavior causes these interrelated systems to devote a dangerously disproportionate amount of resources to non-socially productive activities like paperclip manufacturing, cryptocurrency mining, or extracting human spines for cancer cures.

I think this is an interesting debate, although I suspect it could turn into a philosophy hijack that would perhaps be better pursued in its own thread. I think there is a distinction between:

  • An entity that is following motives that are basically driven by unconstrained pursuit of efficiency, and does bad things as a simple outcome of that course of action
  • An entity whose motive is to purposely subvert any request so as to bring about an undesirable outcome

The latter kind, although popular in comic books and hypotheticals about perverse genies, perhaps isn’t all that common in actual reality, at least not in the extremely powerful form. We do see traces of it in human nature, though, where people with, say, managerial power exert it to inflict deliberate harm on others, often in a way that is not consistent with the efficient attainment of any of their objectives.

Regardless, the distinction I’m trying to make is important in this discussion because there is a tendency for people to handwave AI safety concerns as ‘doomsaying based on the SF trope of the evil robot’ - where ‘evil’ in that case is the purposeful, bad-for-the-sake-of-being-the-bad-guy type, but this is generally not the basis for the safety concerns being voiced.

I think it’s more that certain varieties of shortsightedness often lead to “evil”, or a good facsimile thereof. In this hypothetical example, the shortsightedness is of an AI that doesn’t understand the context it’s supposed to be making decisions in, and so makes logically correct proposals that fundamentally miss the point. The “most efficient” route to a solution to any reasonably complicated problem is almost always going to be “evil” just because it’s going to bulldoze its way directly from point A to point B with no consideration of the damage done. Just as if you had a literal robot bulldozer that went from point A to point B with no consideration of what got run over or smashed through.

The difference being that a human is actually capable of understanding that context, and is ignoring it out of irrationality or malice. Which is where the “actually evil” part comes in: the AI is too stupid to be evil. Dangerous, yes; evil, no.

Agreed - and that’s the point that eludes a lot of people engaging in the debate who don’t necessarily understand the nature of the technology; a frequent argument is ‘Well, why would we assume it’s going to be evil if we don’t explicitly program it to be evil?’ - which completely misses the mark, because what we actually have to do is to try to program (or condition) it to avoid being accidentally evil, in ways that are ‘obvious common sense’ to a normal, well-adjusted human adult with a lifetime of social conditioning.

Exactly, and hence my statement about evil being the absence of good. An AI acting in a purely rational manner will eventually create circumstances that, if they were created by a human being, we would undoubtedly consider “evil”. Making sure that an AI is not “intentionally evil” is not enough - we have to find a way to make it “intentionally good”, if we can figure out what that even means.

Human beings are also capable of ignoring the context out of indifference. In fact, I’d call that more common. Take, for example, the transatlantic slave trade. The people who took part in that (who were inarguably evil) weren’t being irrational, and they didn’t hate African people - like an AI, they simply didn’t care. In a way, that’s even worse than malice.

A slightly sneering tone towards regulations, but isn’t that the main thing being advocated here?

I mean, the only alternatives to government getting involved that I can think of are:

  • Industry self-polices: pretty much never works for anything non-trivial
  • Public boycotts of companies using AI dangerously / irresponsibly: but how will we know?
  • Just everyone hears about this and similar articles, and we’re all individually convinced to never allow AI to do X, Y or Z: not realistic

Yes. I think rules are required. There is no sneering on my part. But these are necessarily vague, and politicians in general often seem neither to understand technical issues nor to move quickly to address them. Tech companies often seem to feel superior to politicians, e.g. “Senator, we [Facebook] are an advertising company.”

California rules? Odd to have Musk supporting this, or Pelosi not?