Put a value on a human life

So as I was watching something about A.I. robots, it made me think about how much one would cost, and how much more or less value it would have compared to a human being.

Obviously the answer is that a robot/machine could never be more valuable (intrinsically) than a real human being (the moral answer). But let’s not kid ourselves. We all know that one day, when A.I. robots are perfected and become 10x more capable than a human being, the price tag on an AI robot will be higher than that on a human being (the unpopular/unethical answer).

So that leads me to ask the question: what price tag/value would you put on a human being’s life? Is a human being worth $10 million? $100 million? $10 billion? Now obviously this answer would differ for each individual, because we value certain people more than others. For example, you would place a much higher price tag on your own mother than you would on some bum on the streets.

But without indicating which human life it is and what relation it has to you, set a price / value for the average human life.

It’s a meaningless question without more context.

Am I trying to do an ROI on employing a robot to replace a bunch of human workers?

Am I calculating damages from a self driving car running over someone’s mom (or a street bum)?

Am I trying to determine if I should deploy a robot to save a child in a burning building?

Is my self-driving car trying to decide between running over a bum and crashing into a car full of promising college seniors?

Value to whom?

Interesting question but IMO bad comparison.

There’s no ethics involved in an AI/robot life. The price is like what you would pay for a car: it depends on what kind of car you want and how much you can afford. When you’re sick of it you can sell it or junk it for scrap.

A human is different. Ethically and legally, a human life has no price. Slavery is illegal, so you can’t buy one, and killing one means jail time.

There is also the emotional attachment. If you were asked to pay everything you owned to save the life of your child or a family member, you’d likely do it. But if it was a stranger?

Human life also has different value in different societies and in different situations. Armies will send troops in to die without a qualm.

The insurance industry pays liability settlements on deaths all the time. People who earn a lot of money cost the most, followed by healthy young adults. Old people (who lost fewer years of life) tend to be priced lowest. But especially when a jury is involved, there’s an enormous variance in what “a human life” costs.

It used to be that the average was ~$2M. I think it’s been going up recently.

And the point about context is very true. The precise situation matters a lot.

Technically, that’s not the cost of a human life - it’s the cost of a human death.

Have fun being enslaved by our robot overlords!

Well, yes and no. Armies don’t have infinite trained soldiers, so there is still a bit of calculus about whether the mission objective is worth the potential casualties…

Value is dependent on context. In a business context, the value may be based on production alone, with no moral component factored in. Example: Human A produces X widgets per hour and is therefore worth X dollars to the company; Robot B produces Y widgets per hour and is therefore worth Y dollars to the company.
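
To make that concrete, here’s a minimal sketch in Python; every number (widget price, output rates, hourly costs) is made up purely for illustration:

```python
# Production-only comparison: no moral component, just dollars per hour.
# All figures below are invented for illustration.

WIDGET_PRICE = 5.00  # revenue per widget, dollars (assumed)

def production_value(widgets_per_hour: float, cost_per_hour: float) -> float:
    """Net value to the company, in dollars per hour."""
    return widgets_per_hour * WIDGET_PRICE - cost_per_hour

human_a = production_value(widgets_per_hour=10, cost_per_hour=25)  # hourly wage
robot_b = production_value(widgets_per_hour=40, cost_per_hour=60)  # amortized cost

print(f"Human A: ${human_a:.2f}/hr, Robot B: ${robot_b:.2f}/hr")
# In this purely economic frame the higher number simply wins.
```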

In other contexts, value may be based on morality alone, with no production value. Example: your pet cat (a conscious being) may be worth a million dollars to you, but worth zero dollars to someone who doesn’t like cats. A non-conscious pet robo-cat is worth just the cost of replacement, with no morality involved.

Some contexts may involve both production and morality. Example: in a wrongful death legal case, an award may be based on a predetermined value of human life in general plus the anticipated production value of that human over a period of time. Robots (as they are now) have no place in a wrongful death case, but may have production value in other legal cases.

It gets fuzzier if and when robots become conscious (experience qualia). Many animals, including all mammals and birds, are conscious, but as far as we know, no AI robots are conscious yet (though I believe they are close).

It gets even fuzzier if and when robots become self-conscious (the conscious awareness of oneself as distinct from the world outside). Few animals (as far as we know) are self-conscious. Besides humans, some mammals, some birds, and some cephalopods are believed to be self-conscious. Someday AI robots may develop self-consciousness, and a higher moral value will come into play.

Certainly self-consciousness has value (to someone) and IMO should be valued higher than consciousness alone. The dollar amount of that value is debatable.

Sentience in robots is hard to truly program in, not terribly valuable, and probably of negative value to the owner of the robot.

Watson could beat the Jeopardy champs, but is not sentient. We could presumably build a Dr. Robot which would outcompete all human doctors with its knowledge and skill, then clone Dr. Robot so there were many Dr. Robots. All without sentience. Dr. Robot isn’t going to care when you turn it off. Should it?

I like my sentience. Other sentient beings may be competitors. Highly skilled, intelligent, non-sentient robots aren’t competitors, they can be owned by me and used by me. So they have an advantage in usefulness, TO ME, over other sentient beings with their own interests.

A sentient being who is in danger of not being in the ownership class of sentient beings may have an interest in “converting” the intelligent robots to sentience so as to make them less effective servants. I suppose if they could create a bug or virus that would do so for the robots, maybe they would. It’s not something that is going to be useful to the owners of the robots and the owners will be trying to prevent sentience.

We made the calculation in my university years, in the mid ’80s: a human life, based on what the government (West Germany at the time) was ready to spend to prevent one death, was worth between one and five million Deutschmarks. The exchange rate was ~3 DM to the dollar, so a human life was worth between $330,000 and $1.5 million. You may wish to adjust for inflation; that would treble or quadruple the price over almost 40 years.
The examples we looked at were: How much is the government ready to spend to avoid so-and-so many deaths on a particularly deadly stretch of road? How much is the government ready to force industry to spend to avoid X statistical deaths from pollution/bad safety/injuries, etc.? (Those were the days the government was thinking about banning leaded petrol and making catalytic converters mandatory: a big deal in car-obsessed Germany.) Things like that. I am quoting from memory, evidently; I may be a bit off, and there was quite a spread. I clearly remember thinking that a German human life seemed cheap on average. YMMV.
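
For what it’s worth, here is that arithmetic as a quick sketch (using the remembered figures above; the 3.5x inflation factor is just my assumed midpoint of “treble or quadruple”):

```python
# Back-of-the-envelope redo of the mid-80s estimate.
# All figures are remembered values from the post, not official statistics.

dm_low, dm_high = 1_000_000, 5_000_000  # DM per statistical death avoided
dm_per_usd = 3.0                        # ~3 DM to the dollar, as quoted

usd_low = dm_low / dm_per_usd           # ~$333,000
usd_high = dm_high / dm_per_usd         # ~$1.67M (recalled above as $1.5M)

inflation = 3.5  # assumed midpoint of "treble or quadruple" over ~40 years
print(f"Mid-80s: ${usd_low:,.0f} to ${usd_high:,.0f}")
print(f"Today:   ${usd_low * inflation:,.0f} to ${usd_high * inflation:,.0f}")
```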

What kind of price tag are we talking about? Already, you could easily find situations where the price to buy the services of a piece of equipment (e.g. to rent a truck for a few hours) is more than the price to buy the services of a human being (e.g. to pay a worker for a few hours of work).

I’ve read that the aircraft/airline industry values a life at $6 million. That is to say, if a piece of technology would save 100 lives but would cost over $600 million to install fleet-wide, it’s not worth it.
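
If I’ve understood that rule of thumb correctly, it’s just a cost-per-life-saved threshold. A rough sketch (the $6 million figure is from the post above; the function name and test numbers are my own illustration):

```python
# Aviation rule of thumb: install a safety feature only if the
# fleet-wide cost per life saved stays at or below the threshold.

VALUE_PER_LIFE = 6_000_000  # dollars, as quoted above

def worth_installing(fleet_wide_cost: float, lives_saved: int) -> bool:
    """True if cost per life saved is at or below the threshold."""
    return fleet_wide_cost / lives_saved <= VALUE_PER_LIFE

print(worth_installing(600_000_000, 100))  # True: exactly $6M per life
print(worth_installing(700_000_000, 100))  # False: $7M per life
```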

Interesting replies. I like the insurance examples which could either be the price of a human life or the price of a human death as someone put it.

Once again, I’d like to add that ethically/morally, it would be wrong to value a robot/machine’s artificial life over a human life. Morally/ethically, I believe the right answer is that all human life is equal, and it would be wrong to value a certain type of human over another (i.e., based on race, education, culture, job/career, skills/talents, etc.). It would be like asking a parent to rank their 4 children from most valuable to least. No way in heck they’d ever do that. To them, all 4 of their kids are equally valuable no matter what they do with their lives or what personality or skills they have.

But in the business world, we know that when it comes down to money, people will value inanimate objects over human life even if they don’t publicly admit it. It would be a PR nightmare, or outright company assassination, if you came out as the CEO and told the world you think your $2 million Bugattis are worth more than some homeless people’s lives. The truth is they really do believe it, and they would not choose saving a homeless person’s life over saving one of their Bugattis.

So in the business world, there is a value / price tag for human life. I agree it will vary based on a lot of factors and the context.

If I were God, and I wanted to run the Abraham-and-Isaac sacrifice faith test, this is what I mean. Let’s say I took Jeff Bezos or Elon Musk or Mark Zuckerberg and told them they have to sacrifice and give up their companies (worth billions) to save one small village in Africa of about 200 people. Do you really believe that Jeff, Elon, or Mark (or anyone else you want to throw in who’s worth billions, Jay-Z even) is going to say, “Yes! I will give up all my companies in order to save the 200 lives”? I’m 100% positive they would not, especially for people in a small village in Africa they know nothing about. They wouldn’t say it outright honestly like that, though. They would spin it as some sort of unreasonable business decision: they can’t destroy an entity they spent decades building up to a worth of billions just to save those people in a village in Africa.

Now of course, if it was presented to them as saving their entire family or their companies, they would save their family.

But we don’t even treat our own lives as “priceless.” Because we all die, sooner or later.

Maybe if we were immortal on this earth, lives could be priceless. But we aren’t. We are going to have some unknown number of days in our lives, but the upper bound does exist, and we should live accordingly.

I brought this up all the time with Covid, and still do: the clock is ticking, and we can’t just keep measures going forever without a cost, because we aren’t going to get that time back. I myself have experienced multiple life changes with my loved ones due to health issues that have nothing to do with Covid. If you think you can keep today going forever, you can’t. It’s going to change whether you want it to or not. Cost calculations are absolutely relevant.

That’s not far off my “I think it was running around $2M, with a lot of variance, but seems to be increasing recently”

Yes, we are in the same ballpark. We are not alone in this thread, I infer that it is a valid point of view.

Yes, and you can try to influence those factors and context. You want the “value / price tag for human life” to be infinite, i.e., always higher than a robot’s no matter how valuable the robot may be, but that is utopian; all you can manage is to raise it. It will never be infinite, but if it rises high enough to change a decision where a life would otherwise have been lost, that is something one can aim at. Case by case, that would be progress.

Yes Janet, life’s pretty cheap to that type.

Years ago, I was friendly with a guy whose wife developed breast cancer. It turned out they had lousy health insurance. If she had tried to treat it, she would have bankrupted him. So she didn’t. She chose cheap palliative care. And she died in less than a year.

I don’t know what her odds were, but she decided the odds of survival weren’t worth the cost.

I think a highly developed country like West Germany has more resources and is likely to spend a lot more to prevent deaths than a poor country like Bangladesh.

Anyway, morals aside, there is an abundance of human beings in the world right now, the most there has ever been, and each day more are added. Does the value of any one of them add up to anything at all? AI humanoids are still very rare, and thus expensive. We spend millions on habitat restoration and protection for endangered animals (especially the cute ones), but saving a village of 200 people in Africa is not likely. Does scarcity play into how valuable something is?

If and when AI robots achieve sentience, then they should be afforded the same respect and treatment that we provide sentient animals. All animals with a central nervous system are sentient—it’s a relatively low bar to achieve. Generally, most people don’t want sentient animals to suffer and society has laws against torturing or mistreating them. A sentient robot should have at least this level of protection and care.

Self-awareness (SA) is a subset of sentience and is a much higher bar to achieve. Self-aware species are not only aware of their surroundings, they are also aware of themselves as individuals. They pass the “mirror test.” They are a “who”, not a “what.”

Only some mammals, birds, and cephalopods are believed to be self-aware. Generally, we treat (or should treat) SA species with even more care and protection than we do mere sentient creatures. Why? Because they have a greater capacity to suffer and experience loss (and other emotions). Their life matters to them and that should be respected.

The emergence of self-awareness is not well understood. Maybe AI will never achieve SA. But if it does, AI should be treated at least as well as we treat other self-aware species—as individuals, not things.

Ideally, all SA species should be on equal footing with humans. But, we live in an anthropocentric society, so that’s unlikely. If we lived in an octopus society, we probably wouldn’t treat humans all that well either.

SA is not all or nothing; there are degrees of self-awareness. If AI achieves a higher degree of SA than humans, should its existence be valued higher than ours? Maybe not by us, but it sure will be by them. So, it’s probably a good idea not to arm robots with trigger fingers. :fearful: