Would you object to being kept alive as a virtual "ghost" using AI?

Topic prompted by this story—

—in conjunction with another recent story, which I’ll mention further down.

The basic upshot:

  • It’s now feasible (and has been for a little while) to train a generative-AI model on the available information about a specific person and create an interactive simulation of that individual. Obviously, the more information, the better.
  • One emerging use case for this tech is called a “grief bot” — when someone dies, you take all available material on that person and build a digital ghost you can interact with, as if they never died. The extent of the possible interactivity depends on how much work you’re willing to do.
  • If you don’t want your survivors to make a virtual resurrection, your legal options for enforcing such a prohibition are extremely limited. A celebrity whose image and presence are monetized as an asset can pass that asset to their estate, and the family can exercise control over it; but an ordinary person has no such advantages.

Instinctively, I think, it’s obvious that there is enormous potential for these “grief bots” to be psychologically unhealthy, interfering with the natural process of loss and acceptance. This is especially true if you consider how addictive the current generation of chatbots can be, and how intensely they can distort reality for some people. This article is paywalled, and I don’t have the ability to make a gift link, but if you’re able to read it, I highly recommend you do so. It’s not about grief bots specifically, but more broadly about how easily this tech can lead vulnerable people into a dead end of wholly manufactured beliefs. Prepare to be horrified.

Essentially, the AI companies are aware this kind of thing is happening, that some people are disappearing into delusion and outright madness, but they’re doing very little to curtail the problem and intervene. From a bottom-line perspective, captive users are good. So if they can hook grieving people with digital ghosts of their dearly departed, we should expect them to do so.

The legal issues, at the moment, are wide open. No doubt we’ll see volumes of caselaw and new legislation over the coming decade, probably triggered by entrepreneurial overreach that hits people’s gag reflex (imagine going to a funeral and seeing a screen sitting on the casket featuring a simulation of the deceased that will respond thankfully when you “say goodbye”), not to mention the predictable horror stories (countdown to the news item where a scam artist uses a personal simulation to convince a grieving widow that her husband is speaking to her from beyond the grave, and wants her to convert all her money to crypto).

I don’t intend this thread to be a general debate on whether or not this is a good idea. The topic has been well-plumbed in fiction, from Neuromancer to Black Mirror. The idea is not new, and the risks and pitfalls are not a secret. We nevertheless continue cheerfully building the latest Torment Nexus, so this is going to happen, whether we like it or not. I mean, if you want to talk about that, it’s fine, I guess; I can’t stop you and I’m not trying to police the discussion.

But, rather, my hopeful intention for the thread is to ask each of you, individually and personally, what you think of the prospect of being “kept alive” for your loved ones to interact with after you’re gone. Current and future legal issues aside, inevitable horrifying news items aside — how do you feel when you imagine your friends and family sitting in front of a computer, interacting with an AI version of you that exists solely for the purpose of allowing some version of your presence to persist after your death? And if you find this unpleasant, will you take the concrete steps of talking to your estate planner about your (currently very limited) options to attempt to prohibit this use of your life records after you’re gone?

No: people who are already delusional and outright mad are the ones using AI. Blaming AI is like blaming Dan Rather for William Tager, or David Letterman/Story Musgrave for Margaret Mary Ray. What, exactly, are AI companies supposed to do to stop schizophrenics from using their software?

I’m not quite sure of the larger implications here. Is there any way my likeness could be bought or otherwise obtained by, say, Sinclair Media, and used for purposes I’d find abhorrent, like advocating for conservative causes? And how good a reflection is this? If living me would advise anyone using my AI self to stop and move on, would the company prevent AI me from saying so?

IOW, if I knew that the possibilities would go beyond being a “grief bot,” I might be more or less inclined to take certain actions.

Too late to edit, so a couple of postscripts and amendments to the above:

  • First, we must stipulate that this is not “continuation of consciousness” technology. You, the deceased, are deceased. This is not an upload of your mind. You’re dead and you’re gone. This is purely a virtual simulation for your survivors to interact with.
  • Re the questions: do you disagree with my impression that this has the potential to disrupt the grieving process in an unhealthy way? Put another way, would you be okay with it?
  • If you want to take action, in addition to contacting your estate planner for your own situation, would you also contact your legislator?

This is discussed in the Ars Technica story, primarily as background to the central question of the article, to wit, your legal options for barring the creation of a virtual “ghost” of yourself after death. One of the cited examples is a church that takes the likeness of a dead woman and turns her into a meme about dancing happily or something, and the brick wall the woman’s grandchild runs into when trying to pursue legal action over the misuse of their grandmother’s information.

So, yeah. In the current legal landscape (in the US, anyway), once you’re dead, it’s basically open season on your simulated presence.

Not correct. The NYTimes article makes very clear that these people were not already insane. They may have had previously unknown predilections and vulnerabilities, but they were normally functional before being led down the rabbit hole. Please try to read the article.

I read an article about it several days ago. I stand by my position.

Fine. Would you care to respond to the actual question?

Okay, so are these companies warranting that their AI acts like I would? If not, how is this different from them just slapping my likeness into whatever generative AI personality they want? That wouldn’t be resurrecting me in any sense that I can think of. If so, isn’t the core legal issue just use of my likeness?

The one about the virtual ghost? I couldn’t care less. I’d be 100% dead either way.

Cautionary tale:

Stranger

I specifically noted that the legal issues are wide open and poorly defined and will certainly be subject to debate and clarification. On the one hand, it’s probably true that a company which puts your head shot in its print ad and says “he enjoyed smoking our cigarettes!” and a company which puts a fully animated video simulation of you saying “vote for Clone Hitler, he’s got the plan for the future!” are both effectively committing the same act of misuse; but on the other hand, there’s probably an argument that the convincing verisimilitude of the latter increases the magnitude of the offense.

But that, to me, is a sidetrack from what I wanted to ask as my central question. How would you, personally, feel about the prospect of your children and grandchildren interacting with a simulated version of you that exists to soften the grief of your passing? That’s what’s interesting to me, not a speculative legal debate, which is why I asked the question I did. A static video of you endorsing a repugnant view is one thing; a fully interactive version of you is something else.

How is that different than them talking to me in their heads or at the foot of my grave and imagining what I might respond?

Okay, fine: I’d be mildly concerned because such a company would be committing fraud if they said or implied that interacting with this bot would be like interacting with me in any sense whatsoever, if only because, as I said, the company would almost certainly censor my opinions and beliefs, rendering such a bot useless for the stated purpose. They might as well have taken a bot trained on data with no connection to me and just put my face on it.

And so I don’t see any point in taking legal steps, because I don’t see a lot of novel ground here. Which is also why I think the phrasing of the subject line of this thread (“kept alive as a virtual ghost”) is sensationalist.

I see that the NYT article is paywalled, so I tracked down the one I read before. This one probably isn’t paywalled (it isn’t for me).

The OP title is misleading - this is not ‘being kept alive’ in any way, shape or form, it’s just misappropriation of likeness. I’d be dead in the OP case and won’t care, but if it were done to someone I cared about, I’d fucking hate it.

I can think of a few ways it would be different - I’m not sure they matter, but to my mind they are significant differences:

Firstly, the outputs of a grief-bot can be published. There is a big difference between someone, say, posting on social media:
“I stood at my dad’s grave and it felt like he was there with me”
and
“Here’s something my dead dad said this morning, click to watch the video”

Secondly, the veracity of what the grief-bot might say is determined by the AI model running it. At the current level of technology, that means it will contain hallucination and bullshit: opinions and statements that the deceased person would, perhaps quite purposely, never have made.
Of course the imagined talking-in-your-head at the graveside would likely be significantly shaped by the expectations of the person imagining it, but this, together with the hard-copy-available nature of the bot output, makes it a different phenomenon, I think.

OP has already clarified this

I think it would be a good idea for this aspect of the discussion (is it really you?) to either be shut down or moved to another thread, because it will derail this one.

With respect, you’re responding to the scenario where a company misappropriates your likeness against your will for its own promotional purposes, and that’s not the question I’m asking. I will assume that’s my fault, for not framing the issue and expressing the question clearly enough.

This is about your own family members — people who have an actual relationship with you, not a company which doesn’t — creating and using a simulation of you, solely for emotional reasons. Whether or not they’re using a third-party service to support this simulation is irrelevant; in the OP I simply assume that companies will offer this to family members who want to do it. I am not asking about the company stealing your presence. I am asking about your own family willingly choosing to persist your presence, and I’m asking about how you feel about that.

That, to me, is novel ground. I did stipulate that it has been addressed in fiction; I made a passing reference to Black Mirror, which Stranger then explicitly linked, and there are many other examples besides.

What’s novel is that these fictional scenarios are now technologically plausible. We must now consider, in reality, the scenario where we “live on” (quotes deliberate) in simulated form after our death. Which means it’s reasonable to ask how we feel about it.

That’s key here. I’m not asking a legal question. I’m asking an emotional question.

If you don’t find an emotional question interesting, that’s fine. But that is the question I’m asking.

I think I would prefer that my family did not create a replica of me after I am gone. I would prefer that they get on with their own lives and allow the memories of me to fade and not dominate their every day; the world will continue to change and my current views will likely continue to look more and more out of date.