Topic prompted by this story, in conjunction with another recent story, which I’ll mention further down.
The basic upshot:
- It’s now feasible (and has been for a little while) to train a generative-AI model on the available information about a specific person and create an interactive simulation of that individual. Obviously, the more information, the better.
- One emerging application of this tech is the “grief bot”: when someone dies, you take all available material on that person and build a digital ghost you can interact with, as if they never died. How interactive the result is depends on how much work you’re willing to put in.
- If you don’t want your survivors to create a virtual resurrection of you, your legal options for enforcing such a prohibition are extremely limited. A celebrity whose image and presence are monetized as an asset can pass that asset to their estate, and the family can exercise control over it; an ordinary person has no such protection.
Instinctively, it seems obvious that these “grief bots” have enormous potential to be psychologically unhealthy, interfering with the natural process of loss and acceptance. This is especially true when you consider how addictive the current generation of chatbots can be, and how intensely they can distort reality for some people. This article is paywalled, and I can’t make a gift link, but if you’re able to read it, I highly recommend you do. It’s not about grief bots specifically, but more broadly about how easily this tech can lead vulnerable people into a dead end of wholly manufactured beliefs. Prepare to be horrified.
Essentially, the AI companies are aware that this kind of thing is happening, that some people are disappearing into delusion and outright madness, but they’re doing very little to intervene. From a bottom-line perspective, captive users are good. So if they can hook grieving people with digital ghosts of their dearly departed, we should expect them to do so.
The legal issues, at the moment, are wide open. No doubt we’ll see volumes of case law and new legislation over the coming decade, probably triggered by entrepreneurial overreach that hits people’s gag reflex (imagine going to a funeral and seeing a screen propped on the casket, running a simulation of the deceased that thanks you when you “say goodbye”), not to mention the predictable horror stories (countdown to the news item in which a scam artist uses a personal simulation to convince a grieving widow that her husband is speaking to her from beyond the grave and wants her to convert all her money to crypto).
I don’t intend this thread to be a general debate on whether or not this is a good idea. The topic has been well-plumbed in fiction, from Neuromancer to Black Mirror. The idea is not new, and the risks and pitfalls are not a secret. We nevertheless continue cheerfully building the latest Torment Nexus, so this is going to happen, whether we like it or not. I mean, if you want to talk about that, it’s fine, I guess; I can’t stop you and I’m not trying to police the discussion.
Rather, my hope for this thread is to ask each of you, individually and personally, what you think of the prospect of being “kept alive” for your loved ones to interact with after you’re gone. Current and future legal issues aside, inevitable horrifying news items aside: how do you feel when you imagine your friends and family sitting in front of a computer, interacting with an AI version of you that exists solely so that some version of your presence can persist after your death? And if you find the idea unpleasant, will you take the concrete step of talking to your estate planner about your (currently very limited) options for prohibiting this use of your life records after you’re gone?