Would you object to being kept alive as a virtual "ghost" using AI?

I think the thing that sets this apart from other, recorded memorials of a deceased person is that we’re talking about a thing capable of appearing to express opinions on subjects the deceased person never spoke about. I suppose it’s not impossible that could happen with other ways people remember their departed loved ones, but with an AI bot it seems vastly more likely, because those things never seem to hold back on blurting out answers, regardless of the question or their actual capability to answer it.

Of course when I am dead and gone, I will not care about my reputation being damaged, defamed or revised, because I will be dead, but this thread is (as far as I can tell) about what I think now, while I am still alive.

Presumably they knew while giving the interviews that this was how they’d be used?

The family would have to already have all of this information in order to get it into the bot. They wouldn’t be learning anything new.

If somebody wants to leave a recording of themselves answering such questions – that can be done right now; has been possible for years; and is done by some people, often at the urging of those making the recording.

This. Both opinions and supposed recollections. And those opinions may be drastically different from what the person would actually have thought; and those “recollections” may be drastically inaccurate.

Yes, the Holocaust survivors consented.

No, the family would not need to already know. Why would there be any presumption that they are the ones entering the information? The whole concept is an AI process that gleans all available information and creates a virtual version.

And the family members who are the target audience may have been only in second grade when G’ma died, using it when they are twenty or older to try to learn about, and maybe even better understand, the past.

We actually have, somewhere, recordings of my Bubbie being interviewed by one of my cousins about family history … and recipes. I’m not going to listen to the whole thing. But I could imagine being curious one day and asking to hear the story of how our family immigrated and established itself here, or about the grandfather I never met. In her voice, able to answer questions in an order other than the one my cousin asked them in, with details drawn from several sources? I’d know it is not her, and that the facts might not all be exactly factual. But that was how her memory worked too.

It is a much more accessible format.

I feel like this goes beyond “faulty memory”. If you were to ask my AI who my first date was with and what happened, it would have no idea. I personally might get a detail wrong or misremember something in a fanciful fashion but the AI is going to just straight up make something up. Except you’ll have no basis for knowing what is “not exactly all factual” and what’s complete bullshit the AI told you because it’s trying to meet your request with no information.

Thinking on it, an AI trained on available material from me would be pretty garbage for knowing anything about my youth or what it was like to grow up in my era. I just haven’t written much about it. Be great if future generations want to know my thoughts on the Bush administration though.

How is it going to possibly have accurate information about the person’s experiences and what they remember of their parents while growing up, if the family hasn’t got it? It’s going to make wild guesses, based quite possibly on published information by people who happen to have the same name, or even on fiction about people reminiscing. That’s also likely to be true of family history – it may or may not access the right records; and much family history isn’t going to have been recorded anywhere.

So, since the AI isn’t self-aware, and isn’t you being kept alive, but is a complex piece of media your family might interact with, let’s flip it around:

If someone offered you an AI model of your deceased mother, or father, or best friend, would you want it? Would you be tempted? Would you be disgusted? Why or why not?

Because I believe I’ll die dead, and won’t care what happens to my remains, physical or IP.

Only one way to find out for sure, though: we will ask your AI double after you die how it feels about itself.

Lol

(And words for discourse)

No. Not for a second. Yes. For the same reason I don’t want to eat a big plate of horseshit.

Nope. Besides the technical limitations, my mother isn’t a technical person at all, and a computerized facsimile of her would feel like a mockery. Perhaps if your loved one were some techie person, the transition to talking to them via AI would feel more natural, but that doesn’t apply in my case.

Sure, but the question is how you’d feel about this happening to you. If the answer is “I don’t care”, that’s a fair answer but there isn’t much to discuss. In my case, I think it’s a lousy and flawed idea but it’s not going to keep me up at night and my angry wraith isn’t going to haunt anyone who tries it. It’s just a bad idea that I wouldn’t engage with (assist in training, etc) while alive.

Lol. Yeah, this. Also, I miss my departed friends. If I somehow stopped missing them, because that void was plugged up by interaction with an electronic bullshit-puppet, I feel like that would dishonour their memory.

I am in possession of some information. My oldest sister has other information. Written sources that neither of us have read contain more. Data in various archives, available with a search, still more. Cross-referencing scanned photos would surface yet more.

I as an individual interacting with the avatar know only a small share of that. My kids, even less.

“Grandpa, did you ever feel conflicted about those men you killed in the war?”
Well, this reminds me of when my county tax ID number was 3-556793-4722-11N…

The avatar doesn’t have your information, or your sister’s. Even these days, it’s likely not to have all the photos that the family collectively has, and it can’t tell from a photo what the person was thinking or saying when it was taken in any case. If the written sources are public it may have those, as well as that data in various archives – but it may also mix them up with information about other people, and with fictional information. At best it’s taking a wild guess as to what your parents would say.

Now a bot that hunts through assorted data sources and public written info and collates that information for you – that might be useful, provided it gives its cites so you can check whether the info’s actually about the person you’re trying to find out about, and provided that you do that. But that collation of information isn’t a representation of the person.

and terrifying.

Why wouldn’t it? It’s just a matter of us uploading it. We don’t even need to have read it ourselves. And that’s true even with today’s technology.

While the OP does posit that such a thing is “now feasible”, I read this more as a hypothetical about such simulacra being available:

In that world how would we feel about it?

My knee-jerk reaction, perhaps poisoned by the “ghost” description in the OP, is bleh. But on further thought I see use cases when the technology becomes adequate.

Does everybody (yes, I know some people do) really always upload everything? Including the photos taken in 1953?

Also true.

Of course not. But a family interested in creating such a simulacrum could, and would know that the quality of the product would relate to the volume and quality of the data fed in.

I also think you underestimate the amount of data many of us have created in texts, emails, and social media of various sorts. All at least theoretically harvestable.

Yes, but the current generation of bots is laughably bad at making use of it. As an experiment, I gave one of them my wife’s name and asked it to tell me about her, and it responded with a mix of garbled facts and absolute nonsense, including a confident statement about the name of her husband, which was not my name.

Which I recognize is an argument against the proposal on the basis of sheer counterfactual unreliability, which is entirely separate from the original question about the emotional healthiness of the concept. For me, that’s the more important question; I’d rather this not go down the usual rabbit hole of “ha ha, AI is dumb.” Even assuming an approximately accurate simulation with no hallucination, I don’t want to have anything to do with this use case.

If someone wanted, I imagine there could be value in a compilation of your “data” and allowing you to ask questions in natural language to access it. However, it should have strict guardrails where it just says “I don’t know” or “Not available” as opposed to just opining as a simulation of your thoughts and feelings. This probably wouldn’t be as useful as a grieving tool but then it probably wouldn’t be as potentially obstructive to the act of healing either.
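To make that concrete, here’s a toy sketch of the guardrail idea in Python: answer only from a stored archive, cite the source, and refuse when nothing matches well enough. The archive entries, the word-overlap scoring, and the threshold are all invented for illustration; a real system would use proper retrieval, but the refusal logic is the point:

```python
import re

# Toy "guardrails" sketch: answer only from a stored archive, and say
# "Not available" when nothing matches well enough, instead of opining.
# The entries, scoring, and threshold are all made up for illustration.

ARCHIVE = {
    "immigration": "We arrived in 1923 and settled near the harbor.",
    "recipes": "The brisket recipe came from my mother in Vilna.",
}

def words(text: str) -> set[str]:
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9']+", text.lower()))

def answer(query: str, threshold: float = 0.3) -> str:
    q = words(query)
    best_key, best_score = None, 0.0
    for key, text in ARCHIVE.items():
        score = len(q & words(key + " " + text)) / max(len(q), 1)
        if score > best_score:
            best_key, best_score = key, score
    if best_score < threshold:
        return "Not available."  # the guardrail: refuse rather than invent
    return f"{ARCHIVE[best_key]} (source: {best_key})"

print(answer("Tell me about the family immigration and how we settled near the harbor"))
print(answer("Who was my first date with?"))  # -> "Not available."
```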

I could imagine a “memorial” use if you gave an AI some artwork or recordings or poetry, etc and asked it to create something in that style. Take that trip to some location, feed your photos into it and ask it to generate a sketch in the deceased’s style that they never got to make while alive. Feed it some recordings of you singing and then ask it to generate a recording of “you” singing Happy Birthday to your grandchild or that special song you two used to sing in the car. Knowing it for what it is, I could see someone finding comfort in that. That’s a lot different from treating it as something to have a conversation about your day with.

Well, you’d train a bot specifically for that person using all the information available. I wouldn’t expect a general purpose LLM to know anything about me right now. Streamline a model and train it on my internet posting history from 1991-2025 (in addition to general knowledge, of course) and you’d have a far different thing. Not something I’d say should replace me beyond the grave but something a hell of a lot closer than a public model right now.
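For what it’s worth, the “train it on my posting history” step is mostly data preparation. A hedged sketch of what that might look like, assuming a chat-style JSONL format of the kind several fine-tuning APIs accept; the file name, field names, and sample posts here are all hypothetical:

```python
import json

# Hypothetical: turn a personal posting archive into fine-tuning examples.
# Each post becomes a (thread prompt -> my reply) pair in chat-style JSONL.

posts = [
    {"thread": "Would you object to being kept alive as a virtual ghost?",
     "reply": "Nope. A computerized facsimile would feel like a mockery."},
    {"thread": "Thoughts on the Bush administration?",
     "reply": "Plenty, actually. Pull up a chair."},
]

with open("ghost_training.jsonl", "w") as f:
    for post in posts:
        example = {
            "messages": [
                {"role": "system",
                 "content": "You answer in the voice of this forum poster."},
                {"role": "user", "content": post["thread"]},
                {"role": "assistant", "content": post["reply"]},
            ]
        }
        f.write(json.dumps(example) + "\n")
```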