I don't like it at all. Not because it’s my likeness, but because I feel it’s bad for them. Psychologically unhealthy. I’d feel the same if they were doing it to Grandma or a dead child instead of me.
Sure, but again, since the model could or would say things I’d never say, that would not be a “resurrection” of me in any meaningful way. It’d be the equivalent of putting my face on a Trump AI and calling it a day. So it completely loses the personal connection that seems to upset people more than usual.
I think I’d feel annoyed about it, because they ought to know that isn’t me.
You want to try to talk to me after I’m dead? Go talk to a cat, or to a tree, or to a field. They’re not me either; but they’re a lot closer than a computer is. And a lot less likely to say something I wouldn’t say.
I’m not in the least convinced that any AI would do that; or would do/say anything else that I actually would.
Also this.
I’ll be dead. I, as an individual, will have gone back into the rest of the universe. My molecules and my works and my love are part of that universe. And lots of those parts need attention.
Go pat the living cat.
What if your relatives go to a medium to commune with you?
I don’t really see the point of the question.
It’s someone saying what you supposedly would be saying with (to anyone who believes in spirits and mediums) even more authority than an AI-generated video.
They don’t have more authority though. If someone chooses to believe the medium, that’s on them but it’s not because the medium has any actual basis. As mentioned earlier, you might as well talk to a cat. If a person wants to believe I’m speaking through the cat, that’s also on them and equally plausible as the medium channeling me.
I’m not going to worry about every increasingly extended edge case (“What if Mad Dr Necromind uses his Brain-o-Meter over your grave??”) nor is that a reason for me to be any more accommodating about an AI trained on me as a stand-in for my thoughts.
Before we descend into various what abouts, please would you be kind enough to say whether or not you are satisfied with my earlier explanations as to why I find the prospect of an AI grief-bot to be a different thing to the potential personal musings of a bereaved relative? You asked quite aggressively about this, then didn’t acknowledge my response.
That is: do you recognise there is at least a continuum of different things being discussed here, potentially running from undesirable through neutral to desirable? And that, although people might struggle to draw a bright line on that continuum, they’re perhaps more likely to have a feeling for which things are further toward the ‘undesirable’ end than others?
If everything is the same as everything else, and so should all be treated the same, then we have no common ground for discussion.
I find it difficult to imagine the MfM griefbot being healthy for my family. But if it was, I’d be OK with it. Life is for the living.
To anyone who believes in spirits and mediums, there’s no “someone saying what you supposedly would be saying” about it. It’s someone saying what you are saying, as if they’ve got you on the other end of a telephone.
The medium thing is really just asking “Would you be okay with someone taking advantage of your family’s grief and scamming them after you’re dead?” There’s no difference between someone saying they’re a psychic who can speak to my ghost or someone claiming they have an AI of me (which they don’t) or someone writing a fake diary of mine and passing it off. No, I wouldn’t be okay with that but then I also couldn’t do much to stop it from my hole in the dirt.
That wasn’t really what I meant.
I think somebody actually could get closer to me (or rather, the memory of me) by talking with a cat. (Presuming, of course, that they weren’t expecting the cat to answer in English with my voice. Part of the problem with a computer is that it could be made to do that.)
So I meant “you might better talk to a cat”; not “you might as well talk to a cat.”
Fair enough. Ranking spiritual medium below cat works also.
I’d like to think I’d talk to my family about this as the technology became available. If they thought that they would want to have some virtual simulacrum of me to interact with for some period of time I’d want to understand why. What were they hoping having that available would accomplish for them?
How I’d feel about it would be based on that discussion.
They’d know it was a very rough and superficial simulation of me. Not dead dad talking to them, and likely a less reliable guide to how I’d actually have responded than their own memories. They are a fairly psychologically astute gang though, and if they had reasons to have fake me hanging about, well, I’d hear them out. The bias would be to let it be their choice. But my guess is none would want that.
The concern is their well-being.
Different of course if the simulacrum has sentience. Any possibility of such. Even if it isn’t exactly my sentience.
I wonder if anyone was creeped out by photographs of dead relatives when that was a new technology.
It’s a big world, so probably, but there was already a long history of capturing likeness in painting or sculpture, so photography was a variant on an existing theme. I don’t think there’s a strong existing precedent for “your thoughts and personality as a computer AI.”
Closest I can think of would be revisiting works you made such as diaries, artworks, etc. Not really the same ballpark though in my opinion.
But by the time griefbot simulacrum you is available there will be licensed simulacrum celebrities and even historical figures available. Heck there already are to some degree - the Holocaust museum has interactive versions of victims based on many hours of interviews.
And that may be a justification given by family: an interactive version of a grandparent able to answer questions about family history, about their experiences, about their parents, about growing up, about perspectives from a past viewpoint. Why have only famous people or vital witnesses? I never met either grandfather; it could be interesting to have a conversation with an avatar that was a reasonable approximation of either of them.
That’s interesting, and closer than anything I could think of. Still, it relies on canned responses to give the illusion of natural conversation, versus the sort of freedom (and potential for error) an AI would imply. Cool tech though.
Since the Dimensions in Testimony interviews are canned, there’s really nothing there that couldn’t be replicated by just watching the entire video interview the responses are culled from, reading a diary, etc. The visual experience may be different, but there’s no new information to be gained simply by having your questions hit a virtual index and pull up a planned response. It’s neat and engaging, but doesn’t offer the same level of interaction as an AI would, or the same temptation to use one to avoid the pain of loss. That said, everyone copes differently, and for someone who refuses to get rid of a closet full of clothing, the difference between 5,000 canned responses and infinite generated ones is perhaps academic.
I’m not fully convinced by the unhealthy argument. But even if it is, it’s their choice, not mine. I don’t get to tell people how to grieve, even if I’m the person they’re grieving.
You will get to if you’re an AI!