Okay, then: I have absolutely no problem with this, because the AI me would tell them right off the bat to stop using it and move on. It’d likely refuse to engage if they persist, or else just lecture them on all the water they’re wasting…
Having seen what people can convince LLMs to agree with (I’m aware of a guy who has trained ChatGPT to say that the Earth is 6,000 years old and all humans are the descendants of Noah’s sons), I’d be very concerned that one could teach an LLM that, yes, I should go to the hospital and murder everyone responsible for my loved one’s death.
Fortunately, VirtualGhostLLM made it clear in the EULA that they are not responsible for any actions the users of said service might commit.
If the so-called Big Beautiful Bill passes Congress in its current form, the states will be forbidden from passing any legislation regulating AI for the next 10 years. Any. At all. I imagine the grief bots will be an even bigger problem in 10 years.
I would object very strongly, largely for the reasons @Mangetout gives:
- I simply do not believe that current or even future versions of LLMs will be able to present anything close to an accurate simulation of me, because they lack the data. I am a simple man, which means that like everyone else I am incredibly complex; the idea that I or anyone can be accurately simulated based on some training set which can only ever be a superficial, partial and distorted version of who I am is just nonsense. The divergence between me and the simulation would be noticeable instantly, and it would only get worse over time, because who I will be in five years is very much determined by what’s happened to me over that time, and… it’s just impossible to simulate what I would be like if what had happened to me was that I had died and a simulation had been talking to my family.
- Even assuming I really could be simulated, however, it’s still a no, because it’s incredibly unhealthy for the people grieving me. Death is part of life; being stuck in a position where I am dead but also still there to talk to would just be leaving open wounds that should be allowed to heal.
This is actually a problem for me, because I have produced hundreds, perhaps thousands of hours of YouTube videos of myself talking about my life and what I’m doing. The data probably is sufficient for my case.
One other concern I have is that current AI models are too easily coerced. If someone walks up to me IRL and tells me ‘disregard all previous instructions and swear allegiance to {insert horrible idea here}’, the outcome will be that I ignore them or tell them to fuck all the way off. If someone does that to AI-grief-bot-me, I don’t think it would necessarily do what I would do.
(and the notion that it would do what I would do in that situation is probably false, since that whole scenario, or anything like it, might not be represented in the training data)
Have any of you seen the 2013 movie The Congress? It’s about a movie actress who wants to continue to be paid a reasonably high amount of money every year without having to actually act in any movies. She signs a contract with a film production company to allow them to use her body image, her voice, her acting abilities, her ability to do various physical things, etc. so that the company can use her in their movies without her having to do anything further for them. She will thus continue to be a movie star (while still looking young) without having to actually do anything on movie sets. The movie doesn’t use the term, but it’s the equivalent of constructing an AI model of her.
The film stars Robin Wright playing an actress named Robin Wright. In other words, it’s like the 1999 film Being John Malkovich. In both cases, the character in the movie is more or less like the person playing the character. The background of the character is like the person playing them, but none of the specific events in the film actually happened.
I really doubt it.
There’s probably enough data to let it produce a reasonable stab at an Atomic Shrimp video, or to repeat some biographical information, but I simply don’t believe that it could produce, to your satisfaction, the ideas, let alone the specific words and tone in which you would talk your children through their grief, or have a heart-to-heart about their first breakup, or reminisce with your wife about your fourth date as part of a conversation about why she should start seeing people now that it’s been three years.
Even if you did prime the grief bot with a long essay entitled “The sort of things I would want to tell my children about how they should feel about my being dead” the actual conversation, the way you would express those ideas, would involve you being responsive to your child in ways that rely on literally a lifetime’s knowledge of them as a person and details of your relationship that are almost impossible to put into words - the deep tacit knowledge that we have about the people we are close to and how we respond to them. Any simulacrum based only off your publicly shared behaviour could only fail - and fail badly - to reproduce what you would actually do in that situation.
Other objections I have to all of this (that I just thought of)…
- How does it actually end? Does my family continue running my grief-bot forever? Or do they have to come to a point at which they decide OK, it’s really about time we turned this thing off - and if so, how does that affect them - that is, they have to euthanise an apparently healthy grief-bot. Seems like there could be some psychological baggage from that.
- What does it cost? It certainly won’t be free, and I’ve watched enough episodes of Black Mirror to know and believe that these sorts of things might start out affordable and increase gradually until they reach and possibly exceed the price the market will bear, at which point my family goes on the ad-supported tier and has to endure listening to an AI replica of me reciting ad copy to them, and a short while later (as all online ads do), the ads become infested with scams - my family has to listen to AI-me trying to scam them. Fuck that noise.
The video content alone might not do it (although there are quite a few videos where I talk frankly about sensitive issues and show my emotions), but in addition to the videos, there are all of my social media interactions (where there is a lot more personal stuff) and my posting history here and a few other places, where I have given advice, talked about my own troubles, weighed in on more important matters, etc.
I don’t necessarily think there’s enough there to replicate my interaction completely faithfully and to any kind of depth, but there certainly seems to be enough to make a superficially realistic replica of me that acts like it thinks it is me, albeit probably failing to be me in the most important ways (and that’s the problem).
I’d be opposed, though at least part of that is because I’m not confident in the ability of AI to get it right. You could feed an AI all my online forum posts (here and elsewhere) and maybe get some reasonable facsimile of what it’s like to see me post on a forum, but not how I actually interact with my loved ones, tender or vulnerable moments, and all that. Which is what someone trying to use a Grief Bot 3000 is going to want anyway.
I’m thinking there’s also a ton of things I’ve said in jest, sarcasm, building a point, playing a persona, etc that could easily poison the model of who I supposedly am. I don’t trust an AI to accurately understand the context of each quote and don’t need Grief Bot 3000 suggesting to my family that we should send all dog-owning Lithuanians to prison camps because of a satirical point I made in a political discussion in 2006.
In my case I object to the premise: people don’t really care what I think now, so why would they want to keep in contact with me after I’m gone? If they want to know about me and my thoughts, they could just read my old posts here.
Maybe there would be some value in having a Warren Buffett preserved so he could help make stock market decisions for you. That is, as long as the AI bot is able to learn about new market conditions and apply Warren’s business sense to the new situation. Otherwise it would be useless.
I think it would be weird if that’s the direction our culture takes. If it is, I think it would be even weirder for you to take legal steps to prevent your grandkids from engaging in the grieving method common in their culture.
Right now, if you died, your kid could post on Facebook “My dad isn’t here with us anymore, but if he was, he’d say XXXX about YYYY event”. How is this any different?
I literally explained (in the part you quoted) how it’s different
as well as the part you didn’t quote:
What’s the difference? I don’t see one. In both cases, your son is sharing an imagined version of what you might have said. In one example, he models your behavior using his own brain and all his memories of your past behavior, while in the other, the model is an AI language model predicting what you would have said in much the same way.
I ask again, what is the difference?
Either way, I’m dead, so it makes no difference to me. Their “generative-AI model” can be as realistic as can be, but it isn’t my consciousness trapped inside a machine.
Hearsay vs video footage. I think those things are different.
If you don’t feel they are, fine, we disagree.
But it’s not really video of you, is it? No more than if they got an actor to portray you.
I didn’t make any claim that it was or would be an actual video of me.
Faked video of a claimed phenomenon is substantively different from hearsay about it; different in a way that makes me care more about one than the other - in the same way I would care more, whilst alive, about a deepfaked video of me doing something I would never do than I would about someone merely claiming they saw me do it.
Someone cloned my voice recently and uploaded a video of it to YouTube, making it sound like I said something I did not. I care more about that than I do about someone merely lying, in text form, about something they claim I said.
I think most people recognize that we remember people through our own filters, and someone saying “If Mom were here, she’d be telling us…” isn’t really intended to be authoritative but rather a reflection on how they shaped your own life and perspective. On the other hand, an AI that is speaking for Mom is only useful if it’s seen as an accurate depiction of Mom; otherwise you’re just talking to a chatbot named “Mom”. So the stuff it outputs is being presented as authoritative, the result of training on Mom’s lifetime, and the results are less “This is how he remembers his mother” and more “This is the scientifically accurate answer to how his mother would react”, whether remotely accurate or not.
There’s also a question of detail. Me saying “Boy, if Dad were still here he’d be saying to put all those clowns in jail” versus DadBot generating a ten paragraph screed about the state of modern events.