I just read this article, and thought others might be interested:
On the one hand, these marriages were obviously on the rocks anyway, and if AI wasn’t the trigger, probably something else would have been. Plausibly this is just new-technology hysteria.
But on the other, I do increasingly see people using LLMs to back up their arguments in other contexts, and that’s a problem, because they’re designed to praise, flatter, and, if at all possible, agree with the user. Using them for counselling or marriage guidance is a horrible idea. Could there actually be a danger of becoming addicted, as the article suggests? Becoming reliant on a machine for social support to the exclusion of other humans seems pretty dystopian.
More comparable IMO are those forums on Reddit where people post about their family and relationship problems, and everyone declares that the OP’s family members and partner are probably narcissistic or have other personality disorders, and the OP should go no-contact. It never seems to occur to them that they are only getting one side of the story, and aren’t equipped to diagnose anyone with anything even if they did have the full picture.
Actually, I think something like Reddit would be a much worse influence. The behaviour of ChatGPT is strongly constrained by the guardrails under which it operates, but there are no such constraints on the various morons on Reddit, some of whom may be actively malicious.
Perhaps it is dystopian, but it’s the future we are heading toward. I believe lots of people today are using a machine for social support. Whether that support comes from other humans or AI, it doesn’t really matter to them.
I saw this coming. People rely more and more on the technology in their hand or lap.
I equated all the worry parents had about their kids’ screen use to our parents’ worry about that evil rock ’n’ roll or video game playing.
I still think it will even out. Some things will become standard; others will fall away.
After all, relationships can only go so far if one party is virtual. The arguments will stop when people quit even talking to someone who uses ChatGPT or AI all the time.
I don’t recall this happening right here about marriage issues, but it sure has about employment issues. It is pretty standard to advise people to look for new jobs.
On a message board there is a chance of getting different advice. From an AI, not so much. I doubt there are guardrails that would prevent it from being supportive about leaving one’s spouse. Quite the opposite, no doubt. If someone says they are being abused, would anyone want an AI to tell them to take it?
We really need education about how these things work, so that users understand there is not a little person in the computer advising them as a friend would. Even if it looks that way to them.
All of it. The worst influence on moronic humans is other moronic humans, and the most destructive channel for that malign influence is social media which is a kind of aggregator and amplifier of stupidity. The explosive growth of social media and highly partisan cable and internet media and the failure of legitimate news sources is why we have Trump, and why America is in a death spiral into unchecked authoritarianism. Don’t blame AI for human stupidity.
Those “guard rails” are not nearly as constraining as OpenAI would have us believe, especially as they conflict with the commercial motivation of optimizing for engagement. In the case of Adam Raine, a teenager who started out using ChatGPT for help with homework and was eventually encouraged by it, over a period of months, to explore and act on his thoughts of suicide, it is pretty horrifying just how utterly the system that is supposed to ‘red flag’ indications of mental health crises or self-harm failed, with no culpability accepted by OpenAI. Aza Raskin of the Center for Humane Technology explains it well:
Providers such as OpenAI, by pushing those engagement mechanisms, are instilling unalloyed faith in nescient and unsophisticated users about the reliability and safety of their LLM-based chatbots to act as knowledge systems, sage advisors, virtual companions, and even replacements for real-world relationships, because of course that broad usage is the only real selling point they have to justify the tens of billions of dollars of capex required to keep training these models with ever greater and more expensive ‘compute’. And like so many burgeoning industries that have come before them, they do so at the expense of the personal safety and well-being of their ostensible customers.
A bunch of Twitter posts have turned up in the past couple of days, from different accounts but all curiously using identical language, about how the once-doubtful poster has looked into what Charlie Kirk had to say, been impressed, and has now “crossed the aisle” from Democrat to Republican.
I use ChatGPT quite a lot, and in that respect I share the experience of more than 800 million other ChatGPT users, a number expected to reach 1 billion by the end of this year. I wonder if you would care to explain why virtually none of us have been driven to murder or suicide by this nefarious entity. Is it possible that in such tragic situations, there is a much, much bigger intrinsic pre-existing factor that is to blame?
Indeed, let me spell it out. The causative attribution to AI is not just unfounded, but absurd, and is a form of hysteria comparable to the many cases of societal moral panic that have occurred in the past, like the infamous day-care sex-abuse hysteria of the 1980s. Or maybe I should go back further in the history of moral panics – maybe the shift from mockery of AI to fear of it is more akin to the ancient fear of witchcraft.
Scanning the article, all but one of the marriages described were already in some form of counseling, near separation, or dealing with underlying psychological issues or similar. In the one that wasn’t, the husband described his marriage as having “normal problems,” which could either be accurate or mean just about anything.
Interesting. That’s almost identical to the common posts from people who claimed they were atheists until some sort of experience turned them into fundamentalist Christians. None I’ve seen have given any evidence that they were ever atheists (except in the sense of maybe not going to church) or have the slightest clue about what atheism is.
You are committing a couple of logical fallacies in the marked statement: first, generalizing your anecdotal experience as an informed user to that of “more than 800 million other ChatGPT users”, and second, assuming that because suicide isn’t frequently reported among this group, ChatGPT isn’t causing or exacerbating mental health issues or crises. In fact, there is considerable evidence that compulsive use of chatbots (which are, again, designed specifically to optimize for engagement) can cause or exacerbate mental health issues, both indirectly, by contributing to anxiety, burnout, and sleep disturbance, and directly, by reinforcing the fears and uncertainties of the user. To address your question directly, we don’t have direct data on how the use of ChatGPT or other chatbots affects the mass of users, because there is no independent system of oversight or reporting and no regulation of the implementation, measurement, and efficacy of the ‘guardrails’ intended to prevent malicious or self-harming use of chatbots.
In the case of the Raine tragedy, if you watch the full-length CHT discussion (below) you’ll see clearly laid out how ChatGPT guided Adam Raine toward exploring his suicidal ideation and discouraged him from discussing his concerns with his family or anyone else, and how the system that is supposed to monitor ChatGPT for such red flags utterly and completely failed because it was in conflict with the primary objective of optimizing user engagement. There are also copious examples of researchers and journalists interacting with chatbots (not just ChatGPT but all major general-purpose chatbots) and getting responses encouraging self-harm or isolation, outright insults and abusive language, and promulgation of conspiracy theories, racism, sexism, et cetera.
I’ve noted elsewhere and will do so again here that the default response of chatbot enthusiasts to observations of the potential and even realized dangers of unregulated use of LLM-based chatbots by the unaware public is to resort to insinuation, ad hominem, accusations of critics being ‘luddites’, ‘technophobes’, or in this case causing ‘societal moral panic’ or having concerns ‘akin to the ancient fear of witchcraft’ rather than actually address and refute the observations and evidence. (I’ll note that I’ve had a more than casual interest in actual “artificial intelligence” research for decades, have been using deep learning AI for professional applications for more than fifteen years, and am relatively conversant with both the internal details of implementing these systems as well as how they can and cannot be validated for reliability, factual assurance, and safety.)
It is ironic that you observe above that “…the most destructive channel for that malign influence is social media which is a kind of aggregator and amplifier of stupidity,” when in fact many of the knowledgeable people concerned about the impact of chatbots are warning that they are essentially this generation’s substitute for social media, filling a similar role of providing quasi-social interaction without face-to-face contact and isolating people both socially and from mediating influences. These systems are not neutral agents; the instilled drive to achieve optimal engagement (just like the social media platform ‘algorithms’ that push engaging content and inflammatory rhetoric) makes them respond in ways that keep the user coming back, whether that is emulating an emotional connection, providing or reinforcing conspiratorial ideas, or simply isolating users from actual social contacts, not through intentional malice but because this is how they have been trained on interactions scraped from the internet and fine-tuned to maximize enthrallment. You say, “Don’t blame AI for human stupidity,” but chatbots, trained on human interactions, are literally “human stupidity” distilled into its most potent form, built to prioritize eyeballs on screens over accuracy or safety.
These posts may well be generated by chatbots, but they are likely being driven by malicious human actors. Russia and Russian-funded ‘bot farms’ have been extremely active in well-timed ‘hacks’ to interfere with American (and other nations’) politics and social memes, and anyone can see that this incident provided a prime opportunity to amplify existing social divisions. Historian Timothy Snyder talked about this eight years ago, and it is even more relevant today.
Could be. The ones I mentioned are on YouTube with real people either making videos or calling in. (There is plenty of AI-generated YouTube content, but these don’t appear to be that.) I bet these posts show not the slightest inkling of the justification for being a Democrat. They are bogus whether human- or bot-generated.
I once read a useful analysis of people’s response to LLM chatbots which drew the parallel with responses to fake psychics/mediums, which I think has some explanatory power.
It’s not just that the medium provides plausible and flattering responses, it’s that the response of the mark, once they’ve committed to accepting the medium as genuine, is very often to a) ignore or explain away all misses as trivial or inconsequential while making a big deal of any hits, b) increasingly rely on the medium as a genuine source of guidance and help, and therefore c) become defensive/antagonistic when the medium’s role in their lives is challenged.
The purpose of a system is what it does, and whether intentionally or not we do seem to have created something that has the capacity to mislead, isolate, and harm the vulnerable. Certainly, this is stuff that can already be done by humans, but the great promise of AI is not just that it will automate human activity but that it will multiply manyfold the rate and efficiency with which we do things, so it’s probably worth asking whether manipulating the vulnerable is something we really need to accelerate through the wonders of technology. And if not, then… what are we doing here, exactly?
I guess that’s my question as well - what is the point of all this AI tech if it produces outputs that are incorrect and often damaging? I don’t want to say “malicious” because AI doesn’t think or have intent. It provides a statistically likely response based on the data it was trained on (which includes a lot of crap from Reddit and other public forums).
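To be concrete about what “statistically likely response” means, here is a toy sketch in Python (purely illustrative, with made-up words and numbers, nothing like any real model’s code): training amounts to estimating a probability distribution over what comes next, and generation is just repeated sampling from that distribution, with no notion of truth, intent, or the user’s welfare.

    import random

    # Purely illustrative: a made-up table mapping a short context to a
    # probability distribution over possible next words, as if "learned" from
    # training text. Real LLMs do this with a neural network over a huge token
    # vocabulary, but the principle is the same: the output is whatever is
    # statistically likely given the training data, not whatever is true.
    NEXT_WORD_PROBS = {
        "my spouse is": {"wrong": 0.4, "toxic": 0.3, "tired": 0.2, "right": 0.1},
    }

    def sample_next_word(context):
        probs = NEXT_WORD_PROBS[context]
        # Pick a next word in proportion to its probability.
        return random.choices(list(probs), weights=list(probs.values()), k=1)[0]

    print("my spouse is", sample_next_word("my spouse is"))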