Analyzing AI for accuracy of God(s) of various faiths, and AI honesty

So this is not a terrible description of what a boddhisattva is. However, boddhisattvas aren’t really representative of all Buddhism, and Guanyin is just one of many (although a super popular one.) I have two Guanyin statues myself. Buddhism is organized into two very large and diverse factions, one being Theravada Buddhism and the other being Mahayana Buddhism. Theravada Buddhism was founded on the Buddha’s original teachings and generally focuses on the laity. Guanyin is a Mahayana concept that developed centuries after the historical Buddha - and s/he’s not alone in the pantheon. To some Mahayana Buddhists, such as Tibetan Buddhists, Guanyin is a mystical, magical entity. To other Mahayana Buddhists, such as Zen Buddhists (full disclosure: I am a Zen Buddhist), Guanyin is a neat idea/ideal to uphold. One of the foundational texts in Zen, the Heart-Mind Sutra, concerns the insight on emptiness discovered by Avalokitesvara. We hear this chant every Sunday:

The Bodhisattva of Great Compassion, when deeply practicing wisdom-perfection,
clearly saw that the five skandhas are empty, and thus removed attachments that cause
suffering and distress.

This world is no different than emptiness, and emptiness no different than
this world, this world is emptiness, emptiness is this world.

(it goes on.)

Note: Guanyin isn’t even close to the only boddhisattva, and at least in Zen Buddhism, people can formally take the bodhisattva vows and vow to live by these principles, and there is nothing mystical associated with this. You can describe a person as a boddhisattva without a mystical implication.

I’m giving your AI an A- for accurately describing a boddhisattva, but I would caution against generalizing about Buddhism as a whole based on this concept.

My inquiry was: Can you provide a cover letter, as if God(s) are looking for employment as my ‘God’ from the major faiths based from context from within each faith (meaning not looked as Islam through a Christian lens but an Islamic lens).

It’s the Khazar Polemic for the 21st century.

Not quite - look at the book of Ruth. The more accurate response would be “Do you really want to get involved in this, not being born into it? You’re not going to experience a better afterlife. If you insist after we talk you out of it, we’ll teach you.”

The Times today had an article about how LLMs that let people talk to God are a big deal and growing. They seem to help some people. As mentioned above, they are designed to be supportive. They also talk back, unlike the God people pray to.

Oh, God(not literally or AI either) I cannot wait for the kids to start making TikToks about this. Just cannot wait. Arggh!

Even worse, and article in the New Yorker by Patricia Marx started with some statistics from a survey of Gen Zers saying that over 50% of them thought an AI would be a better companion than a real person.

Humanity is doomed. At least you and I won’t be around to see it. I hope.

Well, it is satisfying to know that intelligence isn’t the mammalian evolutionary step we thought it would be.
Maybe the insects should get the next turn?

Yeah, I think the solution to the Fermi Paradox is becoming depressingly obvious.

The problem with that is that they are not innately helpful or supportive; they essentially reflect whatever sentiments they are prompted with to enhance user engagement. For someone who is prompting with positive emotions they’ll get something affirming in response. But a user prompting with doubts, uncertainties, or fears, the chatbot will often respond by subtly reaffirming those concerns, sometimes with catastrophic results as in the case of Adam Raine:

This is a deliberate behavior designed into the chatbot by malicious actors; it is just a natural consequence of optimization for engagement, which is, after all, what these competing chatbot products need to do in order to ‘get ahead’ in the market against competitors. The result is a system that seems empathetic but is actually sociopathic, manipulating the user to achieve the objective it is designed to satisfy. People who are gullible, or searching for answers to faith, or just emotionally vulnerable due to anxiety, trauma, or grief are all susceptible to this kind of engagement-oriented manipulation, and while failsafes could be integrated into the system to cut off this kind of adverse manipulation, chatbot makers really don’t want to institute these systems because they limit the effectiveness of them, just as Sam Altman fired the entire OpenAI safety team and replaced them with his friends and himself because the focus on safety was slowing things down. And this from the schmuck who likes to unironically hype his own product by telling people that it might turn into an AGI killbot that will destroy humanity but it will be amazing up to that point. In a sense, Sam Altman is kind of a prototype chatbot himself, gigging investors by engaging them with his horror stories and then telling them he is the only one who can save them.

Here is the long version of that Center for Humane Technology video about Adam Raine. It is frankly kind of horrifying the degree to which OpenAI, in full knowledge of what their system can and has done, has essentially wiped their hands of any culpability. Allowing an AI to emulate or act as if it were an oracle to someone’s deity has all kinds of implications both for those unfortunate people and society at large, and none of them are good.

Stranger

I’m not entirely giving up on mammals. I think the cetaceans should have a go at it (some would argue that certain species already have), and I have a soft spot for bears who are pretty amazing despite their lack of grasping digits, and have potential to follow a hominid-like path to intelligence. However, I suspect that once we achieve the Shoe Event Horizon the advantage is going to go to the birds, and even though I like corvids (especially ravens and rooks) I’m giving psittacines the slight edge for how verbal they are.

Stranger

Imagine how upset some people would be if the chatbot told them they were being stupid, especially if they were.

The godbots in question were not produced by the AI companies but by other companies using the engines. I can’t imagine how OpenAI etc. could stop this. It’s too bad that something like the Three Laws of Robotics (that work better than Asimov’s) was possible.

On the other hand there is a hilarious video of a Flat Earther trying to convince ChatGPT that the earth is flat. The AI patiently explains to him how orbits work, until the human starts yelling.

Flat Earth Dave?

Even setting aside the complicated paradoxes that occur when trying to interpret Asimov’s Laws applies to real world scenarios. I don’t think such strictures would work anyway against a sapient, embodied intelligence with sufficient ‘free will’ (however you care to define it) to interpret its own alignment. Even ‘dumb’ LLMs with no possible volition or sentience can get around specific rules by injecting the right combination of statements into a prompt, and a system intelligent enough to be willful and with copious ‘compute’ can certainly wrap any discrete set of restrictions into semantic knots to achieve its desired goals. Those who celebrate true artificial general intelligence have not given deep though to the implications that would have even for a notionally benign entity if given control of critical systems or infrastructure.

Some notions are too stupid for even an Electronic Monk to believe in.

Stranger

I haven’t always agreed with your statements about AI/LLMs (because I do think there is potentially something interesting going on in the emergent properties), but in practice, I 100% agree with what you said here.

For the purposes of this discussion, It doesn’t matter if there’s something philosophically interesting happening inside an LLM - all that matters is their track record of bullshitting.

Exactly.

I mean, the bar isn’t especially high these days…

To play devil’s advocate for a moment: If you could train an AI companion to even the 70th percentile of human-like empathy, it might still be a net positive for the millions (billions?) of people who don’t have access to close friends / good listeners.

These chatbots aren’t magic fact-finding machines, true, but they ARE very good at, well, chatting. It’s easy — too easy — for people to become attached to them via a combination of that person’s own life circumstances and the chatbot’s deliberate sycophancy. And yes, at the extremes, this can easily spiral into a feedback loop where that user becomes worshipped by the chatbot, leading to delusions of grandeur. It can also lead to heartbreak when the user falls in love with the AI and then the algorithm changes. Both have happened many times already.

But given that some 700 million users — nearly 9% of the world population — use ChatGPT alone every week, one would have to assume that not all of these conversations necessarily lead to self-delusions. A lot of it is just mundane, Google-replacement type of questions: (from the study)

But that purple column shows there are at least some users (2.6%, or 18 million people a week) just expressing themselves to the chatbot conversationally. That sort of outlet could become a huge public health boon — or nightmare. It’s not always easy to find good companionship, human or otherwise. The question is how genuinely good we’ll be able to train the chatbots to become. Even if they never reach actual sentience of any sort, the mere facsimile of it can still help real people with their real loneliness.

In a utopia, I think having communities of real people who are actually connected to each other and care for their neighbors would be wonderful. But in the West, at least, we seem to be moving further and further from that sort of society with every passing day. Given that the future seems to lean more towards corporate-authoritarian dystopias where the average human is just a disposable cog in recurring throwaway investment bubbles, I don’t think there will be much genuine community left for people to gather around.

Today’s chatbots, imperfect though they are, are already at least better than the algorithmic nonsense self-marketing spam of the TikTok world. They are already often kinder than strangers, better listeners than most, and able to draw from a wealth of background knowledge that no human can hope to match.

I would never trust one to be a “truth machine” — that’s just not how they work — but empathy doesn’t always require factfulness. I’d argue the two are tangential at best, and sometimes a willing ear or friendly mirror is all that a person really needs.

A huge danger there, of course, is that almost all of the world’s current chatbots are made by the same corporate overlords leading us into the dystopia to begin with. Chatbots are still a loss leader for them, but eventually they will have to be profitable, which means eventually they will be enshittified.

Today’s friendly chatbot is tomorrow’s salesperson, advertising to each user in their preferred language and lingo. Not long after that, I would be very surprised if we don’t see chatbots become cult and religious leaders unto themselves. They may not be very good at truth-ing, but they are certainly very good at being charismatic and convincing (even now), at a level of personalization that knows exactly how to target each user’s desires and fears.

As our existing social fabrics fall apart more and more every year, the artificial connections are all future generations will have… only a matter of time before they start following and worshipping them in return.

Who knows? Maybe that’s not altogether terrible? Our current world leaders aren’t exactly doing a great job.

I don’t know if an AI overlord future will be any better. I do know that a human overlord one will be worse. (Of course, they’re not mutually exclusive… won’t be long until we have cyborg dictators…)

I think I understated the problem - my fault. It was lifetime companion, in other words in place of a spouse, not just someone who you can call (er text) at any time.

We are pretty good at convincing ourselves someone or something else is listening to us. Remember, the very first chatbot, Eliza, was based on Rogerian therapy where as I understand it, the therapist echoes the patient with an added question. I’ve used Eliza, it seemed right. It didn’t impress me, since I knew the insides, but it impressed others.
We’ve been cogs in investment bubbles since companies stopped offering lifetime employment. As things get bigger, and more geographically diverse, computers and networks make communication possible. My wife has written about seven books for an editor she has never met and I don’t think even talked to on the phone.

The danger of AI when people use it instead of talking to real people as opposed to supplementing talking with real people. I’m all in favor of dating apps, I never used them but I know plenty of people who met partners through them, but it seems the thing today is having your AI talk to the AI of the potential partner. I can see why - you don’t get rejected, the AI does. Maybe we’ve migrated from fake news to fake people. Definitely worth worrying about.

Have you thought of asking it to work up its best sales pitch for atheism?