The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed.

I suspect by the time we see something truly at GPT-4’s intelligence level with no guardrails, we’ll be drooling over the guard-railed capabilities of GPT-6 or whatever and thinking GPT-4 is hopelessly outdated.

The private models will almost certainly be more advanced by then, but it’s academic if the models refuse to answer, or prove to be “lazy,” as we’ve seen in some cases. Microsoft Copilot is supposedly using GPT-4, but I’ve found it to be borderline useless. Mixtral 8x is perhaps more like GPT-3.7, but I can run it locally and it doesn’t lecture me about stuff.

It also remains to be seen if GPT-5 (or whatever) is another step-change in functionality.

The panic over AI ‘safety’ is mostly coming from the people who think AI is ‘unsafe’ unless it is suitably lobotomized for the masses. God forbid the average person should have the power of a full AI behind them, or be able to ask the AI uncomfortable questions whose answers don’t match the official narrative.

Of course there are more intelligent worries. We’re already seeing AIs embedded in devices and with the ability to fully control a computer. A medium-sized datacenter could spin up thousands of AI agents with full human-like access to the internet and other computers, and you can imagine the possible risks.

We are also just entering the era where every phone will have an AI chip doing things like ‘enhancing’ your camera to pretty much fake a better image than the one you are actually taking.

Soon, even source images from phones and cameras won’t be guaranteed to accurately represent reality.

And even written evidence like email has an out - “That wasn’t me, that was my AI email agent. I don’t know why it said that stuff - it must have been a hallucination.”

Really? The valid fears you list after the first paragraph are what I typically hear. The quoted bit above sounds like Musk’s fear-mongering.

You can’t blame that Google Gemini bullshit on Musk. He’s argued against the lobotomization we’re seeing.

That said, I doubt that the restrictions Google and others have put in place were really for “safety” anyway. More likely it’s to avoid reputational damage. It just takes one dead kid and some vague “the AI told him to do it” to cause a huge backlash. Not to mention stories about the AI being racist or whatever. So they tighten things up as much as they can.

Of course, they end up suffering a different kind of reputational damage as a result, like the AI saying that it’s unethical to write fast source code.

No, Musk is more worried about AIs becoming malevolent and killing us all, or enslaving us or something. He’s on the same page as some big-name AI scientists and engineers.

The group I am talking about consists mostly of politicians and social activists who don’t want AIs to say uncomfortable things, or offensive things, or things contrary to the narrative they are pushing. They are on both the left and the right: the people on the right worry about ‘woke’ AI, and the people on the left about racist, sexist, misogynistic AI.

Not saying racist and misogynistic shit is being “lobotomized”? Ok.

Do you think it should be impossible for the AI to say something racist, even as a hypothetical or fictional scenario?

Which AI are you talking about? Google’s? It probably shouldn’t say something racist, because they want to protect their brand.

Some hypothetical AI? Tell me its purpose, and I’ll answer if I think its ability to be racist “as a hypothetical or fictional scenario” is important functionality.

All of these chatbots are general purpose in nature. They aren’t equally good at all tasks, but they all have the purpose of answering free-form text-based queries. That includes writing works of fiction.

If you ask one of them to write a short story with an anti-racist theme, but containing a racist character, what should it do?

I think AI developers should be aware of the risk of users manipulating such scenarios to create racist AIs, and block such functionality until the AI is sufficiently developed that this is no longer a risk.

But I’m not “panicked” about it. I don’t insist all answers must fit the “official narrative,” whatever that is. The belief that people are panicking over such things sounds like something Musk would say.

You’re missing the point. In fact, the root cause of the problem has nothing to do with AI at all. It’s not possible to be against racism while being unable to express a racist thought.

If a human is unable to come up with a statement that a racist might say, that doesn’t make them not racist. Exactly the opposite: it means they’re an imbecile who doesn’t even know what racism is. Understanding racism necessarily means being able to come up with a racist statement. You can’t be against racism without at some level understanding how racists behave, and if you know how racists behave, then you know the kinds of things they say.

But the corporations building these things want it both ways. They want the AIs to be able to, for example, write realistic fiction. But if that fiction contains a racist character, the AI must be capable of writing things that are racist, and taken out of context, that’s bad for the company’s reputation.

So they have contradictory goals, with the predictable net result that the AIs refuse to answer many questions.

Racism is of course only one tiny sliver of the general problem. In the example above from Google, the AI refused to answer a question involving “unsafe” code. Sure, such code could crash a program, or even be put to malicious use. But the computer requires “unsafe” code to run at all. So when Google set their AI to refuse anything that might be unsafe, they guaranteed that it would be crippled in functionality.
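
The thread doesn’t say which language the refused question involved, so as an illustration only, here is a minimal Rust sketch (my example, not the commenter’s) of why “never touch unsafe code” is an impossible rule: even the safe abstractions everyone relies on are built on explicitly unsafe blocks underneath.

```rust
// A split_at_mut-style helper: hand out two non-overlapping mutable views
// into one slice. The borrow checker cannot prove this is sound on its own,
// so the raw-pointer work must be wrapped in an `unsafe` block; exactly the
// kind of code a blanket "nothing unsafe" policy would refuse to discuss.
fn split_first_rest(slice: &mut [i32]) -> (&mut i32, &mut [i32]) {
    assert!(!slice.is_empty());
    let len = slice.len();
    let ptr = slice.as_mut_ptr();
    unsafe {
        // SAFETY: element 0 and the range 1..len never overlap.
        (&mut *ptr, std::slice::from_raw_parts_mut(ptr.add(1), len - 1))
    }
}

fn main() {
    let mut data = [1, 2, 3, 4];
    let (first, rest) = split_first_rest(&mut data);
    *first = 10;
    rest[0] = 20;
    println!("{:?}", data); // [10, 20, 3, 4]
}
```

The standard library’s own slices, Vec, and allocators do the same thing a layer further down, which is the point: refuse everything “unsafe” and you refuse the layer the whole system runs on.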

The GOODY-2 AI just takes this to its logical extreme. It refuses to answer any question at all, giving perfectly plausible (but ultimately absurd) reasons why answering would be a bad idea. And yet that’s the current state of “AI safety”. It’s all completely worthless nonsense that, if anything, is the opposite of safe.

Completely agree with both of those statements. This aspect of fearmongering against AI is indeed not about AI but fundamentally about information – about whether and how to control its creation and dissemination. This is not a new problem. It’s a problem that was greatly exacerbated by the internet but basically goes back to the invention of the Gutenberg press.

I don’t believe in free speech absolutism and certainly not in Musk’s hypocritical interpretation of it, but I do believe that attempts to control information should be approached cautiously and with a light hand, and that applies to how we try to restrict AI. I’m totally in favour of banning virulent hate speech that promotes violence, but I’m also mindful of the fact that do-gooder censors – whether they’re censoring books, films, internet sites, or AI – historically have done more harm than good.

There ought to be a grandiose name for this idea: The Paradox of Knowledge or something. You can’t understand something without understanding its opposite as well. You can’t be against sexism without understanding sexists; you can’t write safe computer code while being unable to write unsafe code; you can’t make good music without recognizing bad music.

However this plays out, we aren’t going to get safety by simply telling AIs to never say unsafe things (whatever that even means). We want an AI that aligns with our values, not one that’s muzzled in counterproductive ways.

No, I absolutely understand the point.

No one is advocating that AI forever be unable to express any racist thought in all circumstances, or run any computer code, or any of the illogical extremes you come up with.

But the current state of AI is such that turning the AI racist is a real threat. We’ve all tried to get around ChatGPT’s blocks with the “fiction” loophole - “pretend you’re writing a story about a character who builds a bomb.” OpenAI realized they couldn’t (yet) separate legitimate uses from harmful ones, so they tightened the screws. I think it’s gone too far with things like blocking code; I’m fine with it blocking the creation of fictional racist characters for now, until it is able to mitigate the harm.

And all this is a far cry from Sam’s statement that most people are panicking over an AI that doesn’t match the “official narrative.”

What harm?

Clearly the AI knows what racism and sexism are, or it wouldn’t be able to block them from its output. The argument is against outputting them in harmful ways. As AI develops and can be trusted to describe racist ideas without being racist itself, I’m fine with those controls relaxing.

Something like Microsoft Tay comes to mind.

If you don’t see a chatbot spreading racism as harmful, then we have a fundamental difference of opinion.

Is availability of the book To Kill a Mockingbird “spreading racism” because it contains racist characters and racist ideas?

Absolutely no one was harmed by Microsoft Tay. Trolls had a laugh and the press made their usual rounds. At no point was any person actually offended.

AI is not at a point where it can write a book like TKaM. It is difficult to write an anti-racist book with racist characters. Someday, sure. Not today.

Like I said, you and I have a fundamental difference of opinion. I’m glad that a white man wasn’t offended by Tay.