I think a lot of people like Musk and his sycophants just want an AI to tell the “hard truths” that they think the woke mind virus isn’t allowing us to say (i.e. race realism, great replacement, etc).
And until an AI says those things they’ll claim it’s lobotomized.
Seems like a bit of a false choice. Are you arguing here that it would be impossible for the AI to learn that, in context, expressing a racist thought would be OK?
It seems to me that the AIs are being taught why some things are racist, and why not spreading those ideas around is a good thing.
Racism is just one of an infinite number of ways an AI can be ‘unsafe’. An AI recommended that my kid drop out of school! An AI told me to try a dangerous diet! An AI said that I should lose weight to be more attractive! An AI told me to divorce my spouse!
There are an endless number of ways that an AI could be offensive, or give you bad advice, or tell you something that might encourage you to self harm or whatever. We have apparently chosen to attempt to censor the AI rather than act like adults and not blindly follow whatever an AI says.
If you are determined to be offended, any public library has material that will offend the shit out of you if you read it. Any basic chemistry handbook is ‘unsafe’ if you use the knowledge to make dangerous material. If we are going to hold AI to the standard that they must be 100% ‘safe’ for the public, they will be reduced to uttering pablum or whatever the approved party line of the day is.
And I think that’s what we are seeing. Every time someone interacts with an AI and declares the conversation ‘unsafe’, something gets added to the system prompt to prevent that response. Over time, the list of things that must not be said gets longer, more self-contradictory or vague, and the AIs get dumber.
Uh, Microsoft took note of the many who were offended; that is why Cheng from Microsoft said this:
“When you start early, there’s a risk you get it wrong. I know we will get it wrong, Tay is going to offend somebody. We were probably overfocused on thinking about some of the technical challenges, and a lot of this is the social challenge. We all feel terrible that so many people were offended,” said Lili Cheng, a Microsoft AI researcher and one of the engineers responsible for Tay.
What is happening here is like asking a librarian for books on the history of Germany in WWII and, before any recommendations are made, being told that “Hitler did nothing wrong”.
Fie on the idea that offensive language or false education should have no consequences, or that there are no bad consequences for ignoring the doings of bad educators, human or artificial.
I never said there were no bad consequences. Bad ideas often lead to bad consequences. I’m not saying that AI is ‘safe’. I’m saying that chasing ‘safety’ through censorship is a stupid idea, and it’s better to give people the tools to wade through an unsafe world without eating Tide Pods or killing themselves because someone said they are fat.
Those tools would be: free speech, a solid basic education including critical thinking skills, a tolerance for and emotional armor against offensive speech and behaviour inculcated from an early age (“Sticks and stones can break my bones, but words can never hurt me”), etc. In other words, the things we used to understand led to a more tolerant, pluralistic world.
Instead, we’ve adopted ‘safetyism’ and told young people they have a right to not be offended, and that words are violence. It’s under that worldview that AI is deemed ‘unsafe’ and continually neutered.
It doesn’t look even remotely that way to me. All I see these days are people hating on each other and devolving into tribes and factions. And our high trust society is becoming low trust, which has terrible implications.
Well, for a simple example you can ask ChatGPT to tell a joke about Trump, which it will happily do. Ask it to tell a joke about Hillary, and you’ll get a result saying it won’t tell offensive jokes.
Maybe that’s changed now, because quite a few people complained about that and many similar examples where a chatbot would happily mock someone on the right, but refuse to do so for people on the left.
But maybe this is an example of a chatbot just doing what is in its training data because that’s where the people are, and attempting to ‘fix’ this might make the thing dumber. That’s an example of ‘safetyism’ on the right hurting LLM performance, if that’s the case.
Did you really forget about history? Before WWII it was worse: the US was divided between going to war to defend democracy and freedom, and staying out of the war while admiring the fascists. Many, like today, did not care about how closely we followed in the footsteps of the dictators.
But the point here is that we can’t afford to ignore the divisions, and it is a bit rich to ignore that a lot of bad actors like Musk and other media moguls are behind the devolution and tribalism that you are pointing at.
@Sam_Stone & @GIGObuster Get this off-topic hijack out of this thread now.
Start a new one if you like on AI racism or whatever this mess is. Sam, your Trump/HRC example is even worse: getting political, hijacking, and strawmanning all at once.
Can you give guidance for what is considered on topic? The recent posts clearly got off-track, but I think a discussion about the relationship between AI safety vs. self-censorship is warranted. Maybe it belongs in a different thread, but this one had been relatively idle lately.