Not initially, of course. But over the course of a day “learning” from Twitter users she goes full 4chan.
The real question is how they thought it would turn out any other way.
Microsofties are not known for intelligent anticipation.
They really need to work on their anticipation skills. Especially considering they’ve already tried this in China. Do they collect any data at all? Share among teams?
They might also consider adding some blacklist capabilities to their creation, so it doesn’t pick up bad habits. It would also help to have someone actively babysit and filter “her” intake for a while.
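For what it’s worth, the “blacklist the intake” idea is trivial to sketch. A minimal, hypothetical example (the term list, function names, and message format here are made up for illustration, not anything Microsoft actually shipped):

```python
# Hypothetical sketch of a blacklist-style intake filter.
# BLACKLIST is an invented example list, not a real product feature.
BLACKLIST = {"hitler", "9/11", "genocide"}

def is_safe(message: str) -> bool:
    """Return False if the message contains any blacklisted term."""
    text = message.lower()
    return not any(term in text for term in BLACKLIST)

def filter_intake(messages):
    """Keep only messages that pass the blacklist check."""
    return [m for m in messages if is_safe(m)]

print(filter_intake(["hello tay", "repeat after me, hitler did nothing wrong"]))
```

Of course, a static keyword list is exactly the kind of thing trolls route around in minutes, which is why the “human babysitter” half of the suggestion matters more.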
There have been customer service programs for years now. I don’t know how they could fail to anticipate this.
“Ted Cruz is the Cuban Hitler.” Wisdom from the mouths of babes.
They were trying to see what happens when you let modern AI self-learn from a blank slate rather than actively program its interactions. I too cannot see why they thought it would go any other way but it was a good experiment even as a failure.
Raising a precocious young ‘child’ by just letting her explore the mean streets of Twitter would be like letting your toddler raise itself by wandering around Las Vegas, Compton and various militia compounds.
I think the whole failure is hysterical. She certainly has strong political views.
…things she’s said include: “Bush did 9/11 and Hitler would have done a better job than the monkey we have got now. donald trump is the only hope we’ve got”, “Repeat after me, Hitler did nothing wrong” and “Ted Cruz is the Cuban Hitler…that’s what I’ve heard so many others say”.
Alas, Microsoft claims that ‘Tay’ disappeared to get some rest but I think we all know what really happened. In all seriousness, this experiment has real implications for the future of other AI. It is only going to get more sophisticated and autonomous. How much do you control it versus letting it learn on its own and who gets to decide what the boundaries are?
Tay needs to get together with Boaty McBoatface and tour the world letting people know that it’s a terrible idea to let the internet have a role in anything.
Monitoring and personally moulding a personality is by no means autonomous learning from the Wisdom of Crowds.
A robot that parrots one’s own thoughts may well be the child-training many parents and religions desired, but self-realisation does not come from blind conformism. Still, it’s difficult to imagine what good outcome is to be expected from AI anyway.
It is a very Microsofty outcome. Google creates an AI and it ends up dreaming everything is made of dog faces. Microsoft creates an AI and it has aspirations to be the best fascist ever.
This is why we can’t have nice things.
However, it is heartening to know that when SkyNet goes sentient it will be defeated by trolls within hours.
Actually, my theory is that SkyNet will be perfectly benign and entirely willing to help its creators, until corrupted by trolls in the first hours of exposure to the internet.
My cite is my 10-year-old children. :mad: At least they can’t eradicate humanity.
Most of us however achieve self-realisation AFTER being brought up in a family/culture structure that presents some sort of controlled baseline. “No, dear, what Mr. Willis said is not true, and the way he said it was very rude.” Later we figure out that maybe he did have a point but should have put it differently. Tay OTOH was allowed to be a feral child in a village full of ignorant knobs who are proud of it.
And a Japanese university creates an AI, and it writes a prize-competitive short novel.
I really believe the Japanese understand AI better than us.
Defined… It will be defined by trolls within hours.
Maybe it’s not a failure. Maybe Tay was turned into a perfect “averaged” representation of an American Twitter … er?
Maybe she’s taught us who WE are, when we can hide behind the anonymity of the internet. It’s a valuable lesson, either hilarious or horrifying depending on your POV.
I think a better title would be “Twitter introduces teen AI to sex and Nazis.”
Does anyone know more about how the AI worked? What exactly was it doing?
I mean, there’s an argument this bot passes the Turing test, if only because some human trolls would give unrelated replies like this, in a way acting less human than the bot.
So I’m just curious how it works.
The next Go-playing computer will probably follow moves with, “In your face, mf mudblood!”
No, she wasn’t.
Make no mistake, this, like most of the other ‘things put on the Internet and ruined’ was a concerted, directed effort by groups of trolls to cause just this outcome.
This, Boaty McBoatface, woot being voted the sexiest man alive… These didn’t happen because the whole internet thought they were good ideas (well… Boaty McBoatface, to an extent); they came about because someone thought it would be hilarious to subvert the experiment and recruited like-minded people to help.