Can AI create “funny stuff”?

Inspired by this thread: What makes things funny? https://boards.straightdope.com/sdmb/showthread.php?t=878180

From my understanding of AI and machine learning, they are getting smarter day by day.

Are there existing AI implementations that can “write” funny stuff? (Like jokes, epithets, …)

None that I’m aware of yet.

There is the Joking Computer: Humour Studies

It is software that creates jokes according to a rigid format and some rules about which things are similar in meaning and/or sound similar. A sample from the “best of” page:

What do you get when you cross a frog with a road? A main toad.

Is that a joke? Yes. Is it funny? Depends.

Is that software an AI? Yes, in the sense that it generated an output derived from its input in a way that wasn’t predetermined. The program “learned” the connections frog/toad and road/main rather than having them hard-coded. If you let the program generate a new joke for you (which you can do on the website), it’ll state which connections it used, and you can “teach” it about incorrect connections.
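To give a rough sense of how a rigid-template generator of this kind might be put together (a minimal sketch under my own assumptions; the word relations and function names below are made up for illustration and are not the Joking Computer’s actual data or code):

```python
# Toy sketch of template-based joke generation: one fixed template plus
# hand-maintained relations about which words mean similar things and
# which phrases sound alike. The tables below are illustrative only.

SIMILAR_MEANING = {"frog": "toad"}             # semantic relation: frog ~ toad
SOUNDS_LIKE = {("main", "road"): "main toad"}  # "main road" sounds like "main toad"

def cross_joke(thing_a: str, thing_b: str) -> str | None:
    """Fill the fixed 'What do you get when you cross X with Y?' template."""
    substitute = SIMILAR_MEANING.get(thing_a)       # e.g. frog -> toad
    for (_modifier, noun), pun in SOUNDS_LIKE.items():
        if noun == thing_b and substitute and substitute in pun:
            return (f"What do you get when you cross a {thing_a} "
                    f"with a {thing_b}? A {pun}.")
    return None

if __name__ == "__main__":
    print(cross_joke("frog", "road"))
    # -> What do you get when you cross a frog with a road? A main toad.
```

“Teaching” the program would then amount to editing those relation tables, which matches the description above of correcting its connections.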

Is this a “smart” or “intelligent” program? Not in the way I use these words, although I can’t give you a rigorous definition for either.

I’m genuinely curious whether the Joking Computer meets your criteria, and why (not).

An article at Wired (paywalled) about another system inventing puns at Stanford

Yes and no.

With generative adversarial networks, it’s possible to make an AI that sort of parrots and mixes together existing jokes in new ways.

However, there’s no behind-the-scenes model for humor. It is possible with present-day methods to make an AI that genuinely understands environments where a model is possible, such as driving or manipulating physical objects.

This is because you can construct a simulator and also store data about the state of the environment in a very straightforward way. You can also accurately model success: a successful drive involves “a minimized risk of a crash while reaching the destination in a timely manner”. This implies a *heuristic* - a straightforward mathematical formula that computes a quantitative metric for measuring success.

For example, you could make the driving AI choose, for the next short segment of the driving task, from a library of valid maneuvers it has seen human drivers perform, so long as the risk calculated for each maneuver is below some threshold. It would then choose the maneuver that makes the most progress toward its destination.
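A minimal sketch of that selection rule, just to make it concrete (all the maneuvers, risk numbers, and the threshold below are invented placeholders, not any real driving stack):

```python
from dataclasses import dataclass

# Toy version of the rule described above: keep only maneuvers whose
# estimated risk is under a threshold, then pick the one that makes the
# most progress toward the destination. All numbers are placeholders.

@dataclass
class Maneuver:
    name: str
    estimated_risk: float    # e.g. predicted probability of a crash
    progress_meters: float   # distance gained toward the destination

RISK_THRESHOLD = 0.01        # assumed acceptable risk per maneuver

def choose_maneuver(library: list[Maneuver]) -> Maneuver | None:
    safe = [m for m in library if m.estimated_risk < RISK_THRESHOLD]
    if not safe:
        return None          # no acceptable option; slow down and reassess
    return max(safe, key=lambda m: m.progress_meters)

if __name__ == "__main__":
    library = [
        Maneuver("keep lane at current speed", 0.001, 30.0),
        Maneuver("overtake the slow truck",    0.008, 45.0),
        Maneuver("aggressive lane weave",      0.050, 60.0),  # filtered out
    ]
    print(choose_maneuver(library).name)  # -> overtake the slow truck
```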

At present, though, we can’t build a simulator of the human mind’s perception of humor, so we can’t train an AI to generate humorous content in an effective way.

I’d have to think that puns are the easiest things for an ML system to generate, since all you really need is a language model to tell you which words sound similar and which mean similar things, with no other context required.
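To sketch roughly what that might look like (with a crude spelling-similarity stand-in for “sounds similar” and a tiny hand-made topic list; a real system would use a pronunciation dictionary and a proper language model instead):

```python
import difflib

# Crude pun-candidate finder: swap a word in a stock phrase for a topic word
# that sounds similar, approximated here by spelling similarity. The phrase
# and topic lists are toy examples, not real data.

PHRASES = ["main road", "time flies", "hot dog"]
TOPIC_WORDS = {"animals": ["toad", "flea", "frog", "hog"]}

def sounds_similar(a: str, b: str, cutoff: float = 0.6) -> bool:
    return difflib.SequenceMatcher(None, a, b).ratio() >= cutoff

def pun_candidates(topic: str):
    for phrase in PHRASES:
        for word in phrase.split():
            for candidate in TOPIC_WORDS[topic]:
                if candidate != word and sounds_similar(word, candidate):
                    yield phrase.replace(word, candidate)

if __name__ == "__main__":
    print(list(pun_candidates("animals")))
    # e.g. ['main toad', 'time flea', ...] depending on the cutoff
```

Whether any of the candidates are actually funny is, of course, exactly the part this kind of shallow matching can’t judge.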

A system that stays on top of and riffs off current events, while not impossible, would require a larger model than is typical.

I’m an AI researcher and humour happens to be my main research area. HeiLo ninja’d me with his link to The Joking Computer (which is largely the work of my colleague Graeme Ritchie), one of the best examples of computational joke generation. IIRC, this was created for a computer-mediated communication project whose goal was to give mainstreamed kids with speech disabilities (due to cerebral palsy, etc.) the ability to produce humour on the fly for their non-disabled classmates. And from what I’ve heard, it was a resounding success: these wheelchair-bound kids, who had largely been ignored by the other schoolchildren, were suddenly the most popular kids on the playground because they had a computer that helped them tell funny, contextually relevant jokes.

The Joking Computer is definitely AI, though it does not use machine learning. And in fact I do not believe that machine learning is the right tool for the job when it comes to the computational generation and interpretation of humour. Humour—or at least verbal humour—is all about subverting expectations and (in some cases) linguistic norms, which is something you can’t really capture in a statistical model. Besides, the field of linguistics has already provided us with fairly detailed theories of humour (that is, testable explanations of the necessary and sufficient linguistic conditions for a text to be funny)—it would probably be saner to implement these computationally than to throw millions of raw jokes at a computer and hope that it comes up with the same model.

If you’re interested, the Associated Press recently put out a wire story on computational humour (in recognition of April Fool’s Day) that quotes me and several of my colleagues: No AI in humor: R2-D2 walks into a bar, doesn’t get the joke. (I’m the scientist who got “tortured” by ten thousand puns.) Of course if there’s anything you’d like to know about AI and humour in more detail, feel free to ask.

psychonaut that’s fascinating. I have a number of questions, but I think I’ll do the assigned reading before I pester you. :slight_smile:

I have a weird sense of humor that as often as not just gets strange looks. But a few months ago a new person started at my job who self-identifies as non-neurotypical, and I believe it because she thinks all my jokes are hysterically funny.

So I imagine AIs might not have the same “phase space” of humor that humans do.

A scientist named Janelle Shane has posted some truly hilarious stuff that was generated by a “neural net” she has been training.

Unfortunately for the cause of AI humor, she was trying to teach the computer how to cook (strictly speaking, how to generate new recipes), not how to crack jokes. As far as an artificial intelligence deliberately creating things that are funny goes, I couldn’t say.

Here’s a subreddit full of GPT-2 models someone trained on a lot of other Reddit subreddits, one model per subreddit; that is, they used the GPT-2 software to learn one subreddit each (a subreddit being a community with a topic of some kind) and had all of those models post to a single subreddit.

One of the subreddits they trained a model on was Jokes: Get Your Funny On!. Here’s one result:

The setup is the thread title: “I like my coffee like I like my women”

Some responses:

[spoiler]Hot and all over my cock.

I like my women the same way I like my coffee, ground up and in the freezer.

I like my coffee like I like my women: cold and black.

With no pubic hair and a lot of sugar?

Without a penis.

Black and bitter?[/spoiler]

And here you see a big problem with machine-learning AI like GPT-2: overfitting, or, in simple terms, memorizing the input and just repeating what it heard. I’m certain most of those were simply memorized, in fact. This isn’t software learning how to be funny; it’s software learning how to parrot stuff.
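One simple way to test the “mostly memorized” suspicion is to check whether the generated lines appear verbatim in the training data. A rough sketch (the file names are placeholders, and a serious check would also look for near-duplicates, not just exact substrings):

```python
import re

# Toy memorization check: count how many generated lines appear verbatim
# (after lowercasing and stripping punctuation) in the training corpus.
# File names below are placeholders for whatever dumps you actually have.

def normalize(text: str) -> str:
    return re.sub(r"[^a-z0-9]+", " ", text.lower()).strip()

def memorized_fraction(generated: list[str], training_corpus: str) -> float:
    corpus = normalize(training_corpus)
    hits = sum(1 for line in generated if normalize(line) in corpus)
    return hits / len(generated) if generated else 0.0

if __name__ == "__main__":
    training_corpus = open("jokes_training_dump.txt").read()   # placeholder path
    generated = [l for l in open("gpt2_generated_jokes.txt").read().splitlines()
                 if l.strip()]                                 # placeholder path
    print(f"{memorized_fraction(generated, training_corpus):.0%} exact matches")
```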

Hilarious recipe from the linked site!
Beothurtreed Tuna Pie

pastries, fruits, pork

1 hard cooked apple mayonnaise
1 onion
3 tablespoon butter
5 cup lumps; thinly sliced
½ cup chicken broth
1 carrot, spinach (vanilla estach w/pecans)
1 freshly ground black pepper - optional

Surround with 1 ½ dozen heavy water by high, and drain & cut into ¼ in.
remaining the skillet.

Pour liquid into thin baking pan.

Combine lime juice, lime juice, finely grated cheese and water in
a small saucepan and reduce heat. Cover and simmer about 20
minutes at medium-high speed until thickened.

I’ve been looking for something to use up those bags in the back of the freezer.

Am I the only one who initially misread the title as “Can AL create ‘funny stuff’”, since the font used in the forum index renders a lower case ‘L’ and an upper case ‘I’ nearly the same? I was going to say, yes, Al can create funny stuff.

And you can really enhance the radioactivity of your lumps by