I occasionally watch the YouTube channel of a no-nonsense cook trained as a food scientist; she debunks and tests, rather than introducing new recipes. She often engages her husband and son to taste-test.
This seems related – on NPR’s Morning Edition this morning they had AI write a cookie recipe.
The first recipe it came up with was fine; they described it as a pretty average cookie (I assume the recipe from the back of the Nestlé Toll House bag was probably in its training data, and it essentially went with something like that). Then they asked it to modify the recipe to make it chewier. It added extra flour, which made the cookies tougher rather than what we would consider chewier.
1 (20-cup) box Devil’s Food cake mix
2 ½ cups whipping cream and divided in half + 2 TBS
1 12oz package semi-sweet chocolate morsels
1 ½ (12 ounce) packages white chocolate morsels
20 Oreo cookies crushed
10 chocolate-covered Oreo cookies chopped into chunks and divided in half
20 sticks of gum divided in half
Sugar
Butter
Vanilla
Chocolate-covered mints
As you can tell, the recipe doesn’t make a whole lot of sense, but Ann Reardon manages to muddle through by ignoring or altering some of the most peculiar instructions (since one box of cake mix is not 20 cups, she just uses 1 box of cake mix).
The most alarming ingredient, the 20 sticks of gum, is not baked into the cake; in fact, the directions ultimately say to put a single stick of gum on top of the frosted cake (for some vague reason about holding it together).
Her husband and son agree that the result is … unusual … but they say it actually tastes pretty good. Not really like a cake, but a tasty chocolaty dessert.
Yeah, that definitely looks like a recipe an AI from a few years ago would have generated. But my understanding is that GPT-4 will produce an at least workable cake recipe.
I just copied the ingredient list. I think the instructions then go on to state amounts - but they are nonsensical/inconsistent, so it doesn’t much matter. Reardon had several bowls of sugar and cream left over at the end, as I recall.
This is a great example of why big tech companies like Google and Microsoft, which had been working on AI language models, did not release them: they come up with nonsense. ChatGPT stole a march and forced everyone else's hand. But you can't ignore the fact that the only thing even slightly related to food these models can come up with is a sort of sophisticated word salad.
As @BigT pointed out, this particular AI effort is two years old so it is especially bad. The biographies it wrote for her husband and son at the end of the video were hilariously outlandish.
I wouldn’t judge the capabilities of the current generative AI models based on this recipe; as I already mentioned, this was produced by an old AI from several years ago. In the NPR story I posted, GPT-4 produced a workable, if somewhat average, cookie recipe. Although when they asked it to modify the recipe, the result wasn’t great (though it still looked like an actual cookie recipe).
True. You can also just Google a recipe. That’s not the best use of AI, but it shows you what it is and is not capable of at this stage. I have used it to brainstorm recipes, and it’s proven to be a good assistant for that. AI is awesome if you learn what it’s good at and what it’s not, at least at this stage. I’m more and more impressed as time goes on.
AI is such a misnomer. GPT-4 and its ilk are just really huge pattern-recognition algorithms. No intelligence. More importantly, they are not subject matter experts, nor do they have any real-world experience.
Only if you insist on a strict prescriptivist meaning of “intelligence”. Here’s the OED definition:
The capacity of computers or other machines to exhibit or simulate intelligent behaviour; the field of study concerned with this. In later use also: software used to perform tasks or produce output previously thought to require human intelligence, esp. by using machine learning to extrapolate from large collections of data.
Also, can you prove that human intelligence isn’t just a huge pattern recognition algorithm?