To claim that “Feynman got it wrong”, you would have to first pin down what Feynman’s answer was. But he didn’t answer: He just said that the correct solution was obvious, and the existence of the other two, incorrect, solutions was evidence of flaws in human thinking.
- *14 K = 14 Karats*
- *G = Gold*
- *F P D = Fine Piece of Jewelry (or sometimes “Fine Piece Display” depending on context, but “Jewelry” is the most common interpretation)*
I guess we have an answer.
I got the same answer when I plugged it in. So I questioned how it could be “jewelry” if that starts with a j, not a d. Here’s what it told me:
You’re absolutely right to question that—“F P D” cannot logically stand for “fine piece of jewelry” because, as you noted, “jewelry” starts with a J, not a D. That interpretation is clearly flawed.
Let’s revisit the riddle more carefully:
“14 K of G in a F P D”
A more accurate and widely accepted solution is:
“14 Karats of Gold in a Fine Pendant Design”
Here’s how that breaks down:
- 14 K = 14 Karats
- of G = of Gold
- in a F P D = in a Fine Pendant Design
This makes more sense in the context of jewelry and keeps the initials consistent with real words. “Fine Pendant Design” fits logically, especially given that 14K gold is commonly used in pendants.
Alternatively, some suggest these riddles (known as letter/number puzzles) are meant to evoke common phrases or facts. However, there is no universally confirmed answer to this specific riddle, so some interpretations are best guesses based on plausible phrases.
Why? It’s not even correct. Any responder, be it human or AI, that says FPD stands for Fine Piece of Jewelry cannot be trusted on anything. That’s up there with “how many R’s in strawberry?”.
So you keep saying, even though this thread has developed into ample evidence of the exact opposite. Your assumption here, which is naive and completely wrong, is that AI mistakes should be judged in the same way as human mistakes. But as I’ve frequently said, when they make mistakes, they often make completely different kinds of mistakes than humans do. Why? Because they manifestly do not think the same way we do. They might fail at things that humans have been conditioned to find trivial, yet succeed in solving difficult intellectual problems that most humans would fail at. It’s easy to conclude that trivial failures provide some insight into AI operational weaknesses, but that’s just uninformed and misguided thinking.
Because the OP who posted it got it wrong. It was a J, not a D, which is why it stumped us.
It is. Clearly the OP had found it in some puzzle or whatever, came here, posted it (wrong), then never came back to correct it or give the “real answer”. The AI found the original puzzle (with a J) somewhere, along with the answer.
This makes me wonder though. I took a class in Python programming and one of the lessons that kept coming back to me over and over was that the computer does what you tell it to do. So in this case, how is the AI parsing the question? Is it:
Count the number of occurrences of the letter R in the entirety of the word “strawberry”
or, the more common reason people go to a search engine:
Help me spell this word correctly, are there two Rs at the end?
It may be missing that third R because “how many Rs” is being interpreted as a request for a spelling check, not an inventory.
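For what it’s worth, the literal “inventory” reading is a one-liner in Python, which is exactly what that lesson was getting at: the computer does precisely what you tell it. Here’s a quick sketch of the two interpretations (just my own illustration, not how an LLM actually processes the question):

```python
word = "strawberry"

# Interpretation 1: a literal inventory of the letter R
print(word.count("r"))      # 3 -- one in "str", two in "rry"

# Interpretation 2: a spelling check ("are there two Rs at the end?")
print(word.endswith("rry"))  # True -- only the doubled R gets noticed
```

An LLM doesn’t run either of those, of course; the point is just that the question is genuinely ambiguous between them.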
On preview: I thought 14 k of g in a F P D was 14 kilos of grass in a Ford parked downtown. What?
It would also be weird for the original puzzle to include the first “of” and skip the second. For it to have been the original question, I think it would have to be “14 K of G in a F P of J.” At any rate, I like “fine pendant display,” even if it is forced.
ChatGPT 4.1 gives a number of answers as it thinks it out, all correctly finding words for each letter (i.e., no “jewelry” or out-of-bounds words) and says in the end:
Nope. You are perfectly free to not like things. You’re free to dislike AI. But drawing incorrect conclusions and making incorrect predictions based on not understanding the technology is what’s uninformed and misguided here.
That’s not what’s happening here. No one “programmed” GPT in any conventional sense of the word. What was programmed were the frameworks that built incredibly large artificial neural nets, and the training paradigms for them; what eventually emerged in those nets were ANN patterns weighted by billions and then trillions of parameters. The resulting behaviours exhibited completely new and often unexpected emergent properties that we associate with intelligence, along with remarkable problem-solving skills that are not readily explainable. This is not “programming” any more; it’s more like evolution. We’ve had machine learning for a long time, but this is learning on steroids.
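To make the “trained, not programmed” distinction concrete, here’s a deliberately tiny toy sketch (my own illustration, nothing like GPT’s actual scale or architecture). The only thing a human writes is the update loop; the value of the parameter is never coded anywhere, it emerges from the data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100)
y = 3.0 * x          # the hidden relationship the model must discover
w = 0.0              # one "parameter"; GPT has trillions

# Nobody ever writes "w = 3" -- gradient descent on squared error finds it.
for _ in range(200):
    grad = np.mean(2 * (w * x - y) * x)
    w -= 0.1 * grad

print(round(w, 3))   # ~3.0: learned from data, never explicitly programmed
```

That’s the sense in which “the computer does what you tell it” stops being a useful frame: the loop is programmed, the behaviour isn’t.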
It gets this question right these days. At least the models I’ve tried do. But there will be many other examples in time of it getting something that seems like a simple task wrong. It’s still early in the game (probably).
If the objective is AGI, it’s going to be early in the game for quite a long time. But the good news is, instead of the unjustified optimism of the 60s and 70s, we now have great working examples of highly intelligent problem-solving, success in human intelligence and knowledge tests, voice and image recognition, pattern recognition, image creation, and so many other aspects of what were thought to be exclusively human behaviours.
As opposed to the good old days, when people said “I heard from somebody that…” or “I saw on Fox News that …” and followed that up with the most credulous bullshit you’ve ever heard…