You continue to insist that this is the case even though it has been pointed out to you that both of the results you got from Bard were wrong, and even self-contradictory, as indicated above. Neither the first response you posted nor the supposedly corrected second response provided the correct answer for either distance or time, while two posters who actually worked through the problem both provided the correct answer for time. The ‘reasoning’ provided by your chatbot indicated that it didn’t actually comprehend the question and was just producing grammatically and syntactically correct gibberish. (For that matter, even the GPT-4 response that @Sam_Stone provided gave an incorrect answer, despite getting the basic logic of the problem correct, because it didn’t grasp the basic procedure of rounding. That indicates it still had no understanding of what it was doing and was just following a statistical process, answering the question in a form similar to what it was trained on and using the logic implicit in language to produce some simulacrum of basic algebra.)
Your mistake wasn’t posting an erroneous response; it was:
- Using a chatbot to do the ‘thinking’ of formulating the (trivial) problem;
- Relying on the result from Bard to be correct without doing the minimal amount of effort to check it or even having the sense to realize that the answer was clearly wrong by orders of magnitude;
- Continued insistence that the result from the chatbot was correct and that you had just clipped the wrong part of it, even after it was shown that all of the different results it provided were erroneous;
- And interpreting all criticism as the work of some kind of elitist cabal intent on keeping you from participating in threads where you aren’t an ‘expert’, even though the vast majority of posters (myself included) are not experts in the myriad topics under discussion, and no one has demanded more than basic verification of purportedly factual information.
You have made the mistake that a disturbing number of people are making in assuming by default that results from any LLM are authoritative and correct, even though they are just the result of processing natural language prompts through a statistical model to produce a syntactically cromulent response without any actual fact-checking mechanism. This is understandable, because what these models are being trained to do is provide authoritative-sounding responses to prompts in order to ‘fool’ users into the belief that they are interacting with an actual cognitive system with a theory of mind about the user. Where you fell off a cliff was in your continued, vigorous, obtuse insistence that Bard gave the right answer even after it was thoroughly demonstrated to be wrong (@Chronos even showed his work), thereby sidelining the actual discussion.
The appropriate response would be to acknowledge that current ‘AI’ chatbots, and LLMs in general, are not the right tool to perform even simple calculations reliably and cannot be trusted to give a correct result. Instead, you have doubled and tripled down on insisting that it was right and you just copied the wrong bit, and that you are being persecuted for being ‘honest’, which is risible given that you essentially plagiarized your completely unsolicited initial response from Bard instead of doing a simple multiplication problem yourself. You are an exemplar of how increasing reliance on these unvalidated and unreliable tools is making us all dumber and more prone to bad decisions based upon false confidence in unverified and often wrong ‘facts’. That is your ‘sin’.
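For the record, the check I’m describing takes less effort than pasting a prompt into Bard. A minimal sketch in Python, with hypothetical numbers since I’m not reconstructing the original problem here:

```python
import math

# Hypothetical numbers; the point is how trivial the check is,
# not the specifics of the original problem.
speed_kmh = 100      # assumed speed in km/h
distance_km = 450    # assumed distance in km

time_h = distance_km / speed_kmh
print(f"time = {time_h} h")                   # 4.5 h

# If the answer calls for whole units, round explicitly
# (e.g. up, if a partial hour still counts as an hour):
print(f"rounded up = {math.ceil(time_h)} h")  # 5 h

# Cross-check in the other direction: speed * time should
# recover the distance, and a result off by orders of
# magnitude fails this immediately.
assert math.isclose(speed_kmh * time_h, distance_km)
```

Three lines of arithmetic and a sanity check. That is all it would have taken.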
Stranger