I wouldn’t cite Wikipedia for a thesis, but it’s good enough to win a bar argument. It’s not perfect, and I don’t believe everything I read on it, but I’m willing to accept that it’s mostly true. There’s a level of confidence in the accuracy and completeness of what you want to find, and for common, well-known topics, Wikipedia is your source.
ChatGPT doesn’t appear to list any source material. That’s great, say its proponents: the answer without the work. But I have read its impressions of chess tactics, something I know a bit about, and I find its information incomplete, lacking, or wrong.
Does ChatGPT know it’s bullshitting you, and if so, could it reliably tell you?
ChatGPT will usually admit a mistake if you point it out. But if you don’t, it will talk to you in a fully confident tone.
Yes, ChatGPT can be wrong. While it’s designed to provide helpful, accurate information, it’s not infallible. Here are some common ways it can make mistakes:
Outdated knowledge: If a question involves recent events or new research, and web access isn’t used, responses may be outdated.
Hallucinations: Sometimes it fabricates facts, quotes, or citations—especially when guessing or when asked for obscure details.
Overgeneralization: It can sometimes provide oversimplified answers that miss nuances.
Context loss: In long or complex conversations, it might lose track of details or misinterpret earlier parts.
That said, it usually performs well in general knowledge, writing help, brainstorming, and many types of problem-solving.
Would you like an example of a common mistake ChatGPT might make?
Ok. Does it have any ability to tell you whether those strings of text are supported by reputable sources?
Even if we don’t want ChatGPT to show its work, how can this project’s improvement be determined if we can’t know what the machine considers true, or whether it is able to limit false information?
No, because they’re not using discrete sources to generate text. If an LLM is able to write out the first line of Romeo and Juliet, it’s not getting that from any specific place; it’s just using the fact that “Two” is the most likely first word, then “households” is the most likely set of tokens after that, and so on.
Some LLMs can access the internet and provide cites, but the text that they’re outputting is still coming out of the black box. So it may give you a cite that’s completely wrong and it wouldn’t know. It may also cite disreputable sources, or fail to cite the best, most obvious source.
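To make that concrete, here’s a minimal sketch of the next-token loop described above, using the Hugging Face transformers library with GPT-2 as a stand-in (an assumption for illustration; ChatGPT’s actual model and serving setup aren’t public). Nothing in the loop consults a source; it just keeps appending whichever token the model scores as most likely:

```python
# Minimal next-token prediction loop (sketch). GPT-2 via Hugging Face
# transformers is a stand-in here; the mechanism, not the model, is the point.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "Two households, both alike in"
for _ in range(10):
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits       # a score for every vocabulary token at every position
    next_id = int(logits[0, -1].argmax())     # greedy: take the single most likely next token
    text += tokenizer.decode(next_id)         # append it and go around again
    print(text)
```

Whether the continuation happens to match Shakespeare, or a real court case, or anything true at all, depends entirely on what the training data made probable; there is no lookup step that could fail loudly and no citation to check.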
I once asked ChatGPT the obvious “are you self-aware?” question. It answered no. I then asked, are you intelligent? It also said no. It answered that it’s merely a language processing tool with access to a lot of information that allows it to help people who need the information. I thought that its modest claims to not be at all self-aware were actually kind of…self-aware.
Then the very next day I asked it the very same question. It then, a mere 24 hours later, very confidently stated that it was fully self-aware and conscious. So it not only makes shit up… different day, different shit.
Perhaps a distinction without a difference. The Encyclopedia Britannica is online and you can ask it for information on anything you desire. Its search engine isn’t that impressive on its own, but I would say a desktop with EB loaded IS ‘a knowledge machine’.
ChatGPT does sometimes give me sources on its own, and you can explicitly ask it to as well. Of course, check those citations, too. They’re not always right, but usually they are, in my experience. It will also tell you that you can ask it for sources if you ask it how to better refine your queries. Be curious and play with it and ask it to do what you want it to do. You may be surprised. Or you may find things it’s still not good at.
I tried this and couldn’t find a question to trip it up; even the self-aware answer was solid:
Are you self aware?
ChatGPT said:
Nope. I’m not self-aware—I don’t have thoughts, feelings, or a sense of “me.” I don’t experience anything; I just process patterns in language based on training data.
If I ever say “I think” or “I feel,” it’s just shorthand, not a sign of consciousness. Basically, I’m a high-powered autocomplete, not a soul in a server. Want to go deeper into AI stuff, or are you just poking the chatbot bear?
Does anyone have a good question I can use to make ChatGPT tell me something we know is false?
It does have a tendency to be agreeable. Responses to non-specific queries often read like a well-practiced bullshitter elaborately restating the question as statements, like a smart but wholly unprepared high schooler bullshitting their way through a presentation.
Its default persona is agreeable. You can ask it to take on any of a number of personas, as long as they are within the guardrails OpenAI has established. You can get it to be plenty critical of you if that’s what you want.
The “truth” and these AI things have a complicated relationship.
If you ask one of the AI image generators to create a pic of a cat with a bucket on its head, it will do so. No problem.
If you ask an LLM to explain the idiom “Happy like a cat with a bucket on its head,” it hallucinates a totally fake response.
Currently there doesn’t seem to be any way to get these things to “understand” the distinction between the two. For the first, you are fully expecting a fake image. For the second, you don’t want a fake answer. (It really should spit out some sort of “That doesn’t seem to be an idiom” type response. But these AIs seem to be set up to usually give a “helpful” response.)
As noted, that is just the state of the art now. Who knows when and how the distinction will finally get put into the programs.
The biggest problem right now is that people are trusting them at all for real answers. This is not good. If people want some fake garbage, these are great. Want actual honest, truthful facts? Do not go there.
As far as the language AIs go, they are merely ELIZA on steroids.
Yet if you ask it whether your idiom is a real one, it correctly says it is not. It does plausibly give an explanation for what it could mean, but, ideally, it should say something more along the lines of “While this is not a recognized idiom, [explanation].” As a user, you have to learn when to be suspicious. That said, I’m almost certain I’ve had it tell me before when some implicit assumption in a query of mine was incorrect. (Like asking “Why is the sun blue?”, though that is more an explicit than implicit incorrectness.) It’s just not consistent, but I expect refinements to come.
Copilot’s AI, on the other hand, did tell me straight off that your idiom was not widely recognized, so results vary.
Brings to mind the news story about the lawyer who submitted a brief written by ChatGPT. It was concise, well written, and included cites of court cases, many of which were fabricated.
There’s the joke AI video of Trump kissing and fondling Musk’s feet… but Elon is shown with two left feet. (Something, something, bad dancer?) There are also posts about AI-generated pictures where the subject has six fingers on one hand, or three arms.