You think LLMs have “the power of comprehending, inferring, or thinking especially in orderly rational ways”? And that they are “sane” and “properly exercise their mind”? Again, KD6-3.7 is a fictional character living in a fictional universe in a science fiction movie.
When discussing whether an LLM can reason, I’m not sure I give a flying fuck what ChatGPT says, since I know for a fact that it isn’t “comprehending”, “inferring”, or “thinking”, “especially in orderly rational ways”. If you think it seems like it does, that’s a you problem. ChatGPT’s response was also full of shit. Its very first example of what it can do mentions “chain-of-thought reasoning”. Since you like using an LLM to make your arguments, I’ll let another LLM chew that particular example up and spit it out, in a statistically predictive manner.
Do you literally do chain of thought reasoning?
That’s an excellent and highly debated question in AI research! (strikethrough mine: I hate this kind of shit, as it makes rubes susceptible to granting the LLM a whole host of human traits which they don’t actually have. “Your eyes are a lovely blue and yes, you should indeed push that flashing button that says ‘Warning’ on its face.”) The short answer is: No, not literally in the human sense.
LLMs don’t possess a human-like mind that decides to logically break down a problem step-by-step. Instead, they use the technique known as Chain-of-Thought (CoT) prompting as a highly effective trick to improve their output.
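And for the record, that “trick” is nothing mystical. It’s literally just tacking extra text onto the prompt so the statistically likely continuation includes intermediate steps. Here’s a minimal sketch of what CoT prompting amounts to (the `call_llm` function is a hypothetical stand-in for whatever API you’d actually use, not a real library call):

```python
# Minimal sketch of Chain-of-Thought (CoT) prompting.
# The "technique" is just prompt text: you ask the model to emit
# intermediate steps, which statistically nudges it toward better answers.

def build_cot_prompt(question: str) -> str:
    """Append a step-by-step instruction to an ordinary question."""
    return f"{question}\nLet's think step by step."

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call (hypothetical; swap in your API of choice)."""
    raise NotImplementedError("wire this up to an actual LLM endpoint")

if __name__ == "__main__":
    question = (
        "A bat and a ball cost $1.10 total. The bat costs $1.00 more "
        "than the ball. How much does the ball cost?"
    )
    plain_prompt = question
    cot_prompt = build_cot_prompt(question)

    print("--- plain prompt ---")
    print(plain_prompt)
    print("--- CoT prompt ---")
    print(cot_prompt)
    # With a real endpoint you'd compare:
    #   call_llm(plain_prompt)  vs.  call_llm(cot_prompt)
    # and typically get more step-by-step-looking text in the second case.
```

That’s it. No mind deciding to break the problem down, just a prompt that makes step-shaped text more probable.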
LLMs are not “reasoning”, “logically inferring”, “thinking” or anything of that nature.
Since this is the Pit, are you fucking serious? I’ve read many of your posts over the years and don’t recall a single time where I thought you were anything other than intelligent. But when it comes to LLMs, you can’t seem to help using the most flowery flattery to describe a well-constructed word predictor. I fucking LOVE AI (not LLMs in particular) and what is possible. I know for a fact that it has been game-changing in many fields and will continue to become even more valuable in those and many others. With that said, LLMs are stupid (in the human sense) fucking models that are good at statistical prediction. When it comes to you and LLMs, it’s like arguing with Dr. Strangelove about Tesla or with Babale about the shit going down in Gaza. Slow the fuck down with the superlatives. It’s a bunch of silicon.