I know we’re getting sorta repetitive here, but I just want to highlight what I regard as a profoundly important point. By persistently focusing on what you regard as the core operational feature of LLMs (mere “completion engines” lacking any actual knowledge), you’re completely ignoring the emergent properties that manifest at the unimaginably large scale of many billions of parameters shaping the behaviour of their artificial neural nets. What, after all, is “knowledge” other than the ability to correctly answer questions or, in medicine, to develop accurate diagnoses from a description of symptoms and medical test results? And how is an AI’s knowledge objectively inferior to a physician’s if it reaches the same conclusions from the same evidence?