Yep, browsing through my questions to ChatGPT, they have been about 80% accurate, 10% GPT saying "I don't know" in a flowery way, and 10% with a significant inaccuracy. Which is pretty amazing, since I only ask it open-ended questions, and often after trying and failing to find relevant answers on Google.
(On that point: Google has gone downhill a lot IME. It's not just the pile of sponsored links at the top; even the non-sponsored results seem heavily weighted toward whatever interpretation of the query will drive commerce. I also have a thread about Google Maps starting to show cracks.)
I don't say this to start a back-and-forth over anecdotal data, just to say "YMMV" to the caricature some are painting of LLMs as largely inaccurate.