ChatGPT level of confidence

“[leaning] effusive or over-affirming” and “not flattery but a bias towards a supportive tone” are clearly different things, not least because “effusive or over-affirming” is literally what flattery is.

I don’t really know what your question is at this point, but OpenAI wrote a whole thing about the excess sycophancy of their latest model release, what went wrong, and how they rolled it back to a somewhat less effusive version:
https://openai.com/index/expanding-on-sycophancy/

I don’t find that contradictory.

I did a fun experiment a couple of weeks back where I had ChatGPT act as my photography critic, using two very different personas.

I created a persona called “Eleanor” who was a kind and supportive photography critic, and another persona called “Basil” who was a sarcastic and snarky know-it-all who would judge fairly but harshly.

I was even able to produce what I felt were appropriate voices for them, matching their descriptions. Eleanor has a nice, firm, confident grandmotherly voice with an American accent and a slight hint of cigarettes in her past. Basil is a snooty Brit all the way.

Check it out…
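
For anyone who would rather script the same two-critic setup outside the app, here is a rough sketch using the OpenAI Python SDK. The persona prompts are paraphrases of the descriptions above (not the exact instructions I used), and the image path is just a placeholder:

```python
# Rough sketch of the two-critic setup via the OpenAI Python SDK (1.x).
# The persona prompts are paraphrases, and "photo.jpg" is a placeholder
# for whatever photograph you want critiqued.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

PERSONAS = {
    "Eleanor": (
        "You are Eleanor, a kind and supportive photography critic. "
        "Encourage the photographer while still pointing out what could improve."
    ),
    "Basil": (
        "You are Basil, a sarcastic, snarky know-it-all photography critic. "
        "Judge fairly but harshly, and don't soften the verdict."
    ),
}

def critique(image_path: str, persona: str) -> str:
    """Send one photo to the chosen persona and return its critique."""
    with open(image_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": PERSONAS[persona]},
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Please critique this photograph."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
                ],
            },
        ],
    )
    return resp.choices[0].message.content

for name in PERSONAS:
    print(f"--- {name} ---")
    print(critique("photo.jpg", name))
```

The voices, of course, only exist in the app’s voice mode; the API version just gives you the two written critiques side by side.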

Not a question, just sharing some interesting and, I hope, relevant stuff on the topic of how confident we can be in ChatGPT. But I was curious about why the sycophancy levels shot up so high, so thanks for sharing the link.

(My hypothesis - the people who released it are a bunch of clowns who couldn’t see what was right in front of them - remains largely intact.)

4o is pretty good at the “Drop in on a random Street View location and guess where you are in the world based on visual cues” Google Maps game:

Edited: I didn’t realize sharing the link doesn’t include the images, so here are screenshots:
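
If you want to try the same guessing game through the API rather than by pasting screenshots into the app, here is a minimal sketch. It assumes gpt-4o image input via the chat completions call, and the screenshot URL is a placeholder, not a real image:

```python
# Very rough sketch of the Street View guessing game via the API.
# The screenshot URL is a placeholder; point it at any reachable image.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SCREENSHOT_URL = "https://example.com/streetview-screenshot.jpg"  # placeholder

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "This is a random Street View screenshot. Based only on "
                         "visual cues (signage, road markings, vegetation, which "
                         "side of the road traffic drives on), guess the country "
                         "and, if you can, the region."},
                {"type": "image_url", "image_url": {"url": SCREENSHOT_URL}},
            ],
        },
    ],
)
print(resp.choices[0].message.content)
```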