The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

I mean, yes?

Uh, how so?

I haven’t claimed anywhere that you do, but a substantial fraction of the people concerned with the other issues do.

Bostrom (2002) discusses human extinction scenarios and lists superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

All arguments about AGI involve thought experiments, because AGI doesn’t exist yet.

That wasn’t to indicate that they take it seriously, but to show that it’s spread beyond being a ‘silly post on a discussion board’. Again, it’s been featured regularly by mainstream news outlets such as BBC.com, Slate, and Business Insider, and has made its way into scholarly papers. There was even a recent thread right here, and it’s been brought up in the replies to two recent Straight Dope columns.