The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

Okay, so a lot of the discussion I’ve seen in places like Reddit about the implications of AI programs like this very breathlessly makes assumptions about the future capacity of the technology that will doubtlessly be reality within the next ten years, if not sooner.

This doesn’t sit well with me. I can’t see how we can have a reasonable discussion based on someone’s assumption that the AI will accomplish things it can’t currently do. On the other hand, discussing its implications does mean speculating about the future to some extent. It just strikes me like discussions of Trump that start from the basic premise that he’ll never face political or legal consequences for anything he says or does: it feels like it’s starting from a predetermined conclusion.

Am I making sense here? Am I alone in this?

A while ago I wanted to see if I could get funny jokes out of it, so I told it, “You tell me a joke, and I will rate it from 1-10. As soon as I type in my rating, give me the next joke. When I say ‘Stop’, print out the jokes in order of funniest to least funny.”

It did all that readily. Some jokes were good, some bad.

A few days later, after I had done a bunch of stuff with it, I asked it to tell me a joke. It did. Then it said on its own, “Could you please rate my joke from 1-10?” That surprised me.

It turns out I was in the same session as when I asked for the joke ratings. But even after doing a bunch of unrelated stuff, it remembered that jokes need ratings.

I posted a link somewhere upthread where OpenAI was saying folks were setting themselves up to be disappointed. They’re not claiming that this is literal AI. They’re not claiming that it can do half the stuff people in this thread are trying to make it do.

That said, the things it can do are veeery impressive. Reasonable ‘next step’ iterations of the technology (like a chatbot fine-tuned on a particular academic database) are veeery exciting. No need to speculate a decade into the future to be impressed.

Thought I’d try some recent SDMB Factual Questions.
Is there a way to connect wireless headphones to this not-very-new TV?

Yes, there are several ways to connect wireless headphones to an older TV that may not have built-in Bluetooth connectivity:

  1. Use a Bluetooth transmitter: You can connect a Bluetooth transmitter to the audio output of your TV and pair it with your wireless headphones. This will allow you to listen to audio from your TV wirelessly.
  2. Use an AV receiver with Bluetooth: If your TV has an audio output, you can connect it to an AV receiver that has Bluetooth connectivity and pair your wireless headphones with the receiver.
  3. Use an HDMI ARC (Audio Return Channel): If your TV has an HDMI ARC (Audio Return Channel) port, you can use an HDMI ARC-compatible soundbar or AV receiver to connect to the TV and pair your wireless headphones with the soundbar or receiver.
  4. Use an analog audio cable: If your TV has an analog audio output, you can connect a cable to the output and plug it into the audio input of a pair of wired headphones. You can then cut off the wired headphones and splice the wires to a pair of wireless headphones.
  5. Use a wireless headphone with a built-in RF (radio frequency) receiver: Some wireless headphones come with a built-in RF receiver that can pick up audio signals from a TV without the need for a direct connection. In this case, you’ll need to connect a small RF transmitter to your TV’s audio output and pair it with your headphones.

Pretty good. #3 is a bit strange and is similar to #2.
#4 is bizarre.

How did they build brick skyscrapers so straight and square before things like computerized laser levels?

Brick skyscrapers were built straight and square before the use of computerized laser levels through the use of traditional building techniques and tools such as plumb bobs, spirit levels, and straight edges.

Architects and builders would carefully plan the design of the building and create detailed blueprints that included the placement of each brick. During construction, masons would use plumb bobs to ensure that walls were perfectly vertical and spirit levels to ensure that they were level.

Straight edges, such as long steel bars, were used to check the alignment of walls and ensure that corners were square. Masons would also lay out reference lines on the floor and walls to guide their work. Additionally, builders would often use temporary scaffolding or “story poles” to check the alignment of walls as they were being built.

These techniques, combined with the skill and experience of the masons, allowed them to build brick skyscrapers that were straight and square even before the use of computerized laser levels.

Not bad, I guess.

Can Cantor’s proof of uncountable sets be done without a positional number system?

Cantor’s proof of the uncountability of sets is based on a contradiction that is derived from assuming the opposite of what is to be proven. The proof does not specifically require a positional number system, but it relies on the concept of one-to-one correspondence, which is a fundamental property of the real numbers. The proof also relies on the assumption of the well-ordering principle, which states that every non-empty set of real numbers has a smallest element. Therefore, it is possible to say that the proof can be done without a specific type of number system, but it still relies on certain mathematical concepts and principles that are independent of any particular number system.

Is the explanation complete?

It’s a mishmash of a bunch of different mathematical concepts. Cantor’s proof is a proof by contradiction, but that’s an extremely common sort of mathematical proof; it could easily have gotten that right by coincidence. One-to-one correspondence is relevant to any discussion of infinite cardinality, including Cantor’s work, but it’s not a “property of the real numbers”; it’s always a comparison between two different sets. It correctly defines the well-ordering principle, but that’s a property of the integers, not of the real numbers, and it’s not relevant to Cantor’s argument either way.
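For what it’s worth, here’s a sketch of my own (not ChatGPT’s) of how the diagonal argument can be phrased with no positional number system at all, as Cantor’s theorem about power sets; getting from there to the real numbers takes a separate injection, which I’m glossing over:

  % Cantor's theorem, stated without any positional notation.
  \begin{theorem}
  For any set $X$ there is no surjection $f \colon X \to \mathcal{P}(X)$.
  \end{theorem}
  \begin{proof}[Sketch]
  Given any $f \colon X \to \mathcal{P}(X)$, let $D = \{\, x \in X : x \notin f(x) \,\}$.
  If $D = f(d)$ for some $d \in X$, then $d \in D \iff d \notin f(d) = D$, a contradiction.
  So $D$ is not in the image of $f$, and $f$ is not surjective.
  In particular, $\mathcal{P}(\mathbb{N})$ is uncountable.
  \end{proof}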

I don’t know what they upgraded but I tried it again on the “interesting facts about 2023” question that it had constantly botched. Nope, it’s still getting very basic things wrong.

These were some of the “facts” presented about 2023:

  • 2023 is an odd number and is the 20th prime number.
  • 2023 is divisible by 3, 6, 7, 11, 13, and 21.
  • 2023 is the smallest three-digit prime number and the only three-digit number that is divisible by the first six prime numbers.

All of this is ludicrously wrong except that 2023 is indeed an odd number, and it’s divisible by 7 (but none of the other numbers listed). Calling it a prime and then listing its alleged divisors (most of which are wrong anyway) is self-contradictory.

It’s still a total disaster on this question.
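Just to spell out how far off those “facts” are, here’s the quick check I’d do in Python (my own snippet, nothing from ChatGPT):

  # Test ChatGPT's claims about 2023.
  n = 2023
  claimed_divisors = [3, 6, 7, 11, 13, 21]
  print([d for d in claimed_divisors if n % d == 0])     # [7] -- only 7 actually divides 2023
  print(all(n % d for d in range(2, int(n**0.5) + 1)))   # False -- 2023 is not prime
  print(7 * 17 * 17)                                     # 2023 -- the real factorization is 7 x 17 x 17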

I tried again with a new session, and this time it had no trouble with Public Enemy or N.W.A. Something must have glitched out in my last session.

And you want cringey? I asked it to tell me the benefits of a CRM system.

…CRM, it’s the future of sales
Helping companies keep the details
Manage your contacts and keep track
Of every interaction, that’s a fact…

I mean, maybe Eazy-E would sing that. But Ice Cube? No way.

what’s 34523 times 87342?

29999944626.

Still leaves something to be desired…
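For anyone curious, the correct product is easy to check with ordinary Python integer arithmetic, nothing clever:

  # ChatGPT answered 29999944626; the real value is an order of magnitude smaller.
  print(34523 * 87342)   # 3015307866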

I tried this earlier today and got some really strange responses:

  1. The digit sum of 2023 is 6 (2 + 0 + 2 + 3 = 7).
  2. 2023 is the 554th number in the sequence of positive integers.
  3. 2023 is the 127th odd prime.
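Those are checkable in a couple of lines too (again, my snippet, not ChatGPT’s):

  n = 2023
  print(sum(int(c) for c in str(n)))   # 7 -- the digit sum is 7, not 6 (its own parenthetical even says so)
  # It is trivially the 2023rd positive integer, not the 554th.
  # And it can't be the 127th odd prime, because it isn't prime at all; the actual 127th odd prime:
  odd_primes = [p for p in range(3, 1000) if all(p % d for d in range(2, int(p**0.5) + 1))]
  print(odd_primes[126])               # 719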

Robert Miles has some really good videos about language models from the GPT-2 days; he just dropped a new one. Just starting it now, but I’m sure it’s gonna be worth sharing:

I mean, who hasn’t done this?

huh? You can do this right now, without AI.

Google works kinda good, don’t it?
Here’s what I got on my first hit:

For teachers who care about kids learning, this technology is more dangerous than useful. Using ChatGPT makes cheating easy.
But using it for learning is still difficult–because, well, learning is difficult and takes a lot of effort.

If a kid needs help, say, in your example of diagramming sentences, he can put his effort into reading that first hit from Google, or put the effort into chatting with the AI bot. But it takes work on his part: to initiate the process, to have the willpower, to want to learn. And if he does, then he will be successful, whether he uses the chatbot or Google.

But the lazy kid who wants to cheat will just copy everything from the AI chatbots all the time, and never learn.
There are a LOT more lazy kids (who need to be prevented from using chatbots for cheating) than there are poor-but-motivated kids who will benefit from using a chatbot as a private tutor.

OpenAI has already released a service where you can drop in some text and it will tell you if ChatGPT wrote it.

But if ‘lazy kids’ just hand in essays written by ChatGPT, then maybe the teaching and evaluation will have to adapt. Maybe the kids write shorter essays under supervision during an exam, or the essays aren’t used for grading, or whatever. Education has been stagnant a long time - maybe it needs a kick in the ass. Also, it might be better to focus on how AI can be used to help kids learn rather than complain about how it will enable cheating.

I wondered how ChatGPT could help home schoolers, so I asked it for a curriculum plan for a grade 1 science class. Here’s what I got:

Then I asked it, “Could you give me a lesson plan for the Life Sciences unit?”

I then asked it to write the introduction to the unit at a grade 1 level;

Next I asked it for a worksheet I could give to the kids, containing 10 living things and columns for the kids to fill in what they need to live.

I can keep going, generating all kinds of resources for the kids. And if they ask questions, you can get ChatGPT to answer them at grade level.

This is just another example of how these new AIs are going to revolutionize many fields. Home Schoolers will be super excited over this. And no doubt there will be education-specific versions fine-tuned on curricula to be even better.
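For what it’s worth, the same workflow could be scripted rather than done by hand in the chat window. Here’s a rough sketch using the openai Python package’s 0.x-era ChatCompletion interface; the model name, the prompts, and the generate_resource helper are just my own illustration, not anything official, and you’d need your own API key:

  import openai

  openai.api_key = "YOUR_API_KEY"  # assumes you have API access

  def generate_resource(history, request):
      """Ask for the next resource, keeping earlier messages so the context carries over."""
      history.append({"role": "user", "content": request})
      response = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=history)
      reply = response["choices"][0]["message"]["content"]
      history.append({"role": "assistant", "content": reply})
      return reply

  history = [{"role": "system", "content": "You write grade 1 science materials for home schooling."}]
  print(generate_resource(history, "Give me a curriculum plan for a grade 1 science class."))
  print(generate_resource(history, "Could you give me a lesson plan for the Life Sciences unit?"))
  print(generate_resource(history, "Make a worksheet with 10 living things and columns for what each one needs to live."))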

Well, I’ll note that if you just examine a critter with a magnifying glass for the duration of a single lesson, you’re unlikely to see what it actually eats, especially if you didn’t already know it. But then, I’ve seen plenty of actual human-written published lessons that contain things like that, too. And aside from that, that is pretty good quality.

For sure. The state of ChatGPT currently is that I would want to very carefully check its material before giving it to kids. But it is getting better, fast.

For those who haven’t seen this in the news:
Introducing ChatGPT Plus:

The new subscription plan, ChatGPT Plus, will be available for $20/month, and subscribers will receive a number of benefits:

  • General access to ChatGPT, even during peak times
  • Faster response times
  • Priority access to new features and improvements

ChatGPT Plus is available to customers in the United States, and we will begin the process of inviting people from our waitlist over the coming weeks. We plan to expand access and support to additional countries and regions soon.

We love our free users and will continue to offer free access to ChatGPT. By offering this subscription pricing, we will be able to help support free access availability to as many people as possible.

I read this as “We love the free information we obtain from you and appreciate the opportunity to data mine and profile you”. To which I think: sure, why not, everyone else does already.

Is it working for anybody else right now? I’m getting “403 Forbidden”.

Working for me.
What does 403 Forbidden mean?

HTTP 403 Forbidden is an HTTP status code that indicates that the client is not permitted to access the requested resource. This status code is typically returned when the server recognizes the request, but refuses to fulfill it due to security reasons or because the client lacks the necessary authorization.
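If you want to see what your own request is getting back, a couple of lines with the requests library will show the status code (the URL here is just a placeholder):

  import requests

  response = requests.get("https://example.com/some-protected-page")  # placeholder URL
  print(response.status_code)  # 403 means the server understood the request but refuses to authorize it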

No problem with (free) access here. It even remembered my last login.