The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

Yes, even then. In part because the response to any event already has some of the expectation built into it, and it’s impossible to predict that interaction.

My company has had a number of quarters where the stock went down after beating estimates. Because–presumably–we actually didn’t beat estimates by enough. There was somehow already the expectation that we’d beat estimates, and we didn’t do well enough on top of that. The reverse happens too: the stock price goes up in spite of not beating estimates, because the rumor mill or whatever was expecting worse.

In short, none of this cause-and-effect is really predictable even in “obvious” cases, let alone ones where the relationship is less certain.

Let’s not get hung up on the stock’s reaction or lack thereof. The fact remains that Google put on a demo bragging about their AI, and it shows the AI giving the wrong answer. That’s ridiculous.

How can you presume that? It was probably because of something else completely unrelated to your company.

Well, it’s not so silly that it gave the wrong answer. ChatGPT gives lots of bad answers, too. Mostly just embarrassing that they were scrambling so quickly that they couldn’t even vet their own demo.

It could have been anything. That was my whole point–that a cause-and-effect relationship has not been established. That includes cases where the “obvious” thing has happened.

My point is that if it was completely random you couldn’t presume anything, as you did.

I would paraphrase your response as “you can’t presume a link between those events.” Then you follow with “I presume the price movement is linked with this other thing.”

Right. Giving wrong answers is the reality of these chatbots today. But showcasing that is absolutely an unforced error. It doesn’t make me doubt their technology, but it does make me doubt their commitment to doing it right, versus what is obviously a rush to release.

I said “presumably” to indicate that it was just one possibility out of many (just the one I found most likely).

The outcome isn’t necessarily random, but it is unpredictable. So anyone claiming–or deceptively implying without quite saying so–a cause-and-effect relationship is doing so without real evidence.

Cool. I’ll be sure to amend all future posts with “this may be what I think, but I am not omniscient and other things might be true instead.”

Don’t want to take this digression too far, but it’s a reasonable presumption because it’s very, very common for a stock to react counterintuitively after an earnings announcement or other news, and a very common reason for this is that many major investors already knew (more or less) what was going to be announced. In market parlance, that information had already been “discounted”.

You obviously got the idea from the multiple news articles claiming the relationship (like the one I cited). If I’m wrong about that, sorry. But financial news especially shows this pattern of “X plummets/booms after Y event” and they never establish the cause and effect.

Yes, it is reasonable. However, he just got done explaining how presumptions are impossible to make.

If I wanted to convey certainty, I’d simply have said:
Because we actually didn’t beat estimates by enough.

But since my point was to convey uncertainty and unpredictability, I instead said:
Because–presumably–we actually didn’t beat estimates by enough.

Furthermore, even if that were the correct explanation, it would still be useless to cite as a cause, because it’s impossible to know what the true expectation was. It’s unlikely to be the same as the estimates. But whether it’s better or worse depends on investor psychology and a million other things.

nevermind

If you are given a service for free, then you are the product. :slight_smile:

Is there a reliable thing you can type that will get ChatGPT to always put a program in a code window? It frequently will just produce its code for me in the regular window, losing all the nice indentation and formatting.

Have you asked it to put it in a code window? I’ve done that for ASCII art where it sometimes puts it in a code window, sometimes not, and sometimes half and half, and that has worked.

Yes, I’ve tried many different phrasings of code window and code block, and more than half the time it starts off without one. Maybe half the time when it starts with no code block, it suddenly notices “hey, I’m doing code” partway through and does the rest in the window. Occasionally it will flip-flop back and forth between regular output and a code window.
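Since the chat UI just renders markdown, one blunt workaround when a reply comes back as plain prose is to re-fence the pasted text yourself so the indentation survives. This is just a sketch of that idea (the `ensure_fenced` function name and behavior are mine, not anything ChatGPT-specific):

```python
def ensure_fenced(reply: str, lang: str = "") -> str:
    """Wrap a plain-text reply in a markdown code fence if it has none.

    If the reply already contains a ``` fence anywhere, leave it alone;
    otherwise fence the whole thing so whitespace and indentation are
    preserved when the markdown is rendered.
    """
    if "```" in reply:
        return reply
    return f"```{lang}\n{reply.rstrip()}\n```"


# Example: a reply that came back as plain text with code in it
raw = "def hello():\n    print('hi')"
print(ensure_fenced(raw, "python"))
```

Crude, but it rescues the formatting when the model forgets the fence halfway through.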

…I’ve just jumped into it and asked it this:

ME: write a single sentence locked room murder mystery

CHAT: When the police arrived, they found the victim dead in a locked room, with no signs of forced entry, and a broken window in the corner that was too small for anyone to have entered or escaped.

ME: Thanks can you try again, but this time reveal the murderer

CHAT: When the police arrived, they found the victim dead in a locked room with no signs of forced entry, and a broken window in the corner that was too small for anyone to have entered or escaped, except for the murderer’s cat, which was seen leaving the room shortly before the police arrived.

Made me LOL.

It’s been a long time since I was able to log into ChatGPT. I assume they’ve massively scaled back how many concurrent users they’ll allow, the better to push for paid plans.