The next page in the book of AI evolution is here, powered by GPT-3.5, and I am very, nay, extremely impressed

Sometimes if you ask it to repeat an answer, you will get a different one. When I did that this morning, it said it had already answered the question and would not repeat it.

Isn’t there a bit of a random seed involved? You never get the exact same answer (down to the wording) twice with the thing (or at least I’ve not noticed repeats). There are times it will get complicated questions correct, and then I’ll try in a new window and it’ll make a mistake along the way.

For what it’s worth, when I entered your question verbatim, I got “100 machines to produce 100 devices in 5 minutes.” as the end of the answer, without any further prompting.

That inconsistency is interesting and rather disturbing – I can understand it giving differently worded responses but randomly getting questions of straightforward logic incorrect is bizarre.

ChatGPT is (relatively speaking) in full moron mode this morning. Maybe it’s hung over!

In the very same session as above:

20 machines can produce 10 devices in two minutes. How long will it take 100 machines to produce 100 devices?

We can start by finding the production rate per machine per minute:

20 machines can produce 10 devices in 2 minutes, so each machine can produce 10/(20*2) = 1/4 device per minute.

To find out how long it will take 100 machines to produce 100 devices, we can use the following proportion:

20 machines / 2 minutes = 100 machines / x minutes

where x is the number of minutes it will take for 100 machines to produce 100 devices.

We can solve for x by cross-multiplying:

20 * x = 2 * 100

x = 10

Therefore, it will take 100 machines 10 minutes to produce 100 devices.

You got the production rate per machine per minute correct, but did you forget the formula you previously gave just a bit earlier? It was T = D / (M * P), where T is the time being asked about, D is the number of devices, M the number of machines, and P the production rate, which you correctly computed as 0.25 devices per machine per minute. That gives us T = 100 / (100 * 0.25). Can you guess the answer now?

You’re absolutely right, I apologize for the oversight. Thank you for correcting me.

Using the formula T = D / (M * P), we can find the time it will take for 100 machines to produce 100 devices:

T = 100 / (100 * 0.25) = 400 / 100 = 4 minutes.

Therefore, it will take 100 machines 4 minutes to produce 100 devices. Thank you for pointing that out.
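For anyone who wants to sanity-check the arithmetic, here is a minimal Python sketch of that same formula; the names simply mirror the D, M and P from the exchange above.

```python
# T = D / (M * P): time = devices / (machines * rate per machine per minute)
def time_needed(devices, machines, rate_per_machine):
    return devices / (machines * rate_per_machine)

# Rate from the original statement: 10 devices / (20 machines * 2 minutes) = 0.25
rate = 10 / (20 * 2)

print(time_needed(100, 100, rate))  # 4.0 minutes, not the 10 it first claimed
```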

That’s bizarre. I’ve never seen ChatGPT respond that way. I often ask it to repeat important things to make sure it really groks what’s going on in my story scenarios, and to remind it when those important things will soon be passing out of its context window. Are you sure you weren’t talking to BingChat instead?

Even if being able to produce true randomness is somehow necessary for “consciousness” or “intelligence” or whatever ill-defined notion we’re trying to achieve, I think it misses the point to say that “computers can’t produce true randomness”. Nobody really cares whether a computer, strictly so defined, can achieve “consciousness”, or whatever. What we care about is whether a technological device can achieve whatever-it-is. And if randomness turns out to be necessary, it’s really easy to take a computer and bolt a true-random-number-generator to it.

I asked Bing Chat, and here is its answer:

It got the right answer, but through a fairly convoluted process. For me, the logic would have been,

  • If 20 machines can produce 10 devices in two minutes, 100 machines can produce 50 in two minutes.
  • Therefore, since twice the devices takes twice the time, 100 machines can make 100 devices in 4 minutes.

But it did work the problem and got the correct answer.

I’ve explained it to you twice now:

But if you indicate what you find problematic, I’ll try and clarify.

That’s getting the logic backwards. The arguments were that we can’t put bounds on what emerges, and that the relevant physics are computable; these points are countered by showing that there are some things (which may consequently include consciousness) that can’t emerge, and that there are some things (which again may include consciousness) that can’t be simulated. It’s intended as a counterexample to the aforementioned arguments: if consciousness were like randomness, then it likewise couldn’t emerge, and couldn’t be simulated. That’s true whether or not consciousness in fact is like randomness in the relevant sense. Thus, showing that the brain does perform such functions is entirely beside the point (although of course in my model, it does).

Beware of an annoying quirk in the ChatGPT interface. I gave it a thumbs-down on the very first version of the question (5 machines produce 5 devices in 5 minutes, how long for 100 machines to produce 100 devices). It then popped up a screen with a revised answer, and I said yes, that’s better. Then I was back in the chat, except the whole lengthy conversation with many different back-and-forths, some of which I documented here, had completely disappeared! All that was left was the one question and the one revised answer. Very annoying; there was lots of good stuff there.

I guess he wasn’t making an argument that true randomness is necessary, but offered it as an example of something computers can’t do, suggesting that there may be other things they can’t do which would preclude consciousness.

And it’s true that there are fundamental differences between human brains and these AIs. For example, the AI is a ‘fully connected’ network, where every node on a layer is connected to every node on the previous layer. ‘Neurons’ don’t exist except as a logical construct, and parameters are more like synapses. In the human brain, a neuron can be connected to up to 1,000 other neurons through synapses. It’s a slightly different architecture.
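To make the “fully connected” point concrete, here is a minimal sketch of a single dense layer in plain NumPy; it’s just the building block, not any particular model’s actual architecture. Every output unit has one weight per input unit, so the parameters play roughly the role the post assigns to synapses.

```python
import numpy as np

# A fully connected (dense) layer: every unit in this layer sees every unit
# in the previous layer; weights[i, j] is the "synapse" from input j to output i.
def dense_layer(inputs, weights, biases):
    return np.maximum(0, weights @ inputs + biases)  # ReLU activation

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # 4 units on the previous layer
W = rng.normal(size=(3, 4))   # 3 x 4 = 12 parameters, one per connection
b = np.zeros(3)

print(dense_layer(x, W, b))   # activations of the 3 units on this layer
```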

Also, there are plenty of chemical and biological things going on around the brain that have no counterpart in an AI. Hormones, the limbic system, variations in energy levels from eating or not, a whole bunch of complex structures we don’t understand, etc. Brains run constantly, self-evaluating, pruning, retrospecting, etc. The AIs just run linearly when asked a question, then stop.

So sure, plenty of differences between brains and AI. The big question, though, is ‘does it matter’? My answer to that would be: so long as these things keep improving and keep having increasingly sophisticated capabilities emerge, we have no evidence that it does. They have already blown through many goalposts that people said they would never reach, such as exhibiting theory of mind or being able to do calculus.

At some point maybe we’ll hit a plateau and the things won’t get any better, while still stopping short of ‘true’ AGI. At that point it will make more sense to talk about what they lack that keeps them from being intelligent. But so long as they keep improving and showing new higher-level emergent capabilities, it seems premature to talk about what they must not be able to do because they aren’t brains.

No, same old GPT. That was a first for me too.

Yeah, we got that I think. It’s just meaningless. There are an infinite number of things that can’t emerge from a human brain, either. Far more than the things that can. And anything in the universe that emerges must emerge within a set of rules, as emergence needs certain conditions for it to happen at all. That means the set of things that could possibly emerge from a certain condition will always be a tiny portion of all the things that could emerge under different conditions or rules.

And yet everything about us, including our brains, is emergent. I still have no idea why your argument means that the same can’t happen in an AI.

@wolfpup explicitly asked where the ‘almost nothing’ came from, so I reiterated the explanation; that way, if there’s still anything unclear, I can try to elaborate.

Again, I’ve never made an argument that the same couldn’t happen with AI. I just pointed to the possibility of that being the case, against your contention that basically anything could emerge, and @Dr.Strangelove’s argument that we can always just explicitly simulate the brain to produce artificial consciousness. The point thus was just that it’s an open question, nothing more—and a genuine question at that.

If you’re asking the same question twice in one session, you won’t necessarily get the same answer, because the prompt is not just your most recent input; the prompt is actually the entire conversation so far, with your most recent input appended to it.
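A rough sketch of what that means in practice; call_model() here is a hypothetical stand-in for whatever actually generates the reply, the point being only that the full history gets resent every turn.

```python
# Each turn, the model sees the ENTIRE conversation so far, not just the new message.
conversation = []

def call_model(messages):
    # Hypothetical stand-in for the real model call; a real client would send
    # the full `messages` list to the API and return the generated reply.
    return f"(model reply based on {len(messages)} prior messages)"

def ask(user_message):
    conversation.append({"role": "user", "content": user_message})
    reply = call_model(conversation)  # receives the whole history
    conversation.append({"role": "assistant", "content": reply})
    return reply

ask("20 machines can produce 10 devices in two minutes. "
    "How long will it take 100 machines to produce 100 devices?")
ask("Can you repeat that?")  # its own earlier answer is now part of the prompt
```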

The biggest risk there is not what they might try to do with it, but rather that they might try to use it and lose control of it.

Exactly. When your agenda is to best your opponent in a race to develop the strongest AI, your concerns for world safety will be minimal, especially if you are a crazy sort to begin with. Maybe AI won’t achieve consciousness (and therefore may pose minimal risk), but OI surely will (I believe), and if the crazies get a hold of that, all bets are off. It’s letting the cat out of the bag, and kitty has the potential to wreak devastating havoc. :cat2:

Was anyone actually saying that they’d never be able to do calculus? I’m skeptical of claims about the current calculus abilities of ChatGPT specifically, but we’ve had computers that can do calculus for decades. And we probably could have had a computer that could ace the AP calculus test for nearly that long, if anyone had found it worthwhile to program a version of Maple or whatever to parse the questions. Heck, maybe Wolfram Alpha, as-is, could have done it, except that I don’t know if anyone ever actually tried giving it the test.

It doesn’t even need to be conscious to pose a huge threat; it just needs to be capable, and act as if it is motivated by goals that are important to it.

True.

The reason (I believe) that there have been no nuclear warheads lobbed between nations since WWII is second-strike capability (MAD). But with AI there is no second-strike capability, and hence no deterrence.

No one said that computers can’t do calculus. What some have said is that Large Language Models would never be able to do calculus. Hell, some said they wouldn’t even be able to do addition and multiplication, because they are ‘just statistical next word prediction algorithms’.

When did I say that ‘basically anything could emerge’ from a Large Language Model? Also, I hope it goes without saying that we all understand that there are limits to emergence under any system of rules.

What I’m pretty sure I’ve said is that we don’t know what will emerge or when, as it’s unpredictable and related to scale. That doesn’t mean anything can emerge. Emergence happens into the adjacent possible. Things that aren’t possible, or aren’t possible under a certain set of rules, won’t emerge under those rules.

For example, flocking behaviour in birds emerges from a small set of rules. By just inspecting the rules you cannot tell what shapes and patterns will emerge, or why. But that doesn’t mean the birds will gain the ability to draw faces in the sky, or produce patterns that compute complex things. Not enough birds, rules too simple, etc. But that knowledge does not allow us to predict what WILL emerge, or why.
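For the curious, the classic “boids” model really is just three local rules per bird: steer toward nearby neighbours, match their heading, and avoid crowding them. A stripped-down sketch (the numbers are arbitrary tuning constants, not anything canonical):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
pos = rng.uniform(0, 100, (N, 2))   # positions
vel = rng.normal(0, 1, (N, 2))      # velocities

def step(pos, vel, radius=10.0):
    new_vel = vel.copy()
    for i in range(N):
        dist = np.linalg.norm(pos - pos[i], axis=1)
        near = (dist < radius) & (dist > 0)
        if not near.any():
            continue
        cohesion   = pos[near].mean(axis=0) - pos[i]    # head toward neighbours' centre
        alignment  = vel[near].mean(axis=0) - vel[i]    # match neighbours' velocity
        separation = (pos[i] - pos[near]).sum(axis=0)   # back away from close neighbours
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.005 * separation
    return pos + new_vel, new_vel

for _ in range(100):
    pos, vel = step(pos, vel)
```

None of the flock-level shapes appear anywhere in those three rules, which is exactly the point about not being able to read the emergent behaviour off the rules.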

I don’t know about that; there are some cellular automata with very simple rules that can, in fact, compute anything a Turing machine can compute. And we have seen some things in nature that really do act as cellular automata. I don’t think it’s ever been observed in flocking behavior, but I don’t think we can rule out that it could be, given a sufficiently large number of birds (or boids).
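Rule 110 is the usual example: a one-dimensional cellular automaton whose entire rule is an eight-entry lookup table, yet it has been proven Turing complete (Cook, 2004). A minimal sketch:

```python
# Rule 110: a cell's next state depends only on itself and its two neighbours.
RULE = 110
TABLE = {(a, b, c): (RULE >> (4 * a + 2 * b + c)) & 1
         for a in (0, 1) for b in (0, 1) for c in (0, 1)}

def step(cells):
    n = len(cells)
    return [TABLE[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

row = [0] * 63 + [1]   # start with a single "on" cell
for _ in range(30):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```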