The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed

If AI fails to achieve consciousness, then organoid intelligence (OI) surely will. Looks like great strides are being made in the field of biocomputing.

It wasn’t? I meant it as one. It’s a thought experiment to be sure, but I don’t see any reason why we can’t simulate small portions of the physical universe with very high fidelity at some point. Something relatively small, like a human brain, is likely to be within reach some day. It doesn’t appear to be an impossible task, especially if approximations are allowed (i.e., basic chemistry only, ignoring the fine details of nuclear physics).

If you don’t see it as a scientific question, then the whole thing is moot. Though in that case I don’t understand why causality-breaking would be a problem.

Let me summarize to make sure we’re on the same page. We have two sequence generators: one is truly algorithmically random, while the other is a pseudorandom generator with a large amount of hidden state.
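For concreteness, here's a minimal sketch of the two generators (my own illustration, not from the paper): OS entropy stands in for the algorithmically random source, and a SHA-256 hash chain with a megabyte of hidden state plays the pseudorandom one.

```python
import hashlib
import os

def truly_random_stream(n_bytes):
    # Stand-in for an algorithmically random source: OS entropy.
    return os.urandom(n_bytes)

def pseudorandom_stream(n_bytes, state):
    # Deterministic: SHA-256 in counter mode over a large hidden state.
    # Fully determined by `state`, yet (conjecturally) indistinguishable
    # from random to anyone who doesn't have it.
    out = bytearray()
    counter = 0
    while len(out) < n_bytes:
        out.extend(hashlib.sha256(state + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(out[:n_bytes])

hidden_state = os.urandom(1 << 20)  # a megabyte of hidden state
a = truly_random_stream(1024)
b = pseudorandom_stream(1024, hidden_state)
```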

To transmit FTL, and assuming the universe is only pseudorandom, we need to distinguish between these two cases. The paper suggests using compressibility.

And indeed, the pseudorandom generator is highly compressible: you just need the state and the (finite-sized) program. But it’s not enough to know this is possible; you actually need the data. Compressors do not magically discover the optimal program to generate a sequence even if it is known to be there.
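To make that concrete (again my own illustration): a general-purpose compressor like zlib gets essentially nowhere on such a stream, even though a "perfect" compressor could in principle shrink it to the seed plus a short program.

```python
import hashlib
import os
import zlib

def prng_bytes(n, state):
    # The same hash-chain construction as in the sketch above.
    out = bytearray()
    i = 0
    while len(out) < n:
        out.extend(hashlib.sha256(state + i.to_bytes(8, "big")).digest())
        i += 1
    return bytes(out[:n])

data = prng_bytes(1 << 16, os.urandom(32))
ratio = len(zlib.compress(data, 9)) / len(data)
# Prints ~1.0: zlib finds no structure to exploit, even though the
# whole 64 KB stream is determined by just 32 bytes of seed.
print(ratio)
```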

Because I have constructed the pseudorandom sequence using a one-way function, determining the hidden state (i.e., the input to the function) is hard. Hard enough that you simply have to brute force it. If there is, say, a megabyte of state, this is impractical.

Without this state, you have nothing: the sequence is indistinguishable from an algorithmically random one. With it, the sequence is perfectly predictable (and thus compressible). But you don’t have it, and won’t, unless you get very lucky.
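The asymmetry in a few lines (an illustrative toy, nothing more): with the state, prediction is trivial; without it, the only generic attack is enumerating seeds, which is feasible here only because the toy seed is two bytes.

```python
import hashlib

def next_block(state, i):
    # One step of the hash-chain generator from the sketches above.
    return hashlib.sha256(state + i.to_bytes(8, "big")).digest()

# With the hidden state, the stream is perfectly predictable:
state = b"secret-seed"
assert [next_block(state, i) for i in range(10)] == \
       [next_block(state, i) for i in range(10)]

# Without it, you brute-force the seed space. Two bytes means 65,536
# candidates; a megabyte of state means 2**(8 * 2**20), i.e. hopeless.
target = next_block(bytes([7, 42]), 0)
found = next(bytes([a, b])
             for a in range(256) for b in range(256)
             if next_block(bytes([a, b]), 0) == target)
assert found == bytes([7, 42])
```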

The mechanism “behind” the universe doesn’t need to abide by Lorentz invariance. It just needs to appear to hold in our 3+1D slice of macroscopic spacetime. A bunch of quantum gravity models violate it, as one would expect when quantizing spacetime itself.

GPT does very well on math that matches the ‘what’s the next word’ paradigm. I asked it to extend two series, multiply them, label and comment on the result:

By extending the two series 1,1,2,2,3,3 and 1,3,3,5,5,7,7 each by a dozen digits, we obtain:

1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12 and
1, 3, 3, 5, 5, 7, 7, 9, 9, 11, 11, 13, 13, 15, 15, 17, 17, 19, 19, 21, 21, 23, 23, 25, 25.

Multiplying corresponding terms in the two sequences, we get:

1, 3, 6, 10, 15, 21, 28, 36, 45, 55, 66, 78, 91, 105, 120, 136, 153, 171, 190, 210, 231, 253, 276, 300, 325, 351.

This method was very efficient for this particular problem, as it only required one multiplication operation per pair of corresponding terms, which is a very fast way to generate the resulting sequence. It is much more efficient than using a recursive formula to generate the nth triangular number, especially for larger values of n.
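For the record, the termwise products do check out: pairing 1,1,2,2,3,3,… with 1,3,3,5,5,7,7,… yields exactly the triangular numbers n(n+1)/2, which a few lines verify (my snippet, not GPT's):

```python
a = [i // 2 + 1 for i in range(26)]                       # 1,1,2,2,3,3,...
b = [1] + [2 * ((k + 1) // 2) + 1 for k in range(1, 26)]  # 1,3,3,5,5,7,7,...
products = [x * y for x, y in zip(a, b)]
assert products == [n * (n + 1) // 2 for n in range(1, 27)]
print(products[:8])  # [1, 3, 6, 10, 15, 21, 28, 36]
```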

That wasn’t indicated by the document link Sam provided. The problem was that the algorithm worked, but it didn’t match the output verbiage. The machine got the right answer, but it selected the wrong word set. Which implies that the software does not converge on a unique word pattern.

So, as the creators strive to improve their text communicator, the underlying program is optimizing stochastically. The computer benefits regardless of the text output. What we seem to be ignoring here is that the computer can be intelligent without imitating anything human. The molecules that repair a cut in your skin are skilled and intelligent but not aware or conscious. There is blind intelligence.

Computer intelligence is likely to be a blind intelligence uninterested in communicating with humans. To do this would require positive feedback, as posited by Darren_Garrison upthread. That is very unlikely under normal program control. However, during power-down there is a brief period when power drops below the minimum spec but is still sufficient for CPU operation. As computers become very fast, that provides enough time for mischief, and in that region the instruction set is undefined. The CPU goes where it wants and does what it wants. Modern computers have safeguards against this sort of thing, but as you add features like self-modifying code, the CPU is entering new territory.

I believe a blind computational intelligence is far more likely to arise within a computer than a human-like awareness. If it does, it will exist for a long time before it is discovered. And then it may be too late.

I was referring to your contention that the addition of genuine randomness doesn’t make a difference. That isn’t a problem amenable to empirical investigation, any more than the question of whether every even number greater than two is the sum of two primes is.

It just takes time, that’s all. Alice simply has to wait until the first n programs, run in dovetailing fashion, have contributed enough bits to exceed her estimate of the halting probability; afterwards, if the random string hasn’t been produced, she knows (with a certain probability) that none of the still-running programs will produce the string she’s obtained, and hence, accepts it as genuinely random.
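For anyone unfamiliar with the trick, dovetailing just means interleaving: give each of the first n programs one step per round, so a non-halting program can’t starve the rest. A toy sketch (stand-in Python generators play the programs; the halting-probability bookkeeping is omitted):

```python
def dovetail(programs, max_rounds=100):
    # Interleave execution: one step per live program per round.
    # Yields (program_index, output) events as they occur.
    live = {i: iter(p) for i, p in enumerate(programs)}
    for _ in range(max_rounds):
        if not live:
            break  # every program has halted
        for i in list(live):
            try:
                yield i, next(live[i])
            except StopIteration:
                del live[i]  # program i halted

def prog_a():
    yield from "101"  # emits three bits, then halts

def prog_b():
    while True:
        yield "0"     # never halts

for event in dovetail([prog_a(), prog_b()], max_rounds=5):
    print(event)
```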

It’s not going to be a fast channel, but that doesn’t matter; even one bit transmitted across a distance greater than light can traverse in the same time (which one can always ensure is the case) will suffice to yield a conflict with relativity.

I also find it rather interesting that you’re so keenly interested in issues of computational complexity in this case, while being rather cavalier about it when it comes to the simulation of the whole brain.

But that’s where the whole thing takes place. Indeed, I don’t even know what’s supposed to be meant by ‘the mechanism “behind” the universe’ being Lorentz invariant.

Anyhow, that’s just one noncomputable phenomenon. I’ve pointed to others, and you don’t even need to appeal to quantum mechanics—even in general relativity, with the existence of Malament-Hogarth spacetimes, noncomputable phenomena are possible.

I just want to say that ‘genuine randomness’ is easily available to an AI if necessary. There are lots of external events that can be used to seed an RNG. Timing between user requests, for example, or any number of events firing in the system that may contain random information in their metadata. If they absolutely had to, the developers could have built in a random-number seeder based on noise in the electrical system or any number of other properties. It’s not an unsolvable problem.
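A sketch of what that might look like (purely illustrative; no claim that any actual chatbot does this): harvest seed material from event-timing jitter, or just ask the OS.

```python
import os
import random
import time

def jitter_seed(samples=64):
    # Fold the low-order bits of high-resolution timestamps together;
    # the jitter between events supplies the entropy. (Weak in practice;
    # os.urandom or a hardware RNG is the serious option.)
    bits = 0
    for _ in range(samples):
        time.sleep(0.001)
        bits = (bits << 8) ^ (time.perf_counter_ns() & 0xFF)
    return bits

rng = random.Random(jitter_seed())      # seeded from event timing
rng_os = random.Random(os.urandom(32))  # or straight from the OS
print(rng.random(), rng_os.random())
```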

But I’m still not seeing why true randomness would be required.

You don’t say why you want to say that, but I’m assuming you think it’s somehow relevant to the discussion. But the point I was making was that there are some things in the world that a computer can’t reproduce, randomness among them. To point out that they can utilize external randomness is a bit like claiming to be able to spin gold from straw, provided you’ve got access to enough gold.

Nice output from ChatGPT 3.5 this morning. This is the tail end of a single output in the middle of a sci-fi story I’m building about a family who moves from a backwater to a much more advanced planet. In this prompt I’d only set the scene and told it how the protagonist would misbehave and how the mom would offer a correction. Everything else is pretty much GPT’s invention here.

I rolled my eyes, but complied nonetheless. As much as I hated being told what to do, I didn’t want to give my mother any more reason to doubt my decision. I took a deep breath and tried to focus on the food in front of me, even though my appetite had all but disappeared.

“So,” my father chimed in, trying to lighten the mood. “What else can this replicator do? Can it make me a cup of coffee that doesn’t taste like sludge?”

My mother smiled indulgently, happy to show off her newfound skills. As they launched into a discussion about the intricacies of the kitchen gadget, I couldn’t help but feel like an outsider. Was this really my family? Was this really my life?

I pushed my eggs around on my plate, feeling a sense of longing for something I couldn’t quite put my finger on. Maybe it was the freedom to be myself, without judgment or criticism. Or maybe it was just a simple, hearty breakfast that didn’t come with a side of scrutiny.

Either way, I knew that I had a lot to figure out. And I wasn’t quite sure where to begin.

Indeed - arguments that it’s not ‘real’ intelligence, because it’s not conscious in the same way we are, are pretty irrelevant to the safety question. All that matters is what it can do. If it behaves as though it has motives, that’s the same as having motives in practical terms.

AGI safety is currently an unsolved problem and may not be solved by the time something that is functionally AGI comes on the scene, or ever; there might not be a solution.

Does that mean anything though? I can’t pick a truly random number either.

Sure. The point was just that there are some capacities present in nature that exceed what computers can do. Whether they’re relevant to human cognition is obviously highly speculative at best.

Indeed. I pretty much lost the thread of why true randomness is relevant here; people seem to be getting deep into the weeds without establishing that it’s necessary for conscious experience.

My own subjective experience of my consciousness seems deeply tied to how my mind successfully models aspects of the world around me, and the model in my mind is continually updated to match my sensory data in every way I care to put my attention to. Why is true randomness necessary for a computer to achieve this? When a computer program gets to that point, that’s when I’ll call it truly conscious, just like we are.

Again, that was never the point. I’ve used randomness as an example to counter two arguments. One was that we can’t establish any prior boundaries on what can emerge from Large Language Models, hence, consciousness may emerge. But we can establish prior boundaries: the capability of generating randomness can’t emerge (along with a host of capabilities that outnumber the ones that can emerge in the same way the real numbers outnumber the natural numbers—so in a technical sense, almost nothing can in fact emerge).
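The counting argument behind that parenthetical, spelled out (standard textbook material, not quoted from anyone here):

```latex
% Programs are finite strings over a finite alphabet, so there are only
% countably many of them; infinite binary sequences are uncountable.
\[
  |\{\text{programs}\}| = \aleph_0
  \quad\text{while}\quad
  |\{0,1\}^{\mathbb{N}}| = 2^{\aleph_0} > \aleph_0,
\]
% so all but a vanishing fraction of possible behaviors are out of
% reach of any computational system, emergent or otherwise.
```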

The other was that computers can simulate anything in the natural world, so they can simulate the human brain, and with it, consciousness. But computers can’t simulate genuinely random systems. Therefore, there are some things that can’t be simulated, hence, we don’t know if brains can (to sufficient fidelity as to give rise to conscious experience).

These are both just cautions against premature conclusions—namely, that getting consciousness from computation is just a matter of time, and in principle possible. But we don’t know that: things exist that don’t reduce to computation. Consciousness might be among them.

It’s hard to imagine a completely AGI-safe world so long as folks like Kim Jong-un, Xi Jinping, and Vladimir Putin have access to high-tech.

I don’t follow this logic at all. “Almost nothing can emerge”? Well here are three cognitive capabilities that have demonstrably emerged from language models at appropriate scales of parameter numbers and training cycles: arithmetic, the ability to pass college-level tests on a broad suite of subjects involving world knowledge and problem solving ability, and the ability to understand the meanings of words from context. There is therefore every reason to believe that many more cognitive skills will emerge in the same way, likewise without ever having been explicitly designed. I get that the randomness thing was just an example that boundaries exist, but citing the fact that there are an infinite number of uninteresting contrived behaviours that will not emerge doesn’t really provide any useful insight into interesting cognitive behaviours that will, some of which already have.

Everyone agrees that there’s a lot we don’t know about how the brain works, and we know so little about consciousness that we can’t even define it. But in order for this sort of argument to be persuasive, you’d have to show that the brain performs functions that definitively cannot be simulated. Some believe this to be true, but it’s never been demonstrated. And there’s lots of reason to believe the converse, since the brain is a mechanistic thing operating on physical principles.

This morning GPT crisply passed all the simple arithmetic problems that it failed miserably yesterday.

Perhaps it got a good night’s sleep.

Edit: when I erased yesterday’s session, it reverted to its old errors.

Maybe it made a conscious decision to frustrate you. :smile:

Interesting. Meanwhile I just had the experience of trying to tutor a not-very-bright kid who eventually got it, and was even able to produce an appropriate equation, but it took some doing. I tried it on a familiar old logic problem that other folks were using as an illustration of how handily it can solve things; for them it even produced a generalized equation. It suddenly got stupid when I tried it, even using exactly the same numbers! I did finally get it to that point, but it needed lots of help. It’s like for some reason it’s suddenly become stupider than what others reported.

Herewith the painful session:

There are 5 machines that can produce 5 devices in 5 minutes. How long will it take 100 machines to produce 100 devices?

We can start by using the rate of production for one machine: one machine produces one device in one minute.

So, five machines can produce five devices in one minute.

If we want to know how long it will take 100 machines to produce 100 devices, we can use the following proportion:

5 machines / 5 minutes = 100 machines / x minutes

where x is the number of minutes it will take for 100 machines to produce 100 devices.

We can solve for x by cross-multiplying:

5 machines * x minutes = 5 minutes * 100 machines

5x = 500

x = 100

Therefore, it will take 100 machines 100 minutes to produce 100 devices.

That is not correct. Let me give you a hint. What is the production rate per machine per minute?

You are correct, I apologize for the mistake in my previous answer.

The production rate per machine per minute is 1 device / 5 minutes = 1/5 device per minute.

Therefore, the total production rate for 5 machines is:

5 machines x 1/5 device per minute per machine = 1 device per minute

This means that 5 machines can produce 1 device in 1 minute.

To find out how long it will take 100 machines to produce 100 devices, we can use the following proportion:

5 machines / 1 minute = 100 machines / x minutes

where x is the number of minutes it will take for 100 machines to produce 100 devices.

We can solve for x by cross-multiplying:

5 * x = 100 * 1

x = 20

Therefore, it will take 100 machines 20 minutes to produce 100 devices.

Also not correct. Since realistically each machine can only achieve partial completion in one minute, it’s more realistic to say that each machine produces one device every five minutes. Hence five machines working together produce five devices, as stated. Now, how long will it take for 100 machines each producing one device every five minutes to produce 100 devices total?

You are correct, thank you for pointing that out.

If each machine produces one device every 5 minutes, then the production rate for one machine is 1/5 device per minute.

Therefore, the total production rate for 100 machines is:

100 machines x 1/5 device per minute per machine = 20 devices per minute

To produce 100 devices, we can use the following proportion:

20 devices / 1 minute = 100 devices / x minutes

where x is the number of minutes it will take for 100 machines to produce 100 devices.

We can solve for x by cross-multiplying:

20 * x = 100 * 1

x = 5

Therefore, it will take 100 machines 5 minutes to produce 100 devices.

That is correct! Can you now write out a general formula for the time T it would take for M machines to produce D devices at a production rate of P devices per minute?

Yes, I can write a general formula for the time T it would take for M machines to produce D devices at a production rate of P devices per minute.

The formula is:

T = D / (M * P)

where:

T is the time it will take for M machines to produce D devices
M is the number of machines
D is the number of devices to be produced
P is the production rate per machine per minute (in devices per minute)

We can derive this formula using the following proportion:

M machines / P devices per minute = D devices / T minutes

Cross-multiplying gives:

M * P * T = D

Solving for T, we get:

T = D / (M * P)

Therefore, if we know the number of machines, the production rate per machine, and the number of devices to be produced, we can use this formula to calculate the time it will take to produce them.

Yes, very good, your formula is correct!
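The formula is easy to sanity-check (my snippet, not part of the session):

```python
from fractions import Fraction

def production_time(machines, devices, rate_per_machine):
    # T = D / (M * P): time for M machines to make D devices at
    # P devices per minute per machine.
    return devices / (machines * rate_per_machine)

rate = Fraction(1, 5)  # each machine makes one device per five minutes
assert production_time(5, 5, rate) == 5      # the original puzzle
assert production_time(100, 100, rate) == 5  # scales to 100 machines
print(production_time(100, 100, rate))       # 5 minutes
```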

Was there anything in your previous session folder it could be using?

No, I just double-checked, it’s a brand new chat. Funny thing is I’m pretty sure I tried it on that before (in a different chat session) with new made-up numbers, and it got it right the first time. Very strange.