The next page in the book of AI evolution is here, powered by GPT 3.5, and I am very, nay, extremely impressed

The arguments I’ve seen you make:

  1. Computers are just ‘adders’ at their base, and consciousness is not possible if you are starting with ‘adders’.

  2. There is no equivalent to this in the human brain.

  3. LLMs like ChatGPT do not learn once they are trained, and I guess without learning you can’t be conscious.

My responses to those have gone unaddressed.

  1. Why are adders the fundamental unit you are looking at? Adders are made of gates (I can build a full adder with 9 NAND gates). And the gates themselves are made up of transistors, which in turn are made up of PNP or NPN junctions, which in turn are made up of doped pieces of silicon, and the fundamental operating principle rests in quantum mechanics. On the other side, I can turn multiple adders into a multiplier, and turn those into arithmetic processing units. So why did you decide that an adder was the fundamental unit?

  2. What is it about adders that makes them incompatible with intelligence? I can describe a neuron at a very basic level as an information transmitter, or even as an adder: it adds up the potentials on the synapses it is connected to and fires if the sum goes over a threshold. This in turn causes other neurons to ‘calculate’ their potentials and fire or not accordingly. That is not so different from what adders and multipliers do in a computer (see the sketch after this list).

  3. You ignore all the arguments that show that these LLMs ARE capable of learning. One of the problems with Bing Chat at first was that they gave it too much context memory and allowed too many questions to be asked within a single context, which allowed Bing Chat to veer far away from its standard training into unexpected areas. In short, it was learning and changing its programming. The DAN hack also allowed the LLMs to escape their guardrails.
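To make points 1 and 2 concrete, here is a minimal sketch in Python. The adder is the textbook 9-NAND-gate full adder; the neuron’s weights and threshold are made-up illustrative values, not anything measured from a real brain.

```python
# Minimal sketch: a full adder built only from NAND gates, next to a toy
# "threshold neuron". Both just combine inputs into outputs; the neuron's
# weights and threshold below are illustrative assumptions, not biology.

def nand(a, b):
    return 0 if (a and b) else 1

def full_adder(a, b, carry_in):
    """Classic 9-NAND-gate full adder: returns (sum, carry_out)."""
    n1 = nand(a, b)
    n2 = nand(a, n1)
    n3 = nand(b, n1)
    n4 = nand(n2, n3)          # n4 = a XOR b
    n5 = nand(n4, carry_in)
    n6 = nand(n4, n5)
    n7 = nand(carry_in, n5)
    s  = nand(n6, n7)          # sum = (a XOR b) XOR carry_in
    c  = nand(n1, n5)          # carry_out = (a AND b) OR ((a XOR b) AND carry_in)
    return s, c

def threshold_neuron(inputs, weights, threshold):
    """Toy neuron: sum the weighted 'synaptic' inputs, fire if over threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) > threshold else 0

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            for cin in (0, 1):
                print(a, b, cin, "->", full_adder(a, b, cin))
    # e.g. a neuron that fires only when at least two of its three inputs are active
    print(threshold_neuron([1, 1, 0], [1.0, 1.0, 1.0], threshold=1.5))
```

Both are just units that turn inputs into outputs; the interesting part is what happens when you wire enormous numbers of them together.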

For that matter, the way you ‘program’ an LLM is to feed it prompts. Some are what users type in, but others are prompts that are automatically fed to it before each question to tell it how to answer. For example, the ‘alignment’ instructions they give it, as far as I can tell, are delivered as prompts, not by modifying any code. In other words, it’s learning. Fine-tuning is another way these LLMs learn, and that IS permanent: fine-tuning adjusts the weights of the pre-trained neural net (or adds small extra layers on top of it), modifying its behaviour. This is done to the trained model itself, not by recompiling any code.
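To illustrate the ‘programming by prompts’ point, here is a hedged sketch: the `generate()` function is just a placeholder standing in for whatever model you would actually call, and the hidden instruction text is invented for the example. The point is only that steering happens by prepending text before every question, not by editing the model’s code.

```python
# Illustrative sketch only: the model call is a placeholder and the hidden
# "alignment" instructions are invented. Steering happens by prepending text,
# not by changing the model's code.

HIDDEN_SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests for dangerous instructions "
    "and answer politely."  # hypothetical alignment-style instructions
)

def generate(full_prompt):
    # Placeholder for an actual LLM call; returns a canned string here.
    return f"[model output for a prompt of {len(full_prompt)} characters]"

def answer(user_question, history):
    # What the model actually sees = hidden instructions + prior turns + new question.
    turns = "\n".join(history + [f"User: {user_question}", "Assistant:"])
    full_prompt = HIDDEN_SYSTEM_PROMPT + "\n\n" + turns
    reply = generate(full_prompt)
    history.append(f"User: {user_question}")
    history.append(f"Assistant: {reply}")
    return reply

if __name__ == "__main__":
    history = []
    print(answer("How many NAND gates are in a full adder?", history))
```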

Outside of fine-tuning, there’s context memory. A good analogy might be to think of an LLM’s context as short-term memory: it can learn within that context, but then it gets erased. Imagine a human with a condition that prevents them from moving information from short-term to long-term memory. They can learn, but they forget what they’ve learned after their short-term memory is overwritten. Would you say that those people are conscious?
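A toy version of that short-term-memory analogy (the word budget here is a made-up stand-in for a real token limit):

```python
from collections import deque

# Toy rolling context: the budget and "word" accounting are invented for
# illustration; real context windows count model tokens, not words.
MAX_CONTEXT_WORDS = 50

def trim_context(turns):
    """Drop the oldest turns until the conversation fits the budget."""
    context = deque(turns)
    while sum(len(t.split()) for t in context) > MAX_CONTEXT_WORDS:
        context.popleft()          # earliest exchanges are "forgotten"
    return list(context)

if __name__ == "__main__":
    convo = [f"Turn {i}: " + "word " * 10 for i in range(12)]
    kept = trim_context(convo)
    print(f"{len(convo)} turns in, {len(kept)} turns still 'remembered'")
```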

For that matter, we have yet to come up with a definition of consciousness or a theory for how and when it forms, so I don’t get how you can be certain about any of it.

My guess is that you have a strong internal bias against believing that computers can be conscious or sentient, and you are just looking for arguments to confirm that. This is common in religious and political debates, and this seems to veer towards them. It can be frustrating to argue with people who throw out arguments after they already have a conclusion, because defeating those arguments just causes them to ignore the refutation or go searching for another argument rather than questioning their own priors.

IBM still looms large I think.

GPT-4 has co-written a book with Reid Hoffman, one of the founders of OpenAI. There are lots of examples of GPT-4’s abilities in the book.

It appears that GPT-4 is much better at jokes, for one thing.

For example, here’s ChatGPT telling a lightbulb joke:

That joke is just ripped off from its training data.

The same joke for GPT-4:

That joke is much better, and unique to the context of an inspector. Hoffman then asks it to respond in the style of Seinfeld, Wittgenstein and others, and gets very interesting results.

Looks like a good book. It’s free.

I tried that line on ChatGPT, and it actually gave me a context-considered answer, though it could use a bit of help in the laugh department:

How many restaurant inspectors does it take to screw in a light bulb?
Just one, but it’s going to take a thorough inspection of the light bulb and its installation process to make sure it’s up to code before any screwing can take place!

I asked again, and it answered:

Well, it depends on how many violations they find! If they find enough violations, they might need a whole team of inspectors just to screw in the light bulb and make sure it doesn’t violate any codes!

“in the style of charles bukowski”:

Who gives a damn? Just let the damn thing flicker and let us eat our greasy burgers in peace. As long as the rats stay in the back kitchen and the beer is cold, I’m happy. Hell, I’d eat in the dark if I had to. Just give me my food and leave me alone.

“in the style of rodney dangerfield”:

I tell ya, no respect! You got these guys coming in here, acting like they own the place, just 'cause they got a badge and a flashlight. They’ll spend all day checking every nook and cranny, but can they screw in a light bulb? Forget about it! And when they do finally get around to it, they’ll probably find some way to mess it up and blame it on the chef. “I’m telling you, this light bulb is raw!”

ChatGPT has gotten better at humor, poetry, etc. At the beginning it was pretty bad, but it has improved thanks to the reinforcement learning from human feedback gathered during the beta.

It has. I’ve noted a marked increase in its performance in those respects from Dec to Feb.

Perhaps I failed to make my case, so I’ll let GPT give it a try:

"The term “neural net” is used to describe the architecture of GPT, which is based on the structure of the human brain. However, this term can be misleading, as it suggests that GPT has some kind of consciousness or awareness that is similar to the human brain. In reality, GPT is simply a complex mathematical model that has been trained on massive amounts of text data, allowing it to generate text that is similar to human language.

It is important to recognize that the computer running GPT is essentially just an adding machine, processing and manipulating numbers according to the rules of the algorithm. While GPT is undoubtedly an impressive achievement in the field of artificial intelligence, it is not a true thinking machine in the sense that it does not have subjective experience, consciousness, or emotions."

It’s an adding machine

Why are you so bound up in this argument? You’re just a threshold machine, after all. Everything you do is only based on thresholding.

Or, if it’s possible that a thresholding machine can “think” and whatever other mysticism you’re claiming that humans have, then why can’t adding machines do those things, too?

I think that’s circular reasoning by ChatGPT (i.e., it’s not a truly conscious, perceptive thinking machine because it’s neither conscious nor self-aware). I believe we’re all in agreement that ChatGPT is not currently conscious, but the question is whether it (or a future, more advanced AI) has the potential for emergent consciousness as it becomes more evolved. No one can answer that question with authority, because no one knows exactly how biological consciousness emerged. All we can do is make educated guesses.

My guess is that consciousness will emerge from AI in the future for a few reasons.

First, there is nothing privileged about organic material. I don’t believe there is any process that a biological brain can do, that an artificial brain cannot.

Second, biological consciousness is certainly complicated, but not so complicated as to be unattainable by artificial means. It doesn’t have to be as evolved as a human brain; much less advanced lifeforms are conscious. Honey bees are believed to possess basic consciousness (not to be confused with self-awareness).

Third, strip away all the parts of a basic conscious brain that are non-essential for AI (e.g. motor neurons, the autonomic nervous system), and it is even less complicated to reproduce. Somewhere within that stripped-down, basic brain, consciousness emerged. We just don’t know how or where. But perhaps we don’t need to know: when all the criteria are met, it will emerge on its own.

Self-awareness is more complicated and advanced. But once consciousness is achieved, I believe self-awareness will emerge in due time.

That’s an interesting point. We know from published material that the brain produces electrical waveforms that flow over its surface. These waves have continuity and appear to be associated with our activities. This electrical activity can be sampled, recorded, and studied.

There is no equivalent to this in a computer. There are no continuous waveforms flowing through neural nets; there are no physical neural nets in the computer. An instantaneous sample would reveal a number being processed, exactly as though you had sampled the position of a cog in a gear train. It’s just grinding out a list of instructions.

There is something privileged about biological systems. Dragonflies teach themselves to fly. I have watched them do it. The four wings are each independently controlled by the brain. At first they just flutter the wings as they glide. Then they coordinate the wings to provide forward motion. Within minutes they are able to fly from roost to roost. Soon after that they are airborne searching for a meal.

Dragonflies establish and defend territories, which indicates they are self-aware. I have watched them defend territories from the balcony of a hotel 15 stories up, so each was able to define a virtual box without any landmarks for reference. All that with a brain the size of a shirt collar button.

While the processor in a drone can do most of this, it doesn’t do it on its own. It is all preprogrammed by a human brain (another privileged system).

So why would anything emerge from the cogs of a machine? What fitness function is driving it to evolve? What commercial interest would fund it?

Who says “nay”?

This is a religious argument, so with that, I’m out.

Huh? Even ENIAC had electrical waveforms flowing through it. And besides, even if it didn’t, why are those electrical waveforms necessary?

Yesterday, I finally got to the top of BingGPT’s waitlist and got to try it. A couple of days ago, when I read in this thread that there was no longer a waitlist, I tried Bing, but that wasn’t yet true, at least for me.

Well anyway, I’m allowed to use it now, and wow, was I disappointed in it. Its UI sucks compared to ChatGPT’s. I have to use Bing in M$'s Edge browser: it’s impossible to scroll the window in Firefox, and apparently this is a well-known problem with Bing. I guess it’s Microsoft still working their old “embrace and extend” strategy to lock users into their products.

While using it in Edge, I was super annoyed at how scrolling abruptly caused it to switch the internal page tabs on me whenever I scrolled too far at the bottom. I’m used to overscrolling past the point I want to read to make sure it’s all in the window. Ugh.

Once I got over that, I immediately missed OpenAI’s branching chat feature, with the ability to back up and edit earlier prompts to steer the conversation in a different, more refined direction. I felt this lack acutely the very first time BingGPT hallucinated: that ruined the entire conversation, with no chance to go back and fix it. Editing earlier prompts is a major feature for those of us already used to refining things in the middle of the current context to get the desired results.

Once I got used to dealing with that, I found its guardrails overwhelmingly restrictive. I was eager to see how well it did at extracting info from web searches and telling me the bits I actually wanted to know. Except, over half the time, it suddenly decided whatever I was doing was wrong, stopped, cleared the latest output, and wanted to change the subject. In most cases I know there was nothing naughty on any of the webpages where I knew the answers were to be found, but who knows what other sites might have come up in its search. Infuriating.

Summary: BingGPT is almost completely useless for experienced users like me. Back to ChatGPT. My experience with BingGPT convinced me to drop the $20 to find out how GPT-4 works in ChatGPT; more about that in a later post.

I sometimes wonder whatever happened with these, which do have waves. Their imprecision isn’t well suited to things like economic and engineering calculations where you want very precise numbers, but they seem like they would be a good fit for AI like Stable Diffusion and ChatGPT, which are looser about precision. (Evidenced by the models being well suited to 8-bit floating point rather than 64-bit.)

ChatGPT hinted that there may be ethical concerns with sending androids equipped with artificial intelligence language models into space, so I asked what those ethical concerns are. It seemed very concerned about the exploitation of androids for jobs that humans should be doing.


Prompt: What are the ethical concerns of sending an android to space in place of a human?

One of the primary ethical concerns of sending an android to space in place of a human is the question of whether it is morally acceptable to send a machine into an environment that is known to be dangerous and potentially life-threatening. Some people may argue that it is not ethical to expose a machine to the risks and uncertainties of space exploration, especially if there is a risk that the machine may malfunction or become damaged beyond repair.

Another ethical concern is the potential for the android to malfunction or pose a risk to other spacecraft or equipment in space. If an android were to malfunction and collide with other spacecraft or equipment, it could potentially cause significant damage and put other missions at risk. This could lead to significant financial losses, as well as potential safety risks for human astronauts and other personnel involved in space exploration.

There may also be concerns around the idea of “robot exploitation”, or the use of androids and other machines to perform tasks that would otherwise be done by humans. Some people may argue that sending an android to space in place of a human is a form of exploitation, as it could be seen as using a machine to perform a task that should be done by a human astronaut.

Overall, the ethics of sending an android to space in place of a human are complex and depend on a variety of factors, including the potential risks and benefits of such a mission, the technological capabilities of the android, and the views and values of society as a whole.

Regarding the complexity of an organic neuron compared to an artificial one, here is an interesting article about research indicating that, to completely model a squishy brain neuron, you need up to 1,000 artificial neurons in a stack 5-8 layers deep.

As an aside, I don’t think there’s any problem implementing a neural network like that on a robust FPGA, although it won’t be tiny, that’s for sure.

There is also no clear indication that absolutely all of that complexity is needed for it to function as a human brain neuron.

Finally, it is also entirely possible that simply having more, simpler neurons makes up for each neuron being less complex. It’s an open research question.
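For a sense of scale, here is a shape-only sketch of the kind of stack that research describes: roughly a thousand simple units, a handful of layers deep, standing in for one biological neuron’s input-output mapping. The layer sizes and random weights are my own assumptions, not the published model:

```python
import numpy as np

# Shape-only illustration of "~1,000 artificial neurons, 5-8 layers deep"
# standing in for one biological neuron. Layer sizes and weights are
# arbitrary assumptions; the real published model is more elaborate.
rng = np.random.default_rng(0)

LAYER_SIZES = [128, 256, 256, 256, 128, 64, 1]   # 1,089 units over 7 layers
N_SYNAPTIC_INPUTS = 1000                          # inputs to the surrogate "neuron"

def build_stack(n_inputs, layer_sizes):
    sizes = [n_inputs] + layer_sizes
    return [(rng.standard_normal((m, n)) * 0.05, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(x, layers):
    for i, (w, b) in enumerate(layers):
        x = x @ w + b
        if i < len(layers) - 1:
            x = np.maximum(x, 0.0)               # ReLU on the hidden layers
    return x

if __name__ == "__main__":
    layers = build_stack(N_SYNAPTIC_INPUTS, LAYER_SIZES)
    synaptic_input = rng.standard_normal(N_SYNAPTIC_INPUTS)
    print("surrogate 'neuron' output:", forward(synaptic_input, layers))
    print("total artificial units:", sum(LAYER_SIZES))
```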

Again, I’m very confused about what point you’re trying to make. ENIAC was digital, just like modern computers. It was operating under exactly the same “just an adding machine” restriction as modern computers. Both are as precise as you want them to be, meaning as many digits (though ENIAC, of course, had much less memory and much slower processing, so precision was more expensive). And both ENIAC and modern computers have electrical waveforms flowing through them.

The one I linked to is inherently not precise, or more specifically, inherently error-prone by nature. From the article:

Traditional digital computers depend on millions of transistors opening and closing with near perfection, making an error less than once per 1 trillion times. It is impressive that our computers are so accurate—but that accuracy is a house of cards. A single transistor accidentally flipping can crash a computer or shift a decimal point in your bank account. Engineers ensure that the millions of transistors on a chip behave reliably by slamming them with high voltages—essentially, pumping up the difference between a 1 and a 0 so that random variations in voltage are less likely to make one look like the other. That is a big reason why computers are such power hogs.

Radically improving that efficiency, Boahen says, will involve trade-offs that would horrify a chip designer. Forget about infinitesimal error rates like one in a trillion; the transistors in Neurogrid will crackle with noise, misfiring at rates as high as 1 in 10. “Nobody knows how we’re going to compute with that,” Boahen admits. “The only thing that computes with this kind of crap is the brain.”

I’m saying that precise numbers matter a lot when you are building a bridge, but it is perfectly fine if pixel number 21,783 in a rendered image has 217 for the blue value instead of 218.
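Here is a toy way to see that; the pixel value and the bridge span are invented numbers, and only the relative size of the 8-bit rounding error matters:

```python
import numpy as np

# Toy illustration: rounding a value to 256 levels barely matters for a pixel
# channel, but the same coarseness would be a real error at bridge scale.

def quantize_to_8bit(x, lo, hi):
    """Round values in [lo, hi] onto 256 evenly spaced levels."""
    levels = np.round((np.asarray(x, dtype=float) - lo) / (hi - lo) * 255)
    return levels / 255 * (hi - lo) + lo

if __name__ == "__main__":
    blue = 0.8533                                  # a pixel's blue channel as a float
    err_pixel = abs(quantize_to_8bit(blue, 0.0, 1.0) - blue)
    print("pixel error:", err_pixel)               # under one 8-bit step, invisible to the eye

    span_m = 1021.37                               # hypothetical bridge span in metres
    err_span = abs(quantize_to_8bit(span_m, 0.0, 2000.0) - span_m)
    print("bridge error (m):", err_span)           # well over a metre off at 8 bits
```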

And these chips have brain-ish waves:

In its first experiment, Boahen’s team coaxed the neurons on a single chip to organize themselves into the familiar “gamma rhythm” that scientists pick up with EEG electrodes on a person’s scalp. Like members of a 65,536-member chorus, each silicon neuron adjusted its spiking rate to match the 20- to 80-wave-per-second gamma tempo. The researchers recently mounted 16 Neurogrid chips on a single board to emulate 1 million neurons, connected by a tangle of 6 billion synapses. By 2011 they hope to create a second-generation Neurogrid containing 64 million silicon neurons, about equal to the total brain of a mouse.

Which may be tied to the creation of consciousness in organic brains.
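For intuition about what spiking at a “gamma tempo” means, here is a minimal leaky integrate-and-fire sketch. The constants are made-up values tuned so the toy cell fires at roughly 40 spikes per second, inside the 20- to 80-wave-per-second band the article mentions; this is the textbook model, not the actual Neurogrid circuit.

```python
# Toy leaky integrate-and-fire neuron. All constants are illustrative
# assumptions chosen so the firing rate lands in the 20-80 Hz "gamma" band.
DT = 1e-4          # simulation time step (s)
TAU = 0.02         # membrane time constant (s)
V_THRESH = 1.0     # firing threshold (arbitrary units)
V_RESET = 0.0
DRIVE = 1.4        # constant input current (arbitrary units)

def simulate(duration_s=1.0):
    v, spikes = 0.0, 0
    for _ in range(int(duration_s / DT)):
        v += DT / TAU * (DRIVE - v)      # leaky integration toward the drive
        if v >= V_THRESH:                # threshold crossing = one spike
            spikes += 1
            v = V_RESET
    return spikes

if __name__ == "__main__":
    print("spikes per second:", simulate(1.0))   # lands near 40 Hz with these constants
```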

Non-tiny chips are doable: