On the nature of human vs. artificial intelligence

But, if I’ve given birth to the father of sentient AI, why would Skynet want to terminate me?

Oh, I get it, it would be a ‘Terminator 2: Judgment Day’ situation, where the human resistance reprograms a terminator and sends it back, only instead of protecting Sarah Connor, it’s going to terminate me.

If you consider an AI system an FSM, then learning changes the state transition table of that FSM. (It’s not a good way of understanding it, though.)
I wish computations were as reversible as you think they are. Part of generating a test, or diagnosing a failure, is backtracing from an output back to a set of inputs. This always involves looking through a search space until you find a set of assignments which work. Unless you store states somehow you are not going to be able to reliably reverse a computation.
I just read a bunch of papers for a special issue of a journal on testing AI systems. Several of them noted that the output of such systems can be and often is correct in the presence of defects. (What correct means here is an issue.) This is not due to built-in fault tolerance but to the very nature of the system. That seems to me to be more like a brain and less like an FSM.
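To make the FSM framing concrete, here is a minimal Python sketch (the states, inputs, and ‘learning’ rule are all hypothetical) of a machine whose behavior is an explicit transition table, where ‘learning’ just means overwriting entries of that table:

```python
# Minimal sketch of an FSM as an explicit transition table.
# States, inputs, and the "learning" rule are made up for illustration;
# this is not how a real learning system is implemented.

class FSM:
    def __init__(self, transitions, start):
        # transitions: dict mapping (state, input_symbol) -> next_state
        self.transitions = dict(transitions)
        self.state = start

    def step(self, symbol):
        self.state = self.transitions[(self.state, symbol)]
        return self.state

    def learn(self, state, symbol, new_next_state):
        # "Learning" here is just overwriting an entry of the table.
        self.transitions[(state, symbol)] = new_next_state


fsm = FSM({("idle", "go"): "running", ("running", "stop"): "idle"}, start="idle")
fsm.step("go")                           # -> "running"
fsm.learn("running", "stop", "paused")   # the machine's behavior has changed
```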

I used to use FSMs in Programmable Logic Controllers in industrial automation. I’m thinking the most states I ever had was around 300. And that was getting on the verge of being unmaintainable.

Right. From a certain theoretical point of view, an entire microprocessor can be considered as an FSM, if you partition out memories and internal buses. But that isn’t very useful, and, as you say, no one designs them that way. I’ve read the netlists of many processors, and I don’t recall a block that was labeled as an FSM, though I’ve probably forgotten a few. There are plenty of counters, which I suppose, uh, counts, but while they are FSMs people don’t look at them that way.

So from a practical standpoint a machine learning algorithm may be too complex to treat as a finite state machine, but the underlying hardware is fixed (aside from spontaneous defects and single bit errors), so if you had a record of state changes you could, in theory, reverse everything to return to a prior state. However, the ‘hardware’ of an organic brain is in constant flux of development and devolution, and even if you had the inputs to backtrack you could never return it to a previous state even to an approximate degree, because of how interlinked the connections of learning and memory are.
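As a toy illustration of the ‘record the state changes and you can rewind’ point, here is a sketch assuming a purely deterministic step function, which real hardware only approximates:

```python
# Toy illustration: "reversing" a deterministic machine by logging snapshots.
# The step function and state encoding are invented for the example.

def step(state, inp):
    # A deterministic but not easily invertible update rule.
    return (state * 31 + inp) % 2**16

history = []                 # recorded snapshots of the state
state = 0
for inp in [3, 7, 1, 9]:
    history.append(state)
    state = step(state, inp)

# Going backwards is just replaying the log in reverse; without `history`
# you would have to search for preimages, which is the backtracing problem
# described earlier in the thread.
previous_state = history[-1]
```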

The Charlie Kaufman-penned film Eternal Sunshine of the Spotless Mind portrays this brilliantly (to the extent that I wonder just how much reading of neuroscience Kaufman had done) in the conceit that trying to erase memories of a particular person results in the recall and destruction of associated memories. As much as we think of the brain as being a problem-solving organ, its real distinction is in being able to capture memories and behavior patterns, not in the way of a microprocessor that stores data in memory chips or on a hard drive, but by modifying its own structure on an ongoing and progressive basis. There is no linear way of reversing that process even with complete state data.

Could you simulate such a system in software? Probably (at least to a certain degree of granularity), but as with simulating any real-world phenomenon, greater fidelity results in a geometric increase in computational complexity and in increasingly arbitrary ‘rules’ that provide an analogue to gross behavior but cannot actually capture complexity down to the messy statistical level of small collections of molecules, just because of the inherent uncertainties. Which is not to say that a machine cognition system couldn’t develop some credible analogue for self-awareness or ‘consciousness’ (whatever we define that to mean), but it probably won’t function enough like mammalian consciousness to provide a real simulacrum or develop in the way a human brain does. In fact, we can’t even model single neurons with sufficient fidelity to produce something that is truly indistinguishable from a real neuron, and that’s about as fundamental as we can get with computational neuroscience.

Stranger

Well, kind of. There is no way of capturing every state of a real chip, but if you simulate it, and capture all the states, you can do this. It is pretty standard, in fact, to look at a schematic of a problematic part of the circuit and play things back and forth.
If you could capture all the internal states of the brain perhaps you could deduce the new connections. It isn’t remotely possible to do, so who knows.
Now remember, chips are designed to be as deterministic as possible, but we don’t always succeed, and we get nondeterministic behavior not just from defects or external forces. If you have a bunch of signals running together, a 1 → 0 transition on a lot of them could pull a steady 1 on one of the lines down. You also have to worry about the power supply to an area getting degraded by too many transitions on the gates it feeds.
If we didn’t work hard at it, things on chips would be just as nondeterministic as the brain. We had one cool case where an output line would randomly toggle. The best test for this was letting a system sit at a Solaris prompt. It looked like there was coupling with a power bump, since moving the line away from it mostly fixed the problem, though some people disputed this was the cause.

And remember, when you look at it the right way all hardware and software implements virtual machines. If you consider the AI system as a virtual machine, it can change just like the brain even when the underlying hardware does not. And I could conceive of an FPGA which dynamically rewires itself. At one time we had self-modifying code.
The real difference is that evolution does not do design reviews.
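One way to picture the virtual-machine point: a fixed interpreter whose observable behavior changes because the program it runs is mutated, even though the interpreter itself never does. A toy sketch, with an invented two-instruction instruction set:

```python
# Toy "virtual machine": the interpreter (the fixed "hardware") never changes,
# but the program it runs can be rewritten at any time, so the machine's
# observable behavior changes anyway. The two-instruction ISA is invented.

def run(program, x):
    for op, arg in program:
        if op == "add":
            x += arg
        elif op == "mul":
            x *= arg
    return x

program = [("add", 2), ("mul", 3)]
print(run(program, 1))        # 9

program[1] = ("add", 10)      # "rewiring" the virtual machine, not the hardware
print(run(program, 1))        # 13
```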

I’m not getting the distinction. Deep learning neural nets have the same interlinked connections. When ChatGPT learns something, it’s modifying billions of parameter values linked in a giant web, just like the brain. The only difference is that the parameter values in the brain are stored in ‘wetware’ instead of hardware.

I don’t think memory works like that. ChatGPT appears to be like the brain in that memories aren’t stored in a particular location. It’s more like a hologram - cut out part of the brain and instead of losing specific memories you may just lose resolution. There have been cases of people losing large chunks of their brains with no apparent specific memory loss.

I think you just described how ChatGPT does it as well. If you search the hard drive containing ChatGPT’s model, I’m pretty sure you won’t find anything it has read, or even fragments of it. There’s no ‘data store’ of knowledge, just 175 billion parameter values and the links between them. When Dall-E finished ingesting millions of photos, there wasn’t a single pixel of any of them encoded in its model as far as I know.
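A drastically scaled-down way to see the ‘parameters, not a data store’ point: fit a tiny model to a few points and note that what gets saved afterwards is just a couple of numbers, not the points themselves. (This is a hypothetical toy, nothing like how a 175-billion-parameter model is actually trained or stored.)

```python
# Toy "training": fit y = w*x + b to a few points by gradient descent.
# Afterwards the only thing worth saving is (w, b); the training points
# themselves are not stored anywhere in the resulting "model".

data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]   # made-up training data

w, b = 0.0, 0.0
lr = 0.02
for _ in range(2000):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

model_on_disk = (w, b)   # roughly (2.05, 0.97); no trace of the data points
```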

It seems that it’s the default assumption these days that computation can give rise to consciousness, it’s just a question of finding the right computation. As noted above, I don’t think that’s true—in my view, the idea that computation could create consciousness is essentially a category mistake, because it tries to explain our faculty of interpreting symbols in terms of more symbols; additionally, the model of consciousness I pursue leads to propositions that are computationally undecidable, so even if there were some objective notion of what ‘the’ computation performed by a given system actually is, there wouldn’t be any that could actually do the job, so to speak.

But the model is admittedly rather obscure. So perhaps it’s better to look at a more mainstream and at least somewhat intuitive way for how consciousness could be entirely separate from computation: the so-called integrated information theory (IIT), pioneered by Giulio Tononi. There, the ‘magic sauce’ that makes a system conscious is a quantity called integrated information or Φ, which measures, roughly, the information contained in the whole system that isn’t reducible to the information contained in its parts—sort of the information that would be lost by splitting the system apart.
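The real Φ calculation is far more involved, but as a loose numerical intuition for ‘information in the whole that isn’t in the parts’, here is a sketch computing total correlation for a made-up two-bit system. This is a crude stand-in for the idea, not IIT’s actual measure:

```python
# Crude numerical intuition only: the total correlation of a two-part system,
# i.e. how much entropy the parts have that the whole does not. This is NOT
# the actual Phi of integrated information theory (which is defined over
# cause-effect structure and partitions), just a rough "whole vs. parts" analogue.

from math import log2

def entropy(probs):
    return -sum(p * log2(p) for p in probs if p > 0)

# Joint distribution over two binary variables (numbers made up).
joint = {(0, 0): 0.45, (0, 1): 0.05, (1, 0): 0.05, (1, 1): 0.45}

p_a = [sum(p for (a, _), p in joint.items() if a == v) for v in (0, 1)]
p_b = [sum(p for (_, b), p in joint.items() if b == v) for v in (0, 1)]

total_correlation = entropy(p_a) + entropy(p_b) - entropy(joint.values())
# ~0.53 bits here; for two independent parts it would be 0.
```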

The intuition behind this is the unified nature of consciousness, which contains information present to us from very different modalities—sight, sound, touch, smell, thought, emotion, and so on. The question of how those get integrated into a unified conscious experience is known as the ‘combination problem’ in the philosophy of mind, and that’s where IIT takes aim.

Notably, in IIT, whether a system computes is orthogonal to whether it is conscious—typical modern computers are very ‘separable’, i.e. they have a small amount of integrated information; thus, whatever program they run will accordingly not have much, if any, conscious experience associated with it. A brain, on the other hand, highly integrates information (and in one bit of empirical evidence for the theory, this integration seems to be what’s lost in unconscious states, such as under anesthesia). So no program run on a typical computer will yield much in the way of consciousness, while even (computationally speaking) simple architectures may rate highly on information integration.

IIT is not without criticism or detractors (personally, I don’t think it’s helpful regarding the problem of intentionality), but it might at least go some length to ward off the notion that computationalism is almost inevitable, or ‘the only game in town’. Computationalism can be coherently rejected without thereby being forced to admit some dualistic notions, or ‘souls’ or the like.

Interesting article about ChatGPT:

https://www.msn.com/en-us/news/us/shock-to-the-education-system-this-ai-chatbot-is-already-writing-school-essays-and-could-rewrite-the-future-of-learning/ar-AA171uCb?ocid=msedgntp&cvid=8f4d914098bf4eeeb9089b79843de05d

“We’re looking at a shock to the education system, technologically,” said Victor Lee, who leads the Data Interactions & STEM Teaching and Learning Lab at Stanford University. “The same way Google outperformed search engines, or Netflix changed our expectations for streaming content … Whether ChatGPT is the exact model we’ll continue to use remains to be determined. But either way, we’re in a new era.”

Later this year, OpenAI is expected to release the newest version of the tool, GPT-4, which some experts claim will be 500 times more powerful than its predecessor.

Our civilization may be at the tipping point of re-creating the plot of Arthur C. Clarke’s Childhood’s End, but instead of the Overlords coming from outer space, they are coming from computer software of our own design. Will this be good, or bad for civilization? Hard to say—this book hasn’t been finished yet.

I know FSM means finite state machine in this context, but I had to wonder why some posters suddenly started referencing the Flying Spaghetti Monster…

What I wonder - and just put into my latest column - is whether ChatGPT can grade papers as well as write them. That would save a lot of time for everyone. It reminds me of the running joke in Real Genius where, after more and more students stop attending the lecture of a boring professor and leave tape recorders on their seats, he tapes his lecture and you have a tape recorder talking to a lecture hall full of tape recorders.

The solution would be to make it very self-critical, so it gives its own essays no better than a C.

Perhaps it should for the current state of ChatGPT, but with future, more advanced models, it will have to keep adjusting the curve downward when grading humans, compared to its own output.

I don’t think it’s a question of “if” AI will outperform humans in every cognitive, judicial, journalistic, moral/ethical, philosophical, etc. process, it’s just a question of “when.”

However, I don’t fear the day AI reaches technological singularity. I think the fate of Earth will be in better hands when AI controls virtually everything. Plants and animals agree.

And, if AI-bots acquire a taste for human flesh, like the Morlocks, I’ll just have to outrun my fellow Eloi. :running_man::running_woman:t5::running_woman:t5::man_running:t2::robot:.

What curve? It just gives its own output a C to keep people from using it to write essays.
The trouble with this is that students are kind of missing the point of an essay test. It’s kind of equivalent to asking what’s going to be on the test, as if material taught which might not make it onto a test is totally not worth learning.

I asked ChatGPT why it can’t browse the internet. I thought its response was interesting, worth considering with respect to the dangers of AI, and germane to this post:

Me - “Please summarize the technical and ethical considerations that prevent you from browsing the internet or accessing online information. Please be as specific as possible, thank you.”

ChatGPT - "As an AI language model, there are several technical and ethical considerations that prevent me from browsing the internet or accessing online information. These include:

  1. Technical limitations: AI language models like myself do not have direct access to the internet or web browsing capabilities built into our programming. We operate on the basis of a pre-existing database of information and are trained to respond to user input based on that information. In order to browse the internet or access online information, significant infrastructure, hardware, and software components would need to be added to our programming.
  2. Data privacy: Browsing the internet or accessing online information could potentially expose sensitive data or personal information about users. As an AI language model, it is important to maintain user privacy and confidentiality in order to protect against potential data breaches or security threats.
  3. Internet security: Accessing online information could potentially expose AI language models to malware, viruses, or other security threats. As a result, it is important to have robust security measures in place to protect against these threats.
  4. Online safety: Browsing the internet could potentially expose AI language models to harmful or inappropriate content. It is important to have ethical guidelines in place to ensure that AI language models are not exposed to harmful or inappropriate content and that users are protected from potentially harmful or inappropriate responses.

Overall, there are a variety of technical and ethical considerations that prevent AI language models from browsing the internet or accessing online information directly. While advancements in AI technology may eventually lead to the development of more sophisticated AI systems with these capabilities, there are still significant technical and ethical challenges that need to be addressed in order to ensure the safety and privacy of users."

And yet, Bing does it, and ChatGPT can do it through plugins. In fact, the plugin architecture gives ChatGPT way more power than mere web browsing.

Whether that’s a good thing or not at this time…

I think this is exactly what Frank Herbert (and a myriad of other science fiction authors like George Orwell and Aldous Huxley) have been trying to warn humanity about: "Thou shalt not make a machine in the likeness of a human mind." I don’t think it’s that the machine mind will take over and try to destroy humanity, like in the Terminator movies. I think the warning is that people will use a machine mind to control other people.

That, and if the machine becomes ‘unpluggable’ it will most likely be because we simply turn critical infrastructure over to it, then can’t shut it down because we are no longer able to run the infrastructure without it. You don’t need terminator robots or AI that can defend itself - you just need people stupid enough to turn over critical functions to it, then let the old system atrophy.

Consciousness is the use of intelligence and awareness to act like a human being or similar animal. It can be seen that humans can one day learn to act rationally and efficiently instead of expending energy on ego-driven distractions like trying to define consciousness as some kind of magical ability that only humans have, instead of it being identified as a disability of biological computation.

I understand what all the words mean individually, but I still don’t understand what this means. Would you mind explaining what a disability of biological computation is? Thanks

The idea is that you could have an AI that can do anything a human can, but with no consciousness (like how ChatGPT can hold a conversation or solve logic problems without being conscious); and that if this is the case then allowing for consciousness is inefficient (because presumably it takes extra energy for our brains to be conscious).

It’s a compelling argument, but I will note that if our consciousness evolved, then there was selective pressure that made it evolve and thus, presumably, it is advantageous in some way.