And you (and I for that matter) are just automatons, running a program developed over billions of years of trial and error, designed with one goal in mind: self-reproduction. Anything else you experience that does not further that goal is an evolutionary byproduct.
The fact that your brain is made of meat, not silicon, is hardly the trump card some seem to believe.
Well, not exactly. The embryonic process that created my brain developed through evolution, but my brain created and programmed itself through interaction with its environment. I am an automaton running my own program, which includes frequent updates.
Oh yeah, and my brain is an electrochemical parallel/serial analog processor, not silicon-based.
Neither this nor what preceded it addresses the point. What is magically different about organically generated chemical/electrical signals that makes them categorically different from identical signals generated by non-organic sources?
Your computer analogies are intended to disprove any comparison to human brains, but I’m seeing just the opposite. A computer is created to run programs; that is its only purpose. Referring to the relationship a computer program has to a computer as ‘parasitic’ is an interesting viewpoint, but it doesn’t seem accurate. And a computer very much has influence on what the program does, or what the program is capable of doing. A standard PC-style computer you purchase will have a CPU with some firmware, and might come with a hard drive with an operating system already installed but otherwise blank. A newborn baby will have drives and instincts (the firmware and OS) but no experiences, skills, or memories yet acquired (the installed programs and files).
1) I believe the brain is computational, though there are other things going on, such as electrochemical reward and punishment systems, that are not strictly computational. 2) This may be true; the information in a human brain may not be in a transferable form. My point was: assuming it were possible, we would have the accumulated memories, skills, and experiences of a human brain that has had the benefit of millions of years of organic evolution, installed in a computer that can simulate all of the experiences and electrochemical emotional processes it’s used to. Is it still conscious? The argument you and some others have made, which I think could be a valid one, is that it requires the entire organic, evolutionary journey we humans have made to be fully conscious. So in this case, does consciousness remain, or is there still something special about the organic nature of the brain that alone allows for it?
Oh, come on. That kind of overly simplistic comparison is beneath your other arguments. Likewise the sex doll stuff.
I agree. It would also likely be a far cry from a human. But again, I don’t think there’s anything special about an organic system that would exclusively lend itself to experiencing ‘consciousness’.
To dig further into your analogy: I think it is a false analogy, as you say, in the sense you’re describing, to compare an ordinary computer to a human mind. An ordinary PC-style computer is a Swiss Army knife, with spreadsheet programs, photo-manipulation programs, word processing, and so on, each with its own tasks and purpose. A human mind has different ‘programs’ running, but they are all more like subroutines dedicated to one main purpose: keeping the body alive and able to reproduce itself. The different ‘programs’ or ‘subroutines’ in the human mind are governed by a higher brain function (the frontal lobe?) which acts as a ringleader and gives us the illusion that we are one conscious mind.
How so? We are discussing consciousness in the context of simulating human activity. Current commercial development in sex mannequins is state-of-the-art user interface work. It’s a $30 billion market.
Well said. Whatever personality the computer displays is in the externally supplied program.
The point I am striving to make is that consciousness is an emergent property of a self-organized system. Consciousness is not something that is added to the brain. Consciousness is the brain.
I don’t believe we ever got around to agreeing on a working definition of consciousness.
How about: Consciousness is a state of environmental awareness sufficient to allow performance of non-arbitrary volitional tasks.
Human consciousness is the product of billions of years of evolution. There is no way we will be able to exactly replicate this phenomenon in the next few decades, or even centuries. Unless we can manufacture some highly-competent non-human artificial minds in the interim, who might be persuaded to perfect the task for us.
My ‘oh, come on’ comment was mainly directed toward your rhetorical ‘If the figures in the wax museum are perfect reproductions, are they alive?’ Uh, no.
As for the sex doll argument, I confess I’m not up on the latest tech in that area, but I’d guess the ‘realism’ bar is quite a bit lower than needing advanced AI to convince the doll’s companion that it’s real enough for the intended purposes. That said, if we ever do get to a point where realistic humanoid animatronics equipped with Turing-test-passing AI become readily available, ‘sex doll’ would probably be a very popular use for them.
‘Emergent’ is a word that has been used pretty frequently in this thread, by you and others. I still don’t see why a sufficiently advanced artificial system that is designed and programmed to be self-organized and self-learning cannot eventually develop an ‘emergent’ property of consciousness itself.
Seems like it might be relevant, though. If all of science fiction is any indication, the main use cases for AI will be a large sarcastic robot if it’s male and a weaponized sex doll if it’s female.
An automaton mimicking human behavior, no matter how convincingly, is not necessarily experiencing human sensations or feelings. In fact, you probably wouldn’t want it to. I don’t want my self driving car to feel bored taking me to work every day. Or my combat drone to experience PTSD. Or my sex doll to feel shame and revulsion because I happen to be down with some funky shit.
But I suppose that’s where you get into the philosophical questions regarding human consciousness vs. a highly convincing AI. If we make an AI that is a highly realistic facsimile of human consciousness, is it really conscious, or are we just projecting some sort of anthropomorphism onto what is really just a fancy machine?
Now that’s a good question, but you can ask the same for a person learning a new language the traditional way. When does that person “know” a language? When they think in it? When they don’t translate into their native language?
The original Chinese Room problem was posed to attempt to “prove” that computers cannot understand. It’s a reductio ad absurdum argument in a sense. I’m not sure if it is still true, but a lot of the arguments against AI were made by philosophers who deep down considered computers as just adding machines, and didn’t understand how systems can adapt to their environments.
None of this proves that hard AI is possible, it just fails at proving it isn’t.
I have heard the argument that AI is just a statistical process: that transformers simply find the best word, then the best word after that, and the best word after that, until the output stops. That’s pretty much true. But how do we know that’s not exactly what we do?
When I’m talking, I’m rarely individually choosing words. I just talk. Only sometimes will a person pause to consider the best word in a conscious way. Walking is a process of the brain deciding what the best muscle action is at this moment, then the next, and the next, and the next. It’s all System 1, below the conscious level. Maybe it’s a fundamentally different process, but I have my doubts.
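For what it’s worth, here is a minimal toy sketch in Python of that “best word, then the best word after that” loop. It is not a real transformer; the vocabulary and the scoring function are invented purely for illustration, standing in for the billions of learned weights a real model would use:

```python
import numpy as np

# Toy stand-in for a language model's next-word scorer. A real transformer
# computes these scores from billions of learned weights; here they are
# faked (deterministically, from the context length) just so the loop runs.
vocab = ["I", "just", "talk", "without", "consciously", "choosing", "words", "."]

def next_word_scores(context_words):
    rng = np.random.default_rng(len(context_words))
    return rng.random(len(vocab))

def generate(prompt, max_words=8):
    words = prompt.split()
    for _ in range(max_words):
        scores = next_word_scores(words)
        probs = np.exp(scores) / np.exp(scores).sum()  # softmax over the vocabulary
        words.append(vocab[int(np.argmax(probs))])     # greedily take the "best" next word
        if words[-1] == ".":                           # stop when the output stops itself
            break
    return " ".join(words)

print(generate("I"))
```

The whole thing is just “score every candidate word, take the best one, repeat,” which is what makes the comparison to unreflective talking or walking tempting.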
That’s interesting. The next word that you say, the next thought that you have, is constrained based on previous words and thoughts. Not being constrained is the symptom of a problem. Even writing comedy or a surreal piece, choosing a word which doesn’t naturally follow but which isn’t random is a real challenge.
If you have word or sentence completion turned on for text or email, it works remarkably well with so little training on what you write.
Asimov, ISTM, was only using the Three Laws of Robotics as a plot device, but the real part of the story was the gotcha where he showed how the machine brain twisted one or more of the laws to allow for the terrible or surprising “out of character for a robot” event. Those laws are, of course, far too advanced, philosophically, for a machine brain, especially at the time Asimov postulated them. I think as the years went by and he decided to merge some of his series, he needed to tweak the three laws. That’s what got him to write into his stories what he called the Zeroth Law.
What if you wrote out the AI program’s code on paper and performed all the calculations manually? If a team spent 100 years writing out enough code to “run” the program for a day, has the paper experienced one day of consciousness?
It’s infeasible and maybe even impossible in reality, but it should be possible in theory I believe.
Not really; the ‘code’ of a deep learning system is actually a weighted neural network that has learned preferential paths to achieve some end goal, but why those paths give the preferred solution is not clear for any complex problem such as image recognition. A team of programmers could not just come up with a set of rules analogous to the machine learning system, because no finite set of individual rules is ever going to bound a problem like operating a motor vehicle on public roads.
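To illustrate the contrast (with made-up names, numbers, and a two-output “policy,” purely as a sketch): a programmer’s rule is something you can read and reason about, while what a trained network stores is just arrays of weights whose “reasoning” isn’t directly inspectable:

```python
import numpy as np

# A rule a programmer might write by hand: readable and inspectable.
def handwritten_rule(speed, gap_to_car_ahead):
    # Brake if the time-to-collision is under two seconds (made-up threshold).
    return "brake" if gap_to_car_ahead / max(speed, 1e-6) < 2.0 else "coast"

# What a trained network stores instead: matrices of learned weights.
# (Random numbers here, standing in for weights learned from data; a real
# driving or image-recognition model has millions to billions of them.)
rng = np.random.default_rng(42)
W1 = rng.normal(size=(2, 16))
W2 = rng.normal(size=(16, 2))

def learned_policy(speed, gap_to_car_ahead):
    x = np.array([speed, gap_to_car_ahead])
    hidden = np.tanh(x @ W1)     # intermediate features with no named meaning
    logits = hidden @ W2         # scores for ["brake", "coast"]
    return ["brake", "coast"][int(np.argmax(logits))]

print(handwritten_rule(30.0, 20.0))  # you can say exactly *why* it answered this way
print(learned_policy(30.0, 20.0))    # the "why" is buried in W1 and W2
```

The first function is a rule; the second is just numbers, and scaling the second up is what a deep learning system does instead of accumulating more rules.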
However, one of the fundamental differences between a digital computer and a brain is that while the former can be made to perform operations very quickly and efficiently (down to the level of thermodynamic limits), it is a finite state machine that also has limitations on what it can represent; for a digital computer, all concepts are data that has to be binary in nature, and all operations are reversible, again potentially down to thermodynamic limits. In contrast, human (and other) brains are extremely thermodynamically inefficient in the way they process and store sensory inputs, but they are extremely adaptable and are quite definitely not finite state machines, as they are constantly altering in structure, both by genetic and epigenetic ‘programming’ (especially in development) and just fundamentally in how memories and behavior patterns are encoded.
You could ‘in theory’ produce algorithms in software operating on top of digital hardware that could emulate the adaptive processes of an organic brain (albeit to questionable levels of fidelity), but the process of representing such an ‘inefficient’ system in a finite state system will amplify the computational costs to the point that I’m not convinced a software simulacrum running on top of digital hardware could actually produce a ‘human-like’ consciousness. Which is not to say that a machine cognition system could not produce something like actual volition or awareness of its physical extent (again, by what measure or method?), but I don’t think it would behave much like a human brain.
I agree that the two are not the same, but I’m not convinced it matters. When you have 200 billion parameters, each of which is a 32-bit floating point number, I’m not sure that’s a substantive difference from 200 billion neurons with analog state. Maybe it is, maybe it isn’t. But the fact that we built a digital neural network and trained it on data, and human-like capability emerged suggests to me that the analog/digital divide isn’t so large.
The emergence is what keeps me thinking. Build a 5 billion parameter LLM and let it hoover up the internet and its response is gibberish. Random. Not just worse than the larger models, but completely non-functional. Increase it to 25 billion and do the same, and get the same result.
But at some number of parameters, without changing any of the code in the system around it, suddenly the thing can do poetry and write computer code and understand natural language and talk to people. It happens quite quickly and unexpectedly, like a phase change in a complex system, which is what the brain is.
I’m not saying that proves anything, but it should make it clear that the abilities of an LLM are not encoded in some algorithm that can be inspected and modified, and that thinking of it as just a bunch of computer instructions is probably not that useful. Like the human brain (and many complex systems), you can’t use reductionism to figure out how they work. Sure, you can study the biology of neurons or study transformer code, but the system you are interested in simply doesn’t exist at that level. There’s nothing about studying a neuron that will tell you what someone is thinking, or even where their thoughts and memories reside in the brain. And if you try looking at a higher level, it becomes impenetrable.
GPT-4 is rumored to have 100 trillion parameters using a new sparse tree model. It may also have the ability to do multimedia, incorporating text, pictures, and audio. And yet it may not have any new abilities at all. Or just modest improvements. Or maybe it will be sensationally better than GPT-3.5 at 200 billion parameters. We won’t know until it’s trained and available.
This seems to be the best answer so far for why machine computing is fundamentally different from organic cognition: that it’s not simply a matter of building increasingly complex computers until they achieve something akin to sentience. But though I was good with the first paragraph, I admit I didn’t fully understand the second two paragraphs. I’ve been googling ‘finite state machine’ this morning.
I also took a look at the link to the ‘Biophysics of Computation’ book on Amazon. I’m considering buying it to help myself understand the subject more thoroughly, or maybe gifting it to my son. But then I realized that pointing my son in the direction of making AI more like a human brain might be a very bad thing. We of course wouldn’t want AI to be conscious/sentient. It wouldn’t have any upside for us, and in fact would probably be unethical. We just want it to be highly capable to suit our various needs, but have absolutely no volition of its own. I don’t want to be partly responsible for my son creating Skynet.
Your son will create systems that change the question. Computer consciousness will be tossed into the pile with alchemy and he will deal with things currently unnamed.