Not really. Logical transformation of information, whether by analog or digital processes, doesn’t “feel” like it leads to consciousness.
I can see how it can lead to intelligence, which just seems like effective modeling of the environment, allowing for good predictions and problem solving in support of specific goals. I can see how flipping pieces of paper on the floor can produce output that I would consider “intelligent”.
But I don’t see how flipping pieces of paper can result in consciousness.
Sure, you can find them analogous in certain ways.
You should stop trying to describe how computers work; you don’t have the details right at all, and it doesn’t matter anyway, any more than describing how atoms work tells you about the capabilities of an op amp in a larger circuit, or of the neurons in our brains.
In my minority opinion, your printer, within its own environment, is close to conscious. How close? I’m not sure; it might even be conscious by my own definition and the given constraint.
Not knowing the specifics of your printer, I’ll use my computer as an example. It can tell whether it is running on battery or plugged in. It can tell how much battery it has left, what the CPU temperature is, what the ambient light level in the room is, and roughly how much power its peripheral components use. It also keeps track of how much I use it. When it goes on battery power, it adjusts power consumption to maximize battery life according to established parameters and all of the information it has, including its usage history, so there is a subjective result based on environment and experience.
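To make that concrete, here is a minimal Python sketch of the kind of policy I mean. All names, thresholds, and the crude runtime model are invented for illustration, not taken from any real power-management code; the point is only that identical sensors plus different histories yield different behavior.

```python
# Hypothetical sketch: the machine combines current sensor readings
# ("environment") with accumulated usage history ("experience") to
# choose its behavior. Every name and threshold here is invented.

from dataclasses import dataclass, field


@dataclass
class UsageHistory:
    """Rolling record of past session lengths: the machine's 'experience'."""
    session_hours: list = field(default_factory=list)

    def typical_session(self) -> float:
        if not self.session_hours:
            return 2.0  # default guess before any history exists
        return sum(self.session_hours) / len(self.session_hours)


def choose_power_profile(on_battery: bool,
                         battery_pct: float,
                         cpu_temp_c: float,
                         ambient_light: float,
                         history: UsageHistory) -> dict:
    """Return settings conditioned on both environment and history."""
    profile = {"screen_brightness": 1.0, "cpu_throttle": 1.0}

    # Environment: dim the screen in a dark room, whatever the power source.
    if ambient_light < 0.2:
        profile["screen_brightness"] = 0.4

    # Environment: back off the CPU if it is running hot.
    if cpu_temp_c > 85.0:
        profile["cpu_throttle"] = 0.7

    if on_battery:
        # Experience: if past sessions typically run long, conserve more
        # aggressively at the same battery level (crude runtime model).
        hours_needed = history.typical_session()
        if battery_pct / 100.0 < hours_needed / 8.0:
            profile["screen_brightness"] *= 0.6
            profile["cpu_throttle"] *= 0.8

    return profile


# Two machines see the identical situation but react differently
# because their histories differ -- the 'subjective' element.
light_user = UsageHistory(session_hours=[0.5, 1.0])
heavy_user = UsageHistory(session_hours=[6.0, 8.0, 7.0])

print(choose_power_profile(True, 40.0, 60.0, 0.5, light_user))
print(choose_power_profile(True, 40.0, 60.0, 0.5, heavy_user))
```

Run it and the first machine keeps full brightness while the second dims and throttles, even though their sensor inputs are identical. Whether that counts as anything like experience-conditioned “subjectivity” is exactly the question.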
I don’t yet have a good definition of how much more it takes for my computer to be conscious within its own simplistic environment, but I see this as a good starting point for examining the nature of consciousness. Clearly, though, that extra something that goes beyond the basic processing of sensory input to make a conscious entity will be far more complex in a human-level consciousness than in an artificially constrained one.
Crane, I think we’re missing the forest for the trees here.
Even if we concede that a computer could never simulate the human brain due to some fact of its being digital or serial or whatever, we still have no grounds for saying a computer can’t be conscious, because we have no reason to assume that the necessary requirements for consciousness include things like being analogue.
Computers can perform lots of functions that previously could only be performed by human brains.
So it is insufficient to say “Human brains are conscious, computers are unlike human brains, therefore computers can never be conscious”; the logic doesn’t follow. We could have used the same logic to argue, for example, that computers would never perform facial recognition.
Thanks for the comments. Sorry, perhaps I was more thinking out loud than constructing an argument.
We do need to identify what we mean by both ‘computer’ and ‘consciousness’. Perhaps a good computer example is Watson. IBM intentionally took on a seemingly impossible task to demonstrate the ability of their system. Its ability to play Jeopardy was amazing. It not only played the game, it was good at it. And the game was played only by doing what humans do: voice input/output, and Watson had to push the button, just like any other contestant, to answer a question (statement). I know that Watson is made up of many individual computing modules. I have no idea how it is organized or programmed, but I do know that it uses the same CPU adder technology as every other numerical computer.
So, what is the definition of ‘conscious’ and does Watson meet it? It does not meet mine above.