A flight simulator models the plane from the viewpoint of the pilot, and is an example of rendering, which I covered. A flight simulator does not attempt to model the entire world. So the flight simulator is like fooling one person, which I agree would be most effectively handled by simulating only the things the person sees.
But the simulation in the OP is nothing like this. And that type of simulator is clearly not going to be spinning off other simulators.
[quote=“Mangetout, post:118, topic:755543”]
Nothing especially complex, no. But I don’t think any of us here can claim to have written a simulated universe.
[/quote]
Remember, the speculation that we are simulated depends on simulated universes spinning off further simulated universes, whose inhabitants can in turn build simulators of their own. The type of simulated universe with little VR screens around one or even many simulated people does not do this.
Given the complexity of simulation, I don’t think that the assumption that the simulator writers will be efficient is too tough - we are talking massive amounts of work and time here. Having an efficient simulator inner loop does not involve culture referents. Simulating the world around one entity (or maybe a few) is possible, but creating a consistent shell for billions is a bit tougher. And it might be more than billions for a larger intelligent culture than what we find on Earth.
Exactly that. It certainly does not have to be consistent with the outer world. But G here better be the same as G in Andromeda. Though it could be different in the universe the simulator writers live in.
Everyone else in this thread accepts that a simulation does not have to detail the entire world. You’re the only one with this linguistic hang-up.
Accept that we’re using the word “simulation” to mean “rendering” and move on.
With regard to the last sentence, Putnam was a much-admired philosopher and computer scientist, but his rejection of what has come to be called psychofunctionalism was as misguided as his inexplicable religiosity in his later years. The computational theory of mind has in fact come to dominate much of modern cognitive science, even if it has its detractors – many of whom seem to be philosopher dilettantes like Searle and Dreyfus.
As a general observation, many of the arguments you’re making could apply to any AI, including one that was beating you at chess, or a sentient one expressing genuine existential angst, or better still, a human being – which according to empirical cognitive models is just AI without the “A”. You’re focused on the arbitrary nature of the computational process when you should be focused on its interactions through a well defined interface with a pertinent environment, which is precisely what allows empirical observations to be made and conclusions to be drawn.
With regard to the above comment specifically, the argument might seem to get murkier when that environment itself is virtual, but it isn’t. An intelligent agent that acts a certain way in a virtual environment could be presumed to act the same way in the real world – the roles are completely interchangeable. In a practical sense that’s why simulations have real-world value in testing algorithms or proposed courses of action. As a side note, having spent some time recently playing with the HTC Vive VR system I am vividly conscious of the lack of subjective distinction between virtualization and reality. Those arbitrary 1’s and 0’s, when enabled through an appropriate interface, can be pretty damn awesome!
No, it isn’t like that at all. The problem with your monkey example is that while I might be able to devise an encoding scheme that interprets his existing gibberish as a work of Shakespeare, such an interpretation has no predictive value and will not similarly encode the monkey’s future random typing, hence it can be shown to be spurious. Further to what I said above, if I can devise a suitable interface that exposes intelligent behavior in what was otherwise an apparently arbitrary computational process (or if one already exists), then I have discovered (discovered, not falsely inferred) genuine intelligence. The appropriate analogy here is not a monkey, but a brilliant foreign writer who appears to be typing gibberish until we suddenly understand the language he’s writing in.
At least in theory, it’s possible—it’s a necessary prerequisite for quantum computation, for instance, since all quantum operations are reversible. Classically, the Toffoli gate suffices for universal computation, and that’s reversible.
In practice, of course, 100% efficiency is at best asymptotically achievable; but at least, there’s no upper limit to that efficiency.
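For concreteness, here's a toy sketch (purely illustrative, my own; nothing beyond textbook facts about the gate) showing that the Toffoli gate is its own inverse and recovers AND and NOT when some inputs are held fixed:

```python
# The Toffoli (controlled-controlled-NOT) gate: flip the target bit c
# if and only if both control bits a and b are 1.
def toffoli(a, b, c):
    return a, b, c ^ (a & b)

# Reversibility: applying the gate twice restores every input triple.
for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert toffoli(*toffoli(a, b, c)) == (a, b, c)

# AND from Toffoli: set the target to 0 and read it back as a AND b.
def and_gate(a, b):
    return toffoli(a, b, 0)[2]

# NOT from Toffoli: set both controls to 1 so the target always flips.
def not_gate(c):
    return toffoli(1, 1, c)[2]

print(and_gate(1, 1), not_gate(0))  # -> 1 1
```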
What I don’t buy is that there are any simulated entities at all, self-aware or not: there are simulations of entities, if the right interpretation is imposed upon the physical process subvening the computation, but those aren’t entities themselves.
The problem is really this: if I give you a computer, sans peripherals, with no idea as to how its data is encoded, then it is impossible for you to figure out what computation is being performed by this computer. Or, rather, what computation is being performed is down to an arbitrary choice of yours: using the right decoding, you can conceive of the computation as yielding the decimal expansion of pi, as proving, one by one, all of the theorems of ZFC, or as rendering a beautiful scenic landscape with cows grazing in a field.
So the question ‘what computation is being performed?’ has no unique answer, and its answer depends on arbitrary choices by an interpreting observer. Hence, the question ‘is a conscious entity being simulated?’ likewise has no unique answer. But then, the consciousness of the being in the simulation—should there be one—depends on an arbitrary choice of the interpreting observer. Thus, one could call a conscious being into existence by simply changing the mapping between physical states of the computational system, and the logical states of the simulation!
To me, that’s a reductio of the idea that simulations could contain conscious entities, or in fact, any entities at all. What they do is manipulate data that we can then interpret as being pertinent to certain entities—say, the weather, with cloud cover, regions of high and low pressure, etc. There’s no weather inside the computer, but merely symbolic structures that, by an act of arbitrary interpretation, can be seen as containing data relevant to the weather. But there’s no more danger of that yielding a conscious being than there is of a conscious being suddenly existing if I were to read a really detailed description of it: descriptions are not the things they describe.
In light of the above, what you get out of a simulation is a description, not the thing it describes. This may certainly help you answer questions about these things, by producing a description in some novel situation, but should not be taken as evidence for there actually being something like the thing simulated within the simulation—since after all, I could take the simulation to be about something completely different.
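The point can be made with a deliberately trivial sketch (my own toy example; the state labels and both decodings are made up): the same bare state history, read through two different decodings, counts as two different computations.

```python
# The bare physical state history, with no interpretation attached.
states = ["s0", "s1", "s2", "s3"]

# Decoding A: read the states as successive values of a counter.
decode_as_counter = {"s0": 0, "s1": 1, "s2": 2, "s3": 3}

# Decoding B: read the very same states as weather reports.
decode_as_weather = {"s0": "sunny", "s1": "rain", "s2": "fog", "s3": "snow"}

print([decode_as_counter[s] for s in states])  # [0, 1, 2, 3] -> "it's counting"
print([decode_as_weather[s] for s in states])  # ['sunny', 'rain', 'fog', 'snow'] -> "it's a weather model"
```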
So if you read a really, really, really detailed description of a conscious being, you’ll actually create a conscious being? I don’t think my fantasy has such creative powers.
A simulation is not the thing simulated—it can’t be, since different observers may disagree about what’s being simulated, while they can’t disagree about the thing itself, if presented with it. The map is not the territory, simulated water isn’t wet, a simulated digestive system doesn’t metabolize, and so on.
Yes, maybe. Thinking of the world as a simulation is not much more difficult than thinking of the world as brick and mortar, trying to account for matter and its origin. It’s like trying to decide if the world is amazon.com or WalMart. Either is equally plausible.
I don’t agree. Our simulation almost certainly does detail the entire world.
The cosmic microwave background has existed since shortly after the big bang. This is a detail that has been consistent and measurable to anybody at any time for more than 13 billion years. The idea that this sort of detail is merely “rendered” upon the measurement of a conscious entity is then bestowing a sort of omnipresent consciousness on the simulation itself. Which is taking a presumption and adding a second layer of presumption.
Of course, quantum mechanics seems to indicate exactly what I just dismissed; that the act of measurement creates (or renders) reality. How to reconcile all of this I am not sure. But I’m not keen to accept that a clockwork universe that has unerringly adhered to its laws of physics is merely “rendering” itself for conscious entities. Perhaps this is the case, but it seems to add unneeded complexity that does not seem congruent with the travails of science.
Actually, the idea that quantum mechanics saves some computational effort, either due to introducing a sort of ‘finite resolution’, or by creating results only during measurement, is profoundly mistaken: it is in fact much harder to simulate a quantum system than it would be to simulate a classical system. The reason for this is that the number of parameters needed to specify a quantum state scales exponentially with the number of particles, so even for systems of a few tens of particles you quickly run out of computational capacities, while classical systems of tens of thousands of particles can be simulated.
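To put rough numbers on that (a back-of-the-envelope sketch; the byte counts assume one double-precision complex amplitude per basis state and six doubles per classical particle, which are my own toy assumptions):

```python
# Memory needed to store a state exactly, in bytes.
def classical_state_bytes(n_particles, doubles_per_particle=6):
    # e.g. three position and three momentum components per particle
    return n_particles * doubles_per_particle * 8

def quantum_state_bytes(n_qubits):
    # a general n-qubit state has 2**n complex amplitudes
    return (2 ** n_qubits) * 16

for n in (10, 30, 50):
    print(n, classical_state_bytes(n), quantum_state_bytes(n))
# 50 qubits already need roughly 18 petabytes of amplitudes, while
# 50 classical particles fit in a few kilobytes.
```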
But anyway, quantum or classical simulation: a simulation really is just a description of the system simulated; and a description of a system is not the same as the system. Hence, a simulated world is not a world.
The simulation would be “omnipresent” by definition. The sim is our reality.
It could trivially be programmed to be aware of where its simulated conscious minds are looking, and render those parts of the environment in more detail.
Again, we actually do this in real life, in flight simulators. We don’t need “omniscient consciousness.” Just lasers that track the eye-movements of the subjects.
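In toy form, that bookkeeping looks something like this (an illustrative sketch only; the scene, the coordinates, and the fixed 'fovea radius' are all made up):

```python
# Render objects near the tracked gaze point at high detail, the rest at low detail.
def detail_level(obj_pos, gaze_pos, fovea_radius=0.2):
    dx = obj_pos[0] - gaze_pos[0]
    dy = obj_pos[1] - gaze_pos[1]
    return "high" if (dx * dx + dy * dy) ** 0.5 < fovea_radius else "low"

gaze = (0.5, 0.5)  # from the eye tracker, in normalized screen coordinates
scene = {"cockpit dial": (0.52, 0.48), "distant mountain": (0.1, 0.9)}
for name, pos in scene.items():
    print(name, "->", detail_level(pos, gaze))
# cockpit dial -> high, distant mountain -> low
```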
But with near-infinite usage, even highly efficient simulations will take lots of power - and we need lots of simulations to make the hypothesis in the OP work.
I think you need to define conscious entity. A simulation is clearly more than a description - a simulation of a computer can and does perform computations, which I doubt a description can do. Clearly a simulated conscious entity inside the simulation is not the same as a conscious entity outside it, even if the simulation somehow tries to reproduce the “real” entity. But if the simulated conscious entity can wonder about its consciousness, it is then conscious by definition, isn’t it?
Certainly consciousness is not going to pop into being when you turn on the simulation - but if we evolved consciousness, I don’t see why a simulated entity couldn’t also.
As for the weather and the like, our minds don’t experience the weather directly, but rather the output of how the weather affects our various senses. We have no way of knowing for sure whether the weather is real or being fed to us through a test rig.
I’ve done lots of computation using various levels of simulation - damn slow, but more or less identical to running it on the actual computer. If consciousness is a form of computation, then I fail to see why a simulated entity cannot be conscious.
The simulated computer is not the real computer, of course, but we’re not claiming it is.
Is a simulated tree a tree? No, but if it produces output that, when fed to the inputs of a simulated entity, causes that entity to perceive a tree, then it is just as good within the world of the simulation, of course.
If you use a transporter to perfectly reproduce a conscious being, do you have a conscious being? If yes, if you use the information you collected to make a copy to feed a simulation at as low a level as you wish, do you have a conscious being inside the simulation? I’d say yes, unless you believe in something unsimulatable like a soul.
When you talk about a map you are talking about a static representation of something. However the simulation we are speaking of may not correspond to anything in the outside world - certainly we simulate a processor before that processor exists in silicon. So if we have a conscious entity inside our simulation, evolved from primitive beginnings according to basic principles, it can be conscious without being like anything on the outside.
Clearly the simulated thing does not affect the outside world directly. But that isn’t important. A simulated digestive system can provide simulated energy to the simulated being and produce simulated poop.
Consciousness and digestion aren’t things, they are processes. Simulators do processes very well.
I agree, but that is not the question at hand. The question is whether a simulated world is distinguishable from the real world by entities within the simulation. For those outside, it clearly is.
I agree, too. I think HMHW is referencing his idea that the whole simulation argument is fundamentally nonsense because of the arbitrary interpretation that can be applied to any computational process, but as I tried to show before, while the latter is true the conclusion doesn’t follow.
The real and fundamental question at hand is whether consciousness can be achieved by computational means, a question that I think most cognitive scientists today would answer in the affirmative. If one accepts this kind of functionalism, then one must also accept the simulation argument. It must follow that one can create and observe a simulated world populated by conscious beings, and one can observe that they are self-aware and have autonomous thoughts and ideas. At some point one might observe some of them arguing about whether or not they’re living in a simulation. The intriguing and fundamental question isn’t about the nature of computation but about the nature of consciousness.
I wonder if it even is consistent though. Not everyone on earth agrees about the fundamental nature of the universe. How about if all views on religion, UFOs, magic, etc are symptoms of that inconsistency?
(I’m not ever so serious with that argument BTW)
I think you’re underestimating the overhead that would come with keeping track of where all the simulated consciousnesses are ‘looking’. After all, in the flight simulator case, you can simply take the direction of the pilot’s gaze as input to the simulation; but in a simulation that’s meant to include its observers, you have to simulate that, as well. So say you shift the gaze of your simulated observer; you must do that based on the data on the ‘reduced’ level of detail. But then, the detail must be increased in such a way that no inconsistencies occur; i.e., the simulation must already include sufficient detail to guarantee that there will not be any inconsistent perceptions presented to the observer. But this means that the lower-detail version acquires an effective dependence on the higher-detail version. Such recursive dependencies are always hard to keep track of, and in a simulation containing many conscious observers, this has to be done for each one of them.
In the end, there likely won’t be any savings via this procedure, and you’re better off simulating the whole deal at full detail. In fact, it’s there that you can expect some savings: after all, you only have to simulate a set of fairly simple physical laws, not try to come up with a set of heuristics doing the same job in such a way as to create a seamless user experience.
I’m not sure this is a problem. As long as you have enough power, you can increase the amount of computation being performed; once power gets scarce, you improve the architecture. In principle, there is no limit to this procedure.
Well, a simulation is perhaps an interconnected series of descriptions, where one description follows from the former by syntactic manipulations—i.e. where the form of the prior description suffices to fix the form of the latter one. But an ‘animated’ description still doesn’t rise beyond the level of mere description. And additionally, one can equivalently well consider the entire history of the computation as a description—a description of how a given thing changes, but a description nevertheless. There is no difference between objects and processes here, because ‘time’ is merely a parameter used to index certain elements of the description.
The argument that ‘a simulation might not do digestion, but it can do arithmetic, so if consciousness is more like arithmetic than digestion, it will be present in a simulation’ is a very often made one; but it’s flawed: a simulation doesn’t do arithmetic at all. You can interpret its outputs as pertaining to arithmetical questions; however, I can interpret them completely differently. If I hand you a black box computer, there is no way you can analyze it and proclaim ‘this computer is doing arithmetic’. As with a description of doing arithmetic, if you use the right code, you can ‘read’ it as doing arithmetic; but with a different, and no less justified code, it might simulate the weather instead.
That’s the reason why computation isn’t sufficient to underwrite minds: in order for a computation to be about any particular thing, one requires an interpreting observer that makes it be about that thing. Thus, in order for a computation to be, say, a model of a mind, one would require a mind to interpret it as such. Clearly, positing such a thing as a theory of how minds work runs into immediate circularity: if that were the case, then our mental content would require being observed by a mind in order to be definite; but said minds content would again require a mind, and so on, ad infinitum. We enter a vicious regress known as the homunculus fallacy.
You’ve performed lots of simulations that you can interpret as being simulations of computers; but your interpretation is just one among an infinite multitude, and not in any way preferred over the others (again, think about how one can interpret the same physical system as an AND- or OR-gate, or indeed, as any Boolean function of two variables).
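To spell the gate example out (a toy sketch of my own; the 'device' and both encodings are hypothetical): one fixed physical behavior reads as AND under one voltage convention and as OR under the inverted convention.

```python
# The fixed physics: output is HIGH exactly when both inputs are HIGH.
def device(v1, v2):
    return "HIGH" if (v1 == "HIGH" and v2 == "HIGH") else "LOW"

# Interpretation 1: HIGH means 1, LOW means 0.
enc1, dec1 = {0: "LOW", 1: "HIGH"}, {"LOW": 0, "HIGH": 1}

# Interpretation 2: HIGH means 0, LOW means 1 (the inverted convention).
enc2, dec2 = {0: "HIGH", 1: "LOW"}, {"HIGH": 0, "LOW": 1}

for a in (0, 1):
    for b in (0, 1):
        as_and = dec1[device(enc1[a], enc1[b])]  # reads as a AND b
        as_or = dec2[device(enc2[a], enc2[b])]   # the same device reads as a OR b
        assert as_and == (a & b) and as_or == (a | b)
print("same device: AND under one reading, OR under the other")
```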
That’s why consciousness can’t be wholly computational: if it were, it would need an interpreting mind in order to fix its contents; but as detailed above, this leads to vicious regress.
The possibility of teleporting a being does not entail that what’s transferred is only information, and hence, does not support the computational theory. And you need not believe in souls or other mystical mumbo-jumbo in order to reject computationalism.
What’s key here is the notion of structure: a system models another if it has (or can be interpreted as having) the same structure, the same set of relations between its constituents. (In a sense, structure is nothing but information—the ‘differences that make a difference’.) So, for instance, the relation ‘is a direct paternal ancestor of’ can be modeled by books of varying thickness, where the thicker book stands for the ‘father’ of the next thinnest one. All the data one can gather about the ancestor-relation can equally well be gathered using the books: if Moby Dick stands for Jeremiah, The Tropic of Cancer for John, and there are three books of intermediate thickness between them, then I can immediately read off that Jeremiah is John’s great-great-grandfather.
Thus, the set of books models the ancestor-relationship. However, it does so only under the particular interpretation imposed upon it; it might just as well be the ordering of the wealthiest people on Earth according to their fortune. All that needs to change for that is the interpretation. And of course, the reason for that is that the set of books is neither the one nor the other: it’s just a set of books.
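In code, the books example comes out something like this (a toy sketch; the three intermediate titles are invented stand-ins):

```python
# Books listed from thickest to thinnest.
books = ["Moby Dick", "Book A", "Book B", "Book C", "The Tropic of Cancer"]

def relationship(gap):
    """Read an ancestry relation off the number of intermediate books."""
    return "father" if gap == 0 else "great-" * (gap - 1) + "grandfather"

gap = books.index("The Tropic of Cancer") - books.index("Moby Dick") - 1

# Interpretation 1: thicker book = earlier paternal ancestor.
print("Jeremiah is John's", relationship(gap))  # great-great-grandfather

# Interpretation 2: the very same ordering read as a ranking by wealth.
print("The person Moby Dick stands for is", gap + 1,
      "places above the one The Tropic of Cancer stands for")
```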
There are facts beyond the structural facts that make a thing what it is, and it’s those facts that fail to transfer from the thing itself to a model thereof (and a good thing that is, too, since otherwise the model would just become the thing it models). The reason such facts need to exist is Newman’s problem: the structure, or set of relations, on its own determines nothing about a given set of things other than its cardinality. Thus, if we believe we know more about groups of things than how many there are of them, we must believe in properties not reducible to relations.
And consequently, an actual consciousness is as different from a simulated consciousness as the set of paternal ancestors of John is from the set of books in his library.
I’m not sure this is true any longer; certainly, recent years have seen a changing of the tides on that question, and it’s no longer unusual to see articles skeptical of computational functionalism due to respectable cognitive scientists (and not just pesky philosophers) in the popular science press—here are two examples from the past month, the former by Bobby Azarian, who’s a cognitive neuroscientist, and the latter by the psychologist Robert Epstein. Note that the first one makes a compelling argument along the lines I’ve been advocating, while the second is essentially a complete misunderstanding on how computers work, which I only include to show the rising current of anti-computationalism.
I emphatically disagree. Azarian is, to put it kindly, an insignificant lightweight. And when he makes statements like “The Jeopardy and Chess playing champs Watson and Deep Blue fundamentally work the same as your microwave. Put simply, a strict symbol-processing machine can never be a symbol-understanding machine” he places himself firmly in the Hubert Dreyfus camp of cluelessness. He goes on to claim that “consciousness is a biological phenomenon” which is less ridiculous because it’s a truism, but it’s also irrelevant because the functionalist principle of multiple realizability means that it doesn’t have to be. That is, consciousness as a biological phenomenon happens to be a particular implementation of it, not the fundamental nature of it.
I didn’t even look at your second cite because of your own admission that the guy doesn’t know what he’s talking about.
A more competent example of an anti-computationalist might be Stephen Kosslyn, a theorist in mental imagery who argues for a depictive model of imagery in which it is supposed that we remember images in a sort of literal spatial sense rather than a propositional computational model. But this view is strongly opposed by others, and by and large I think you’ll find that the computational theory of mind is common in modern cognitive science.
Can I at least get you to agree that if one accepts the computational view, then the simulation argument is sound?

[quote]
I emphatically disagree. Azarian is, to put it kindly, an insignificant lightweight. And when he makes statements like “The Jeopardy and Chess playing champs Watson and Deep Blue fundamentally work the same as your microwave. Put simply, a strict symbol-processing machine can never be a symbol-understanding machine” he places himself firmly in the Hubert Dreyfus camp of cluelessness.
[/quote]
Who was, of course, right in many of his criticisms, as is now commonly accepted, and signaled by the move from GOFAI to things like subsymbolic approaches and the like. (See e.g. the section on his vindication on wikipedia.)
[quote]
He goes on to claim that “consciousness is a biological phenomenon” which is less ridiculous because it’s a truism, but it’s also irrelevant because the functionalist principle of multiple realizability means that it doesn’t have to be.
[/quote]
Sure, but you can’t very well argue for functionalism being right on the basis of assuming multiple realizability, which assumes functionalism to be right.
[quote]
A more competent example of an anti-computationalist might be Stephen Kosslyn, a theorist in mental imagery who argues for a depictive model of imagery in which it is supposed that we remember images in a sort of literal spatial sense rather than a propositional computational model.
[/quote]
Kosslyn’s model is anti-computationalist? In what sense? Anyway, to me, Kosslyn falls into effectively the same traps: he projects mental imagery on an internal TV screen; so who is watching?
A recent more mainstream model in which a simulated consciousness will explicitly be unconscious is Tononi’s Integrated Information Theory: only those systems that have a high degree of information integration—essentially, that maximize mutual information across all partitionings of the system—are conscious; however, the integrated information of a computer model is close to zero.
I think he also misses the mark—or rather, his model ultimately doesn’t give an answer to the questions it sets out to answer, since we aren’t told what happens in order to produce subjective experience from integrated information—but here’s an explicit and mathematically sophisticated model in which one doesn’t need to believe in souls or any sort of nonphysical stuff, and in which simulations don’t call the thing simulated into being.
[quote]
Can I at least get you to agree that if one accepts the computational view, then the simulation argument is sound?
[/quote]
Even if minds were due to computation, a simulation of a world still wouldn’t be a world, any more than its description on paper would be. I suppose if by some magic a description of a mind would call into being an actual mind, having experiences and so on, then one might make a case that its experiences would be indistinguishable from those of a real mind, and so it would believe itself to be in a real world—but that just illustrates that if you allow magic, basically anything can happen.
Of course the answer to the OP is that we can’t rule it out.
I would tend towards it not being a simulation for two reasons:
- Occam’s and burden of proof. It doesn’t matter how reasonable the hypothesis seems, that’s not sufficient grounds for making a positive assertion.
- It doesn’t seem designed. I, like those around me, spend a lot of my time doing mundane stuff. Now those could be fake memories, but then the hypothesis has simply merged with solipsism. Or these things are just filler in the simulation; but note simulations cannot simulate everything in perfect detail…seems a lot of processing would be pointless.

[quote]
I wonder if it even is consistent though. Not everyone on earth agrees about the fundamental nature of the universe. How about if all views on religion, UFOs, magic, etc are symptoms of that inconsistency?
(I’m not ever so serious with that argument BTW)
[/quote]
Ed Fredkin used the “miracles are bugs” argument. But the inconsistency would be consistent, if you catch my drift.

[quote]
I’m not sure this is a problem. As long as you have enough power, you can increase the amount of computation being performed; once power gets scarce, you improve the architecture. In principle, there is no limit to this procedure.
[/quote]
At the limit, you get God - infinite power and time. God can clearly create as many simulations as he wants. But, can God create a simulation of God with infinite power? And how long does it take for a simulation to reach the point where almost zero-energy computation is possible? Maybe the outer reality has no end of time and no entropy, but do all the simulated universes? Ours does not.
[quote]
Well, a simulation is perhaps an interconnected series of descriptions, where one description follows from the former by syntactic manipulations—i.e. where the form of the prior description suffices to fix the form of the latter one. But an ‘animated’ description still doesn’t rise beyond the level of mere description. And additionally, one can equivalently well consider the entire history of the computation as a description—a description of how a given thing changes, but a description nevertheless. There is no difference between objects and processes here, because ‘time’ is merely a parameter used to index certain elements of the description.
[/quote]
A simulation is a state machine in a sense, with a state, inputs, outputs, and a next state function. (Which might be non-deterministic.) Is the next state function a description? If it is, how is the next state function of our universe (possibly non-deterministic also) not just a description too. If we are conscious, then you can indeed get consciousness from just descriptions.
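A minimal sketch of that state-machine picture (my own toy framing; the 'law of physics' here is invented purely for illustration):

```python
# A simulation as a state machine: state, inputs, outputs, next-state function.
def run_simulation(initial_state, inputs, next_state, output):
    state, history = initial_state, []
    for inp in inputs:
        state = next_state(state, inp)
        history.append(output(state))
    return history

# Toy "universe": the state is one number, the law is "add the input, then double".
trace = run_simulation(
    initial_state=1,
    inputs=[0, 1, 0, 2],
    next_state=lambda s, i: 2 * (s + i),
    output=lambda s: s,
)
print(trace)  # [2, 6, 12, 28]
```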
[quote]
The argument that ‘a simulation might not do digestion, but it can do arithmetic, so if consciousness is more like arithmetic than digestion, it will be present in a simulation’ is a very often made one; but it’s flawed: a simulation doesn’t do arithmetic at all. You can interpret its outputs as pertaining to arithmetical questions; however, I can interpret them completely differently. If I hand you a black box computer, there is no way you can analyze it and proclaim ‘this computer is doing arithmetic’. As with a description of doing arithmetic, if you use the right code, you can ‘read’ it as doing arithmetic; but with a different, and no less justified code, it might simulate the weather instead.
[/quote]
Since what we mean by arithmetic is rigorously defined, we can certainly determine that our simulation is doing arithmetic - at least to the point where we finish looking at examples. We can confirm the hypothesis to any level of certainty we wish. Someone can claim that the computer is doing zithing instead, but to a given level of certainty we can show that zithing is equivalent to arithmetic - or the person is just wrong.
But, even more important, we are not talking about black box computation here. Anyone creating a simulator can build in observability points. We don’t just have to look at the Turing machine tape, we can look at its programming also.
Since arithmetic is fairly simple, I suspect that given the structure of a program we can prove it is doing arithmetic. We can’t prove that it is doing some more general type of mathematics, of course.
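Something like this, as a toy sketch (the black box here is a stand-in I made up; a single counterexample refutes the hypothesis, and each passing random trial raises our confidence):

```python
import random

def black_box(x, y):
    return x + y  # stand-in for the opaque computation under test

def looks_like_addition(box, trials=10_000, bound=10**6):
    for _ in range(trials):
        x, y = random.randint(-bound, bound), random.randint(-bound, bound)
        if box(x, y) != x + y:
            return False  # one counterexample is enough to refute
    return True  # consistent with addition over all sampled cases

print(looks_like_addition(black_box))  # True
```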
[quote]
That’s the reason why computation isn’t sufficient to underwrite minds: in order for a computation to be about any particular thing, one requires an interpreting observer that makes it be about that thing. Thus, in order for a computation to be, say, a model of a mind, one would require a mind to interpret it as such. Clearly, positing such a thing as a theory of how minds work runs into immediate circularity: if that were the case, then our mental content would require being observed by a mind in order to be definite; but said minds content would again require a mind, and so on, ad infinitum. We enter a vicious regress known as the homunculus fallacy.
[/quote]
This argument, and you, suppose the necessity of an outside observer. However, if consciousness is the feedback of our thought processes observing our thought processes, no such observer is required. Which is basically the “I think therefore I am” argument. Our subconscious minds do the same kind of computation as our conscious minds without being observed in the process. So do animal minds.
Now the meaning of this computation to some extent may be determined by the outside - and there may be no inherent meaning - but consciousness does not imply meaning.
[quote]
You’ve performed lots of simulations that you can interpret as being simulations of computers; but your interpretation is just one among an infinite multitude, and not in any way preferred over the others (again, think about how one can interpret the same physical system as an AND- or OR-gate, or indeed, as any Boolean function of two variables).
[/quote]
No - we can verify that our simulation is correct against the actual computer. And do, of course. And against the specification of that computer. Of course there are an infinite number of wrong interpretations.
And a universe simulation might not have a meaning outside of itself. Unlike simulating arithmetic, there are no expected outputs. The meaning of the simulation might well be a subject of infinite argument, but so is the meaning of life. The process of the simulation is not.
[quote]
That’s why consciousness can’t be wholly computational: if it were, it would need an interpreting mind in order to fix its contents; but as detailed above, this leads to vicious regress.
[/quote]
The interpreting mind is the mind itself. Is the only mind in the universe not conscious by definition, then?
[quote]
The possibility of teleporting a being does not entail that what’s transferred is only information, and hence, does not support the computational theory. And you need not believe in souls or other mystical mumbo-jumbo in order to reject computationalism.
[/quote]
So you think that a conscious entity teleported such that all atoms and molecules, to whatever level of detail you wish, are perfectly reproduced is no longer conscious? You are invited to describe what is missing, if not a “soul”.
[quote]
What’s key here is the notion of structure: a system models another if it has (or can be interpreted as having) the same structure, the same set of relations between its constituents.
[/quote]
A sequence of characters has meaning only under interpretation. That is trivially true. But it has no relevance that I can see to consciousness, which is at its heart a feedback mechanism.
[quote]
And consequently, an actual consciousness is as different from a simulated consciousness as the set of paternal ancestors of John is from the set of books in his library.
[/quote]
Say we meet a conscious alien entity, whose brain structure is totally different from ours. Do you accept that such an entity can exist or do you deny it? How do you test this alien for consciousness? If you can come up with such a test, why couldn’t you apply it to a simulated consciousness?
Each thing has different views. Your books have different views. From the outside we can view our brain as thoughts or as neurons. We can view a simulated consciousness, from the outside, as thoughts or electrons and programs.
[quote]
I’m not sure this is true any longer; certainly, recent years have seen a changing of the tides on that question, and it’s no longer unusual to see articles skeptical of computational functionalism due to respectable cognitive scientists (and not just pesky philosophers) in the popular science press—here are two examples from the past month, the former by Bobby Azarian, who’s a cognitive neuroscientist, and the latter by the psychologist Robert Epstein. Note that the first one makes a compelling argument along the lines I’ve been advocating, while the second is essentially a complete misunderstanding on how computers work, which I only include to show the rising current of anti-computationalism.
[/quote]
Sorry, the first one is just as pointless as the second. People worrying about strong AI may or may not be foolish, but dismissing the worries because all we have today is weak AI is kind of dumb. As far as I can tell AI researchers haven’t made a lot of progress in doing strong AI in the 45 years I’ve been looking at it. But that we don’t have it yet does not mean it is impossible - it might only mean that weak AI is simpler and it pays better. When I took AI there was kind of an assumption that if you managed to write code to emulate various things we can do, and stitched them together (vision, problem solving, planning) you’d get strong AI. That is clearly not true.
Azarian still thinks the Chinese room argument is brilliant - that sums it up for me.
BTW, him dragging in Turing machines is a sign of his lack of understanding. The implied argument is “Turing machines are simple, consciousness is complicated, therefore Turing machines, and thus computers, cannot be conscious.”
Makes me want to stick an infinite tape down his throat.