It seems to be the default assumption these days that computation can give rise to consciousness, and that it’s just a question of finding the right computation. As noted above, I don’t think that’s true. In my view, the idea that computation could create consciousness is essentially a category mistake, because it tries to explain our faculty of interpreting symbols in terms of more symbols. Additionally, the model of consciousness I pursue leads to propositions that are computationally undecidable, so even if there were some objective notion of what ‘the’ computation performed by a given system actually is, no computation could actually do the job, so to speak.
But the model is admittedly rather obscure. So perhaps it’s better to look at a more mainstream and at least somewhat intuitive account of how consciousness could be entirely separate from computation: the so-called integrated information theory (IIT), pioneered by Giulio Tononi. There, the ‘magic sauce’ that makes a system conscious is a quantity called integrated information, or Φ, which measures, roughly, the information contained in the whole system that isn’t reducible to the information contained in its parts: the information that would be lost by splitting the system apart.
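To make the ‘whole minus parts’ idea concrete, here is a toy sketch in Python. It does not compute IIT’s actual Φ (which is defined over a system’s cause–effect structure and involves a minimum over all partitions); instead it uses a much cruder stand-in, the mutual information between the two halves of a two-unit system, which is zero exactly when the whole carries no information beyond its parts. All names here (`integration_proxy`, the example distributions) are illustrative, not from the IIT literature.

```python
import math
from itertools import product

def entropy(p):
    """Shannon entropy (in bits) of a distribution given as {outcome: probability}."""
    return -sum(v * math.log2(v) for v in p.values() if v > 0)

def integration_proxy(joint):
    """Crude stand-in for integration: the mutual information I(A;B) between
    the two units of a system with joint distribution over states (a, b).
    This is the information lost by treating the parts as independent --
    not IIT's Phi, which minimizes over partitions of cause-effect structure."""
    pa, pb = {}, {}  # marginal distributions of each unit
    for (a, b), v in joint.items():
        pa[a] = pa.get(a, 0.0) + v
        pb[b] = pb.get(b, 0.0) + v
    # I(A;B) = H(A) + H(B) - H(A,B)
    return entropy(pa) + entropy(pb) - entropy(joint)

# A fully 'separable' system: the two units are independent coin flips.
independent = {(a, b): 0.25 for a, b in product([0, 1], repeat=2)}

# A tightly coupled system: the units are always in the same state.
correlated = {(0, 0): 0.5, (1, 1): 0.5}

print(integration_proxy(independent))  # prints 0.0 -- whole adds nothing to parts
print(integration_proxy(correlated))   # prints 1.0 -- one bit lost by splitting
```

The separable system illustrates the sense in which a conventional computer can be highly capable yet carry little information ‘in the whole’ over and above its parts.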
The intuition behind this is the unified nature of consciousness, which combines information from very different modalities: sight, sound, touch, smell, thought, emotion, and so on. The question of how these get integrated into a single unified conscious experience is known as the ‘combination problem’ in the philosophy of mind, and that’s where IIT takes aim.
Notably, in IIT, whether a system computes is orthogonal to whether it is conscious. Typical modern computers are very ‘separable’, i.e. they have only a small amount of integrated information; thus, whatever program they run will not have much, if any, conscious experience associated with it. A brain, on the other hand, integrates information to a high degree (and, in one piece of empirical evidence for the theory, this integration seems to be precisely what’s lost in unconscious states, such as under anesthesia). So no program run on a typical computer will yield much in the way of consciousness, while even (computationally speaking) simple architectures may rate highly on information integration.
IIT is not without its critics (personally, I don’t think it helps with the problem of intentionality), but it might at least go some way toward warding off the notion that computationalism is almost inevitable, or ‘the only game in town’. Computationalism can be coherently rejected without thereby being forced to admit dualistic notions, ‘souls’, or the like.