01-27-2018, 05:12 PM
SamuelA
Join Date: Feb 2017
Posts: 3,903
Originally Posted by wolfpup
The brain is "all computation" in the trivial sense of electrochemical signaling. No one disputes that the brain is a mechanistic physical device, but that has never been a question in any creditable field of study. In the actually interesting and formal meaning of computation in computer science and cognition, the essence of computation -- the essence of how computers interpret the world -- is that algorithms perform syntactic operations on abstract symbolic representations, and thereby computationally derive the semantics of how they understand the world.

One of the key questions in cognitive science, and specifically in the computational theory of mind (CTM), is the extent to which mental processes are, in fact, computational. There is strong evidence that some are, and also evidence that many are not or that we just don't know how to characterize them that way.
I'm going to give you another shot here, because you're actually saying something interesting, though I don't quite understand why it matters. Instead of just calling me stupid, let's say for the sake of argument that I am stupid.

Suppose I've ripped open the guts of some machine I don't really understand, but I find that the wires come together into little parts I do understand, because all they seem to be doing is summing inputs and emitting pulses. How does what you're saying prevent me from making another copy of that machine by tearing one down and slavishly duplicating every connection?
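The "slavish duplication" intuition can be sketched in a few lines: if the little parts really are just summing-and-pulsing units, then copying every connection weight copies the machine's input-output behavior, whatever semantics anyone ascribes to it. (A toy sketch, not a claim about real brains; the network shape, weights, and thresholds here are all made up.)

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy "machine": layers of units that just sum weighted inputs
# and emit a pulse (1) when the sum crosses a threshold of zero.
weights = [rng.normal(size=(4, 3)), rng.normal(size=(3, 2))]

def run(ws, x):
    for w in ws:
        x = (x @ w > 0.0).astype(float)  # sum inputs, pulse if positive
    return x

# "Tear it down and duplicate every connection": copy the weights.
copy = [w.copy() for w in weights]

x = rng.normal(size=4)
# The duplicate behaves identically on any input.
assert np.array_equal(run(weights, x), run(copy, x))
```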

Another really fascinating question: let's say I build a machine-learning classifier real quick, but one that doesn't start out with tagged data. It just looks at camera images with a LIDAR overlay and starts to group contiguous objects together.

Say there are just 2 objects you ever show it, from different angles and distances.

At first the classifier might think there are hundreds of different objects, but let's say some really clever algorithm converges that back down to just 2 objects seen at different rotations.

So at the end of the process you have a sequence of stages that goes from <input sensors> to [ X X ], where the outputs are [ 0 0 ] (neither present), [ 1 1 ] (both present), [ 1 0 ] (object A present), and [ 0 1 ] (object B present).
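That pipeline can be sketched end to end in miniature. This is a toy stand-in, not a real perception system: the "segmented sensor features" are made-up 2-D points, and the "really clever algorithm" is replaced with plain 2-means clustering, which collapses the many apparent objects down to two learned prototypes and then emits the [ A B ] presence bits.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for segmented camera+LIDAR features: many views of two
# objects, each view a noisy copy of that object's feature vector.
prototype_a, prototype_b = np.array([0.0, 0.0]), np.array([5.0, 5.0])
views = ([prototype_a + rng.normal(scale=0.1, size=2) for _ in range(50)]
         + [prototype_b + rng.normal(scale=0.1, size=2) for _ in range(50)])

# Stand-in for the "clever algorithm": 2-means clustering converges
# the hundreds of apparent objects down to 2 prototypes.
centers = np.array([views[0], views[-1]])  # crude initialization
for _ in range(10):
    labels = [np.argmin([np.linalg.norm(v - c) for c in centers]) for v in views]
    centers = np.array([np.mean([v for v, l in zip(views, labels) if l == k], axis=0)
                        for k in range(2)])

def presence(scene_views, threshold=1.0):
    """Map the views in a scene to the [A-present, B-present] bit pair."""
    out = [0, 0]
    for v in scene_views:
        k = int(np.argmin([np.linalg.norm(v - c) for c in centers]))
        if np.linalg.norm(v - centers[k]) < threshold:  # close enough to count
            out[k] = 1
    return out
```

Nothing in the sketch depends on its being a Python script; each stage (segmentation, clustering, presence readout) could just as well be a separate chip wired to the next.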

I'm really curious how this machine, which we could actually build today, "counts" in your computational theory. Note that we don't have to build it as a Python script; we could program separate computer chips to do each stage of processing and interconnect them physically, making it resemble a miniature version of the real visual cortex.