#932
06-09-2019, 01:22 AM
SamuelA
Quote:
Originally Posted by wolfpup
The meaning of the statement that left you so clueless is important but not at all complicated. It's astounding that it needs to be explained, especially to a self-declared genius like yourself. Most of us understand the distinction between syntax and semantics in common parlance. We understand the difference between the orthography of a sentence and its meaning.

When you digitize an image, say, the result is a series of numbers, ultimately 1s and 0s. Those are symbols. They have no intrinsic relationship to anything visual, and are not intrinsically distinguishable from any other 1s and 0s in the computer's memory. Semantics in this aspect of computational theory is the property attributed to those symbols by an agent or process that makes them the useful building blocks of an image, such as interpreting them as a matrix of pixel values, or in a different context perhaps as a string of sampled audio values, or something else that has real-world meaning, and instantiating the appropriate semantics to produce useful results.

In computers, and in the brain, at least for many cognitive processes, the semantics comes from the way we process symbolic representations, and crucially is not present in the symbols themselves. In cog-sci-speak, according to this theory, which is central to CTM, the pertinent memories are said to be representational (symbolic), not depictive.
OK, so in our crude "neural nets", we start with input images whose bits encode pixel intensities for a color channel; after a layer processes them, each value instead becomes the intensity of whatever feature that layer was looking for.

And later on, those features may be processed into abstract "state" representations; for example, a neural network trained to play a video game may use later layers to represent game state.
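To make the pixels-to-features point concrete, here's a toy sketch (plain NumPy; the image and filter are made up for illustration). The input numbers mean "pixel intensity"; after one filter pass, the same kind of numbers mean "how strongly a vertical edge appears here" instead:

```python
import numpy as np

# A hypothetical 4x4 single-channel "image": values are pixel intensities.
image = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
], dtype=float)

# A 2x2 vertical-edge filter: after applying it, each output number
# measures edge strength at that location, not pixel brightness.
kernel = np.array([
    [-1, 1],
    [-1, 1],
], dtype=float)

def convolve2d(img, k):
    """Valid-mode 2D cross-correlation: one 'layer' applied to the image."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

feature_map = convolve2d(image, kernel)
print(feature_map)  # strong response only at the dark-to-bright boundary
```

The output peaks exactly where the dark half meets the bright half, which is the sense in which the layer's output "means" something different from its input.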

I am not sure whether this transformation of inputs is what you meant, since ultimately the information from the input arrived along a specific programmed connection path. Sensor fusion, though, would involve multiple inputs mapping into a common state space. The "labels" haven't been lost, however: if my computer stores the bits for an image, it knows they are an image because of where they are located in memory. If the brain stores a map of the environment, it knows it's a map by the physical region where it's stored.
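As a toy illustration of that labeling point (every name here is made up): the same raw bytes can be read as pixels or as audio samples, and the "meaning" comes from which interpreter the storage location routes them to, not from the bytes themselves:

```python
# The same raw bytes carry no intrinsic meaning; the label attached to
# where they're stored supplies the semantics.
raw = bytes([0, 64, 128, 255])

# Hypothetical "memory regions", each with its own interpretation rule.
regions = {
    "image": lambda b: [v for v in b],                 # pixel intensities 0..255
    "audio": lambda b: [v / 255 * 2 - 1 for v in b],   # signed samples in [-1, 1]
}

def interpret(region, data):
    """Read the bytes through whichever semantics the region assigns."""
    return regions[region](data)

print(interpret("image", raw))  # pixel values
print(interpret("audio", raw))  # audio samples, same bits
```

Identical input, two different "meanings", purely because of the label under which the data is stored.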

Assuming this is what you meant, how is it relevant to a discussion of whether it's possible at all to emulate a brain, or to "upload" one while it's still alive? My position has always been that emulation appears possible given currently available evidence, and that uploading might be possible, but whether it is depends primarily on whether a machine interface could ever be constructed that wouldn't be rejected by the biology.

Theorem-wise: because the brain is a distributed network, information can traverse from one part of the network to another. Therefore, if you could artificially extend the network, you could in principle capture information from it in a digital system. That does not necessarily mean you could upload someone's complete memories and personality, but at a minimum a sniffer that copies all visual input or motor output is theoretically possible. And this part isn't just theory; it has been demonstrated in primates in thousands of separate experiments, albeit with obvious limits due to the crudeness of present-day equipment.
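The sniffer idea can be sketched in a few lines (a toy NumPy "network"; the weights and region names are random and hypothetical). Extend the network with a read-only tap between two regions, and the traffic gets copied into a digital buffer without disturbing the downstream computation:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-"region" network: signals flow from region A to region B.
W1 = rng.standard_normal((4, 8))   # region A -> region B connections
W2 = rng.standard_normal((8, 2))   # region B -> "motor output" connections

captured = []                      # the digital system trapping the traffic

def sniff(signal):
    """Artificial extension of the network: copy traffic, don't alter it."""
    captured.append(signal.copy())
    return signal

def forward(x):
    hidden = np.tanh(x @ W1)       # activity leaving region A
    hidden = sniff(hidden)         # tap point: read-only copy in transit
    return hidden @ W2             # downstream processing is unaffected

motor_out = forward(rng.standard_normal(4))
```

The tap only copies what actually traverses that link, which is the weaker claim above: you get the visual input or motor traffic crossing the extension, not the whole network's stored state.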

Last edited by SamuelA; 06-09-2019 at 01:25 AM.