#925
Old 06-08-2019, 08:06 PM
SamuelA
Quote:
Originally Posted by wolfpup View Post
A "symbol" is a token -- an abstract unit of information -- that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things -- the semantics -- is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.
https://boards.straightdope.com/sdmb...7&postcount=88
I would love to understand the chain of logic that would lead anyone to read this as "symbols [are] aware of their own values".

We have hitherto been treated to seeing our hero solve most of the major problems of the world, generally with swarms of self-replicating nanobots -- and always with the word "just" inserted in there somewhere to show how marvelously simple it all is, if only everyone would listen to Sammy and do exactly what he says.
Three comments:

a. I agree, I was butchering the nonsense you blather about.

"The relationship to meaningful things -- the semantics -- is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program."

I have no fucking idea what this is supposed to mean. I think you're trying to claim it's an insurmountable obstacle to a computer emulating a human brain at all, in any situation, but I don't know why. If you don't know how a human brain works at a high level, and I don't know how it works at a high level, and all the world's neuroscientists do not yet have enough empirical evidence to know how it works at a high level, then how can you claim it can't be emulated by a Turing machine?

So you just spout nonsense. I apologize for parroting your nonsense badly. You have made this "a computer can't do what a brain does" argument in probably 50+ posts in this thread, plus a bunch in the other threads on the subject, and I freely admit I don't understand your argument, other than that it's obviously nonsense.

And the reason I know that is that I established, many posts ago, a model for the low-level parts of the human brain which is supported by all present evidence, and established that a Turing machine can emulate such a system. Everyone who argues with you on the subject keeps telling you the same thing.
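For what it's worth, the "a Turing machine can emulate such a system" part is easy to sketch. Take a leaky integrate-and-fire neuron, a standard toy model of a neuron's low-level dynamics (my choice of model and parameter values here, purely illustrative, not anything from the earlier posts): it's just a difference equation that any computer can step through in a loop.

```python
# Minimal sketch: a leaky integrate-and-fire (LIF) neuron, a classic
# toy model of a neuron's low-level dynamics. Nothing here is beyond
# a Turing machine: it's a difference equation stepped in a loop.
# All parameter values are illustrative, not fitted to real neurons.

def simulate_lif(input_current, v_rest=-70.0, v_thresh=-55.0,
                 v_reset=-75.0, tau=10.0, r=10.0, dt=1.0):
    """Return (voltage trace in mV, spike times in ms) for a current trace."""
    v = v_rest
    trace, spikes = [], []
    for step, i_in in enumerate(input_current):
        # Euler step of dv/dt = (-(v - v_rest) + R * I) / tau
        v += dt * (-(v - v_rest) + r * i_in) / tau
        if v >= v_thresh:            # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset              # reset membrane after firing
        trace.append(v)
    return trace, spikes

if __name__ == "__main__":
    # 100 ms of constant 2.0 (arbitrary units) input drives repeated spiking.
    trace, spikes = simulate_lif([2.0] * 100)
    print(f"{len(spikes)} spikes at t = {spikes} ms")
```

Scale that loop up to billions of units and realistic connectivity and it gets expensive, but "expensive" is not "uncomputable", which is the whole point of the argument above.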

b. Quote a single post where I talk about self-replicating nanobots as a solution to anything. I frankly don't recall ever suggesting it, even once. This is why I get irked when people bring it up, as nanobots are an example of something that won't work. (Eric Drexler's ideas are not nanobots as you think, nor are they self-replicating in the way you think.)

c. The other reason it pisses me off is that when I talk about self-replicating robots, I never, ever, ever mean "nanobots". I mean fucking machines with "Hitachi" and "Foxconn" stamped on them in vast factories, using more advanced forms of control than present methods (machine learning) so they are a ton smarter and more flexible. As in, our real-world future that most of us here will still be alive to see.

Last edited by SamuelA; 06-08-2019 at 08:11 PM.