Closed Thread
 
Thread Tools Display Modes
  #301  
Old 12-28-2017, 05:48 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by wolfpup View Post
Whether cognitive processes are computational is a very controversial area,
No they aren't. Can directly cite some MIT papers on this subject but I'm going to ask you point blank to see if you're actually just bullshitting yourself.

What signals do axons carry from one place to another? Do they vary in amplitude? Do they vary in time? Is there such a thing as "infinite resolution" in a noisy analog system or not?

If a system has finite resolution, does that mean or not mean a digital equivalent exists?

If you have a digital system that produces the same truth table as another digital system, can you say those systems are functionally the same?

If a digital system has higher discrete resolution than an analog system, and produces the same outputs as the analog system to within the effective resolution allowed by noise, can you say those systems are functionally the same?

You should be able to answer these questions. Once you answer them all, you will either (a) concede that I'm right, that the brain is a computational system which can be emulated to the same effective resolution as the real system, or (b) have some novel insight that you can share with me as to why (a) isn't true.

It actually turns out that this is a well established area of science. Those experts...some of whom have high credentials...who claim otherwise are just wrong, in the same way all the scientists arguing against relativity were just wrong. You'll see. (or refuse to do the work like Tripler)

Last edited by SamuelA; 12-28-2017 at 05:51 PM.
  #302  
Old 12-28-2017, 05:50 PM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,035
Could you provide a few links to posts where I am "following you around"? (Protip: posts where I am simply in the same thread as you and replying to someone else don't count.)
  #303  
Old 12-28-2017, 05:52 PM
Czarcasm's Avatar
Czarcasm is offline
Champion Chili Chef
Charter Member
 
Join Date: Apr 1999
Location: Portland, OR
Posts: 63,162
Quote:
Originally Posted by SamuelA View Post
No they aren't. Can directly cite some MIT papers on this subject but I'm going to ask you point blank to see if you're actually just bullshitting yourself.

What signals do axons carry from one place to another? Do they vary in amplitude? Do they vary in time? Is there such a thing as "infinite resolution" in a noisy analog system or not?

If a system has finite resolution, does that mean or not mean a digital equivalent exists?

If you have a digital system that produces the same truth table as another digital system, can you say those systems are functionally the same?

If a digital system has higher discrete resolution than an analog system, and produces the same outputs as the analog system to within the effective resolution allowed by noise, can you say those systems are functionally the same?

You should be able to answer these questions. Once you answer them all, you will either (a) concede that I'm right, that the brain is a computational system which can be emulated to the same effective resolution as the real system, or (b) have some novel insight that you can share with me as to why (a) isn't true.

It actually turns out that this is a well established area of science. Those experts...some of whom have high credentials...who claim otherwise are just wrong, in the same way all the scientists arguing against relativity were just wrong.
He actually thinks he is here to assign homework?
  #304  
Old 12-28-2017, 05:53 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Czarcasm View Post
He actually thinks he is here to assign homework?
It's a persuasive argument, actually. Each question I asked has a single accepted answer by the community. Shouldn't take more than a few minutes to figure out the answer if wolfpup even passed signals and systems or the equivalent course.
  #305  
Old 12-28-2017, 06:14 PM
Morgenstern is offline
Guest
 
Join Date: Jun 2007
Location: Southern California
Posts: 11,866
Sam. This isn't working. Trust me, it's not.
  #306  
Old 12-28-2017, 06:22 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,231
Quote:
Originally Posted by SamuelA View Post
No they aren't. Can directly cite some MIT papers on this subject but I'm going to ask you point blank to see if you're actually just bullshitting yourself.
https://www.mitpressjournals.org/doi...n.1993.5.3.263

Stephen Kosslyn here argues that mental image processing involves the visual cortex, which is at odds with the syntactic-representational model. Kosslyn is one of a number of opponents of CTM. Many others are dipshit philosophers, but Kosslyn does real empirical work. Many of his conclusions are wrong, but it pains me greatly to have you on my side for the wrong reasons. Please side with Kosslyn and others like him and discredit them with your supportive idiotic bloviations. Please don't be on my side.
Quote:
Originally Posted by SamuelA View Post
If you have a digital system that produces the same truth table as another digital system, can you say those systems are functionally the same?
Seems like circular reasoning since you're making the unwarranted assumption that brain functions are digital. Regardless, a functionalist view is a philosophical precept that tells us nothing at all about how the brain actually works. Perhaps you believe that a Boeing 747 is functionally a sparrow, but it isn't. FTR, I believe the brain can eventually be fully emulated with artificial digital systems, but that's an opinion and not a fact. And if/when we do, we still won't fully understand how the brain works, although you apparently already do, in keeping with knowing everything, Dunning-Kruger style.
Quote:
Originally Posted by SamuelA View Post
It's a persuasive argument, actually. Each question I asked has a single accepted answer by the community. Shouldn't take more than a few minutes to figure out the answer if wolfpup even passed signals and systems or the equivalent course.
Of course! Signals and Systems 101 and all of cognitive science is solved! How very SamuelA! What a total fucking dipshit moron!

Last edited by wolfpup; 12-28-2017 at 06:24 PM.
  #307  
Old 12-28-2017, 06:39 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by wolfpup View Post
https://www.mitpressjournals.org/doi...n.1993.5.3.263

Stephen Kosslyn here argues that mental image processing involves the visual cortex, which is at odds with the syntactic-representational model. Kosslyn is one of a number of opponents of CTM. Many others are dipshit philosophers, but Kosslyn does real empirical work. Many of his conclusions are wrong, but it pains me greatly to have you on my side for the wrong reasons. Please side with Kosslyn and others like him and discredit them with your supportive idiotic bloviations. Please don't be on my side.

Seems like circular reasoning since you're making the unwarranted assumption that brain functions are digital.


Of course! Signals and Systems 101 and all of cognitive science is solved! How very SamuelA! What a total fucking dipshit moron!
I don't see answers to these questions. I see a false claim that all of cognitive science is solved, but not answers that prove you know enough to have a meaningful dialogue. I am not claiming to have solved it; if you had bothered to look up the answers, you'd realize there's not actually much wiggle room left for the idea that the brain is not a computational system.

An impulse with pulse edges and timing in a domain with noise can be discretized digitally with a numerical time resolution that need only be better than the SNR (signal-to-noise ratio) of the brain.

Perform this simple thought experiment. What if you could cut every axon and replace it with a system that digitizes the signal at the first node of ranvier and reinjects it, after a delay that is a discrete number of ticks of a digital clock, at the last node of ranvier before the next synapse?

I am telling you there is firm mathematical proof, to near-absolute certainty, that this experiment would produce the same outcome, so long as the clock resolution of this digital system exceeds the SNR of the system it emulates.

Similarly, if you think about it, each time a synapse receives an impulse, a certain amount of membrane charge is added or subtracted. This is an analog voltage, but it has finite resolution. So you could in fact secretly replace each synapse (if you could do so; this is a thought experiment) with a digital counter, and that counter's numerical resolution need be no better than the SNR of that analog voltage.

Again, we're damn certain this is going to work.

Now, yes, there's other stuff neuroscience keeps finding. Other cells seem to be able to clean up neurotransmitters and may be part of computation. There are concentration gradients of various hormones and modulatory molecules.

There's long term changes to each synapse.

But you can trivially see, if you actually break the problem down, that you could in fact build a system that emulates a brain and responds to short-term impulses in exactly the same way as the original brain. It will work. Hormones, long-term changes, and long-distance concentration gradients are all slow; you can in fact build a system that gives the same responses in the short term.

But again, just call me an idiot, whatever. I am well aware it's more complex, but I also know that all analog systems can be replaced with a digital equivalent; you just have to discretize above the SNR. This is a very well-known principle in some fields...guess not yours. There are theories that maybe the brain is storing data in fragile qubits or something, but these theories are probably wrong.

Assuming no quantum magic, the evidence is actually conclusive that you can emulate any and all systems the brain uses with digital equivalents. You can think of those digital equivalents as a very large truth table (since at a certain level they are), thus the brain is a computational system.
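The quantization claim in this post (discretize finer than the noise floor and the digital copy becomes indistinguishable from the analog original) can be sketched in a few lines. This is a toy illustration only; the 0-3 V range, 8-bit step, and noise level are assumptions for the sketch, not measurements of any neuron:

```python
import random

random.seed(0)

NOISE_STD = 0.05   # assumed analog noise, volts RMS
LSB = 3.0 / 255    # 8-bit step over an assumed 0-3 V range, ~0.0118 V

def quantize(v, step=LSB):
    """Round an analog value to the nearest digital level."""
    return round(v / step) * step

# Observe an "analog" voltage and its digitized copy through the same noise.
max_diff = 0.0
for _ in range(10_000):
    true_v = random.uniform(0.0, 3.0)
    noise = random.gauss(0.0, NOISE_STD)
    analog_obs = true_v + noise
    digital_obs = quantize(true_v) + noise
    max_diff = max(max_diff, abs(analog_obs - digital_obs))

# The two observation streams never differ by more than half a step,
# which is several times smaller than the assumed noise floor.
assert max_diff <= LSB / 2 + 1e-9
print(f"max analog/digital difference: {max_diff:.4f} V (noise RMS: {NOISE_STD} V)")
```

Half a step here is about 0.006 V against a 0.05 V noise floor, which is the sense in which the digitized copy hides below the noise; whether that licenses the leap to "the brain is computational" is what the posters go on to dispute.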

Last edited by SamuelA; 12-28-2017 at 06:42 PM.
  #308  
Old 12-28-2017, 06:42 PM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,082
Quote:
Originally Posted by SamuelA View Post
ignore list is now 3. The straw here is that you are being lazy. Instead of claiming my understanding is superficial, pick an important point and justify why your understanding is more in depth.
Woot! I'm ignored.

On the off chance you still don't know how that works, I'd like to point out that this is the Pit. You seem very confused about how things work here. If you want to have reasoned discussion here, please go to GD. Finally, it's your dumbass idea, you're the one that has to prove it. If you'd like to try again in GD, let's have a go, you eugenics-minded freak.
  #309  
Old 12-28-2017, 06:45 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Sunny Daze View Post
Woot! I'm ignored.

On the off chance you still don't know how that works, I'd like to point out that this is the Pit. You seem very confused about how things work here. If you want to have reasoned discussion here, please go to GD. Finally, it's your dumbass idea, you're the one that has to prove it. If you'd like to try again in GD, let's have a go, you eugenics-minded freak.
Eugenics? Where did that come from? I don't remember supporting eugenics anywhere but maybe I forgot. Up to 1700 posts here.

I mean eugenics is certainly correct as an idea. We practice it all the time with animals and selective breeding. It obviously works. It's just unethical to do to people and also if you were going to do it, you'd need to go by inner traits, not things that don't depend on genes like a person's religion. We could totally do eugenics today using DNA tests, sterilizing those who aren't good, and it would work, though awfully slowly...

And yes, if I didn't mention Eugenics before this post, then yeah, you're just pitting yourself.

Last edited by SamuelA; 12-28-2017 at 06:48 PM.
  #310  
Old 12-28-2017, 06:47 PM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,035
Quote:
Originally Posted by Sunny Daze View Post
On the off chance you still don't know how that works, I'd like to point out that this is the Pit. You seem very confused about how things work here.
Shhhhhh! You're pitting yourself!
  #311  
Old 12-28-2017, 06:48 PM
running coach's Avatar
running coach is online now
Arms of Steel, Leg of Jello
Charter Member
 
Join Date: Nov 2000
Location: Riding my handcycle
Posts: 37,462
Quote:
Originally Posted by Sunny Daze View Post
Woot! I'm ignored.

On the off chance you still don't know how that works, I'd like to point out that this is the Pit. You seem very confused about how things work here. If you want to have reasoned discussion here, please go to GD. Finally, it's your dumbass idea, you're the one that has to prove it. If you'd like to try again in GD, let's have a go, you eugenics-minded freak.
Actually, his "ignore list" requires him to look at a post and check his list for matches.
He has no idea how to use the board-provided function.
  #312  
Old 12-28-2017, 06:50 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by running coach View Post
Actually, his "ignore list" requires him to look at a post and check his list for matches.
He has no idea how to use the board-provided function.
I think we already had a lecture on evaluation of hypotheses.

a. Hypothesis 1 : I found the button for ignore and used it, I'm just manually choosing to read your posts

b. Hypothesis 2 : the "ignore list" is written down somewhere.

Now, based on all the posts I have made, you should be intelligent enough to see which hypothesis has more evidence. I mention finding the button in one of the posts above.

This is why you're on the ignore list, because you're too stupid, lol. You just pitted yourself...again.

Guess it doesn't take much brainpower to be a running coach, eh?

Last edited by SamuelA; 12-28-2017 at 06:52 PM.
  #313  
Old 12-28-2017, 06:53 PM
running coach's Avatar
running coach is online now
Arms of Steel, Leg of Jello
Charter Member
 
Join Date: Nov 2000
Location: Riding my handcycle
Posts: 37,462
Quote:
Originally Posted by SamuelA View Post
I think we already had a lecture on evaluation of hypotheses.

a. Hypothesis 1 : I found the button for ignore and used it, I'm just manually choosing to read your posts

b. Hypothesis 2 : the "ignore list" is written down somewhere.

Now, based on all the posts I have made, you should be intelligent enough to see which hypothesis has more evidence.

This is why you're on the ignore list, because you're too stupid, lol. You just pitted yourself...again.

Guess it doesn't take much brainpower to be a running coach, eh?
I go with "b" since the whole point of an ignore list is to not read posts that upset you.
At least I don't pretend to be something I'm not.

Last edited by running coach; 12-28-2017 at 06:53 PM.
  #314  
Old 12-28-2017, 06:57 PM
Morgenstern is offline
Guest
 
Join Date: Jun 2007
Location: Southern California
Posts: 11,866
Quote:
Originally Posted by SamuelA View Post
I think we already had a lecture on evaluation of hypotheses.

...?
Sammy. You're ignoring wrong.
  #315  
Old 12-28-2017, 07:07 PM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,082
What? I'm on his ignore list, except I'm not really. If he chooses to read one of my posts, I'm pitting myself? Wowser.

I'm guessing we may have a lack of common understanding on what 'ignore' means.
  #316  
Old 12-28-2017, 07:24 PM
running coach's Avatar
running coach is online now
Arms of Steel, Leg of Jello
Charter Member
 
Join Date: Nov 2000
Location: Riding my handcycle
Posts: 37,462
Quote:
Originally Posted by Sunny Daze View Post
What? I'm on his ignore list, except I'm not really. If he chooses to read one of my posts, I'm pitting myself? Wowser.

I'm guessing we may have a lack of common understanding on what 'ignore' means.
I posted the dictionary definition in post 189. Apparently, he also is more of an expert in English than the dictionary.
  #317  
Old 12-28-2017, 08:12 PM
raventhief's Avatar
raventhief is offline
Member
 
Join Date: Apr 2010
Posts: 5,083
There was another poster who used a personal ignore list in the same way, wasn't there?
  #318  
Old 12-28-2017, 08:32 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by raventhief View Post
There was another poster who used a personal ignore list in the same way, wasn't there?
Dude. They really are on the list. I'm just choosing to view their posts in this thread as I still hope they will say something interesting.
  #319  
Old 12-28-2017, 08:55 PM
Wakinyan's Avatar
Wakinyan is offline
Guest
 
Join Date: Aug 2005
Location: Scandinavia
Posts: 1,705
Why do you keep an ignore list if you are not ignoring the posters on the list?
  #320  
Old 12-28-2017, 09:02 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by Wakinyan View Post
Why do you keep an ignore list if you are not ignoring the posters on the list?
Read the post one above yours.
  #321  
Old 12-28-2017, 09:04 PM
running coach's Avatar
running coach is online now
Arms of Steel, Leg of Jello
Charter Member
 
Join Date: Nov 2000
Location: Riding my handcycle
Posts: 37,462
Quote:
Originally Posted by SamuelA View Post
Dude. They really are on the list. I'm just choosing to view their posts in this thread as I still hope they will say something interesting.
Prove it's not Photoshopped. There's a pixel or two that are suspect.
  #322  
Old 12-28-2017, 09:14 PM
Wakinyan's Avatar
Wakinyan is offline
Guest
 
Join Date: Aug 2005
Location: Scandinavia
Posts: 1,705
Quote:
Originally Posted by SamuelA View Post
Read the post one above yours.
But I don't get it (and I can't see the image you link to). While participating in this Pit thread, you create an "ignore list" of posters you will be ignoring in other threads?

Sounds strange. If you find, for instance, Darren insulting in this Pit thread, you write down his name so you remember to ignore him if he posts in, say, your GQ Boat thread?
  #323  
Old 12-28-2017, 10:07 PM
Kamino Neko's Avatar
Kamino Neko is offline
Guest
 
Join Date: Apr 1999
Location: Alternate 230
Posts: 15,474
Quote:
Originally Posted by running coach View Post
I posted the dictionary definition in post 189. Apparently, he also is more of an expert in English than the dictionary.
Internet blowhards always are. Don't you know cited etymologies and usage are unreliable, and uncited personal opinion is always right?
  #324  
Old 12-28-2017, 11:54 PM
outlierrn is offline
Member
 
Join Date: Nov 2005
Location: republic of california
Posts: 5,744
Quote:
Originally Posted by SamuelA View Post
Captive how? All you have to do is leave this thread, leaving me with the last word. You're pitting yourself. You obviously are so argumentative that you're posting here to justify your own idiocy. You must have these same qualities.
Having the last word isn't the same as being right.
__________________
Just another outlying data point on the bell curve of life
  #325  
Old 12-29-2017, 12:07 AM
Czarcasm's Avatar
Czarcasm is offline
Champion Chili Chef
Charter Member
 
Join Date: Apr 1999
Location: Portland, OR
Posts: 63,162
Quote:
Originally Posted by outlierrn View Post
Having the last word isn't the same as being right.
Sometimes it just means you are using a public forum to convince yourself that you are right, and that can look pretty pathetic.
  #326  
Old 12-29-2017, 12:09 AM
running coach's Avatar
running coach is online now
Arms of Steel, Leg of Jello
Charter Member
 
Join Date: Nov 2000
Location: Riding my handcycle
Posts: 37,462
Quote:
Originally Posted by outlierrn View Post
Having the last word isn't the same as being right.
Ah, but the Internet Scientist Warrior can tell himself that his opponents were struck dumb by his brilliance.
However, we all know who was struck by dumb.
  #327  
Old 12-29-2017, 12:12 AM
Claude Remains is offline
Guest
 
Join Date: Oct 2004
Posts: 1,369
I'm thinking SamuelA may have been abused and/or molested by teachers. Has he mentioned being homeschooled?
  #328  
Old 12-29-2017, 04:53 AM
Ramira's Avatar
Ramira is offline
Member
 
Join Date: Jan 2003
Posts: 3,744
Quote:
Originally Posted by Czarcasm View Post
Sometimes it just means you are using a public forum to convince yourself that you are right, and that can look pretty pathetic.
Does look pretty pathetic.
  #329  
Old 12-30-2017, 01:05 AM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,035
This blog post is suitable here.
  #330  
Old 12-30-2017, 01:46 AM
Sunny Daze's Avatar
Sunny Daze is offline
Member
 
Join Date: Feb 2014
Location: Bay Area Urban Sprawl
Posts: 13,082
Singularitarians. I like it.
  #331  
Old 12-30-2017, 02:04 AM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,231
Quote:
Originally Posted by Darren Garrison View Post
This blog post is suitable here.
I think it's SamuelA's biography.
Quote:
Originally Posted by SamuelA View Post
I don't see answers to these questions. I see a false claim that all of cognitive science is solved, but not answers that prove you know enough to have a meaningful dialogue. I am not claiming to have solved it; if you had bothered to look up the answers, you'd realize there's not actually much wiggle room left for the idea that the brain is not a computational system.

An impulse with pulse edges and timing in a domain with noise can be discretized digitally with a numerical time resolution that need only be ... {blah blah bloviate bloviate as the sphincter opens to full emission capacity}
I was going to leave this alone but since this tribute to your genius has been revived I feel I must add a few comments.

SamuelA, I am thoroughly sick and tired of your fucking bullshit. You are truly a fucking moron. You asked me for "an MIT paper" contradicting the computational theory of mind. I gave you one. I don't know why it had to be "an MIT paper" or what you meant by that -- Kosslyn is actually at Harvard, but that particular journal is published by MIT Press, so I hope it meets your stellar criteria.

The problem here, SamuelA, is that you didn't fucking understand it, so you just ignored it. And I can't help that, nor the fact that you apparently don't have a clue about what is significant about it (I don't agree with it, FTR, but it's an example of the controversy that exists). We already know that you don't understand most of the stuff you pontificate about, but it's astounding that someone who claims to have majored in CS doesn't understand what a computational paradigm is. As Alan Turing might have told you -- or indeed, Charles Babbage many years before that -- it has nothing whatsoever to do with signaling or the propagation of electrical pulses that you've been bloviating about. The broad questions that are being asked are along the lines of: is the brain a finite-state automaton? Can it be emulated by a system that is Turing complete? In pragmatic terms, the questions in cognitive science center around whether cognitive processes consist of syntactic operations on symbolic representations in a manner that can be emulated by a computational system that is Turing complete, or whether perceptual subsystems like the visual cortex are involved, as Kosslyn claims.

The evidence is contradictory, hence the debate. On the pro-CTM side we find that mental image processing is significantly different from perceptual image processing in being influenced by pre-existing knowledge and beliefs, and therefore operates at a higher level of cognitive abstraction. In that paper, Kosslyn tried to show the opposite.

The best summary of it all is perhaps that of the late Jerry Fodor, a pioneer of cognitive science and a strong proponent of CTM despite his acknowledgement of its limitations. Fodor passed away just a few weeks ago, a great loss to everyone who knew him and to the scientific community. He had this to say in the introduction to a book he published seventeen years ago:
There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea -- that intentional processes are syntactic operations defined on mental representations -- is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.

But it hadn't occurred to me that anyone could suppose that it's a very large part of the truth; still less that it's within miles of being the whole story about how the mind works ... I certainly don't suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology ...
-- Jerry Fodor, The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, MIT Press, July 2000
But hey, SamuelA, look at the bright side. At least your fucking stupid digression about electricity and signaling let you work in the phrase "node of Ranvier", so there's that. Those are mighty big words for someone who thinks a "tenant" is a principle or doctrine in science or philosophy. Trust me, a "tenant" is someone who rents your apartment and pays you rent. Too bad you fucked up here yet again: since it's named after the French histologist Louis-Antoine Ranvier, the word "Ranvier" in that phrase is by convention capitalized as a proper name. Seems you just can't win for losing. To avoid this sort of embarrassment in the future, maybe you should stick to using small words and avoid terminology that you're unfamiliar with.
  #332  
Old 12-30-2017, 02:16 AM
Darren Garrison's Avatar
Darren Garrison is offline
Guest
 
Join Date: Oct 2016
Posts: 12,035
Quote:
Originally Posted by Sunny Daze View Post
Singularitarians. I like it.
Then you'll like this essay.
  #333  
Old 12-30-2017, 05:47 AM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by wolfpup View Post
I think it's SamuelA's biography.


I was going to leave this alone but since this tribute to your genius has been revived I feel I must add a few comments.

SamuelA, I am thoroughly sick and tired of your fucking bullshit. You are truly a fucking moron. You asked me for "an MIT paper" contradicting the computational theory of mind. I gave you one. I don't know why it had to be "an MIT paper" or what you meant by that -- Kosslyn is actually at Harvard, but that particular journal is published by MIT Press, so I hope it meets your stellar criteria.

The problem here, SamuelA, is that you didn't fucking understand it, so you just ignored it. And I can't help that, nor the fact that you apparently don't have a clue about what is significant about it (I don't agree with it, FTR, but it's an example of the controversy that exists). We already know that you don't understand most of the stuff you pontificate about, but it's astounding that someone who claims to have majored in CS doesn't understand what a computational paradigm is. As Alan Turing might have told you -- or indeed, Charles Babbage many years before that -- it has nothing whatsoever to do with signaling or the propagation of electrical pulses that you've been bloviating about. The broad questions that are being asked are along the lines of: is the brain a finite-state automaton? Can it be emulated by a system that is Turing complete? In pragmatic terms, the questions in cognitive science center around whether cognitive processes consist of syntactic operations on symbolic representations in a manner that can be emulated by a computational system that is Turing complete, or whether perceptual subsystems like the visual cortex are involved, as Kosslyn claims.

The evidence is contradictory, hence the debate. On the pro-CTM side we find that mental image processing is significantly different from perceptual image processing in being influenced by pre-existing knowledge and beliefs, and therefore operates at a higher level of cognitive abstraction. In that paper, Kosslyn tried to show the opposite.

The best summary of it all is perhaps that of the late Jerry Fodor, a pioneer of cognitive science and a strong proponent of CTM despite his acknowledgement of its limitations. Fodor passed away just a few weeks ago, a great loss to everyone who knew him and to the scientific community. He had this to say in the introduction to a book he published seventeen years ago:
There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea -- that intentional processes are syntactic operations defined on mental representations -- is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.

But it hadn't occurred to me that anyone could suppose that it's a very large part of the truth; still less that it's within miles of being the whole story about how the mind works ... I certainly don't suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology ...
-- Jerry Fodor, The Mind Doesn't Work That Way: The Scope and Limits of Computational Psychology, MIT Press, July 2000
But hey, SamuelA, look at the bright side. At least your fucking stupid digression about electricity and signaling let you work in the phrase "node of Ranvier", so there's that. Those are mighty big words for someone who thinks a "tenant" is a principle or doctrine in science or philosophy. Trust me, a "tenant" is someone who rents your apartment and pays you rent. Too bad you fucked up here yet again: since it's named after the French histologist Louis-Antoine Ranvier, the word "Ranvier" in that phrase is by convention capitalized as a proper name. Seems you just can't win for losing. To avoid this sort of embarrassment in the future, maybe you should stick to using small words and avoid terminology that you're unfamiliar with.
To someone familiar with the field, you just revealed you're the moron here. The fact that you claim that my correct analysis of discretizing signals - something I do real stuff with daily; I don't write blogs - is "{blah blah bloviate bloviate as the sphincter opens to full emission capacity}" means you simply lack the background to parse what I wrote.

You obviously are not qualified to comment on the brain at all.

If you were, you would realize that if a system can be copied by truth table, it's Turing complete. Period. And if another system implements the same table, it cannot be distinguished from the first system in the real world if you don't know which system was which.
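The "same truth table" claim can at least be illustrated for the trivial case; the XOR example below is chosen arbitrarily and neither implementation comes from the thread:

```python
from itertools import product

def xor_gates(a, b):
    """One implementation: composed from AND/OR/NOT gates."""
    return int((a and not b) or (not a and b))

XOR_TABLE = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

def xor_lookup(a, b):
    """A second implementation: a bare lookup table, no gates at all."""
    return XOR_TABLE[(a, b)]

# Exhaustive check over every input: the two systems produce identical
# outputs, so no external test can tell their internals apart.
for a, b in product([0, 1], repeat=2):
    assert xor_gates(a, b) == xor_lookup(a, b)
print("identical truth tables")
```

Of course, this demonstrates input/output equivalence only for a finite table; scaling the argument up to a brain is exactly the step the two posters contest.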

And the evidence is absolutely overwhelming that the signals the brain uses are signals that both the signaling itself and the processing to produce equivalent signals can be emulated by a digital approximation. This has been done in the real world in numerous experiments, including replacement of regions of rat brains with chips.

Since you don't seem to get it, let me say that what I am focusing on is a synapse-by-synapse, axon-by-axon copying of a brain - any brain - sentient or not. If you can copy each individual part and build a low-level emulation, all the high-level processing must work, in the same way you can't tell whether a person knows Chinese or just has a really great Chinese room/symbol table for all their responses.

It may be philosophically disturbing and it may be difficult to explain the algorithms in terms of current theories, but that's kind of irrelevant to whether one system can be shown to be functionally equivalent to another.

I also smirk at your obsession with "tenant" vs. tenet. Somehow I doubt you have ever taken undergrad, much less grad school, neuroscience, and you obviously skipped out on signals and systems. (By the way, that's a senior-level course they also put in graduate catalogs; it isn't freshman year.) Someone who was smart enough to do more than parrot philosophers could focus on the meat of the argument instead of just claiming it's incorrect by asspull.

Last edited by SamuelA; 12-30-2017 at 05:51 AM.
  #334  
Old 12-30-2017, 06:20 AM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
For the rest of the audience, let me explain in simpler detail what I mean.

Let's suppose you have an analog computing circuit that adds voltage 1 and voltage 2 to produce the output.

Common sense would say that if it's an analog circuit, then if voltage 1 is 1.000000000000001 volts and voltage 2 is 1.000000001 volts, the output would be 2.000000001000001 volts.

And in fact this system would be infinitely precise. Any teensy change in the inputs will lead to the same teensy change present on the outputs.

It would seem you couldn't replace this circuit with a digital system. A digital system uses discrete values. Let's say that for technical reasons, voltages 1 and 2 range from 0 to 3 volts, and you use an 8-bit digital adder as well as 8-bit ADCs and DACs. This means you have discretized the signal into 256 levels, so the digital system is only accurate to a step of 3/255 ≈ 0.0118 volts.

When you add noise into the mix, though, things get interesting. Suppose the adding circuit itself picks up 10% random voltage noise, peak to peak, because nearby circuits (it's packed very, very tightly) inadvertently induce voltages into it. Then the analog circuit is only accurate to ±0.15 volts, and the digital equivalent is more than ten times better.
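
A hedged numerical sketch of that comparison, using exactly the values assumed above (3 V range, 8 bits, 0.3 V peak-to-peak noise):

```python
import random

V_MAX = 3.0            # assumed input range, volts
STEP = V_MAX / 255     # 8-bit step size, ~0.0118 V

def adc(v):
    # Quantize a voltage to the nearest 8-bit code, returned as a voltage.
    v = max(0.0, min(V_MAX, v))
    return round(v / STEP) * STEP

def digital_add(v1, v2):
    # Worst-case quantization error: half a step per input, one step total.
    return adc(v1) + adc(v2)

def noisy_analog_add(v1, v2, noise_pp=0.3):
    # Analog adder with 10% (0.3 V) peak-to-peak induced noise.
    return v1 + v2 + random.uniform(-noise_pp / 2, noise_pp / 2)
```

Worst case, the digital version is off by about one step (~0.012 V) while the analog version can be off by 0.15 V: the discrete system is the more faithful adder here despite being discrete.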

The brain uses both analog voltages and analog timing pulses. Both are, as it turns out in modern experiments, horrifically noisy.

Let's suppose you wanted to do a lot less than understand consciousness, visual processing, or even what a given functional brain region was doing. All you wanted to do was copy the function of a single synapse. So you build a very teensy computer chip, you paint the electrodes with growth factors, and you have, for the sake of argument, an electrically equivalent connection to the input and output axons of a single synapse. You can observe the inputs and outputs, and once you are confident in your model, remove the synapse and replace it.

Say it's a simple one. There are 10 input signals and 1 output. All I/O is all-or-nothing (1 or 0), but events happen at exact times. Analog timings, actually...

So again, if you use a digital system, it has a discrete clock. It might run at 1 MHz, meaning you cannot subdivide time any more finely than 1 microsecond. But for the exact same argument as above, due to noise, you only need to do somewhere between 2 and 10 times better than the analog system to have a digital replacement.
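
The same arithmetic, sketched for timing. The 1 MHz clock is from the argument above; the ±5 µs jitter figure is purely an assumption for illustration, not a measured biological number:

```python
import random

TICK = 1e-6    # 1 MHz clock -> 1 microsecond timing resolution

def quantize_spike_time(t):
    # Snap an event to the nearest clock tick: error at most 0.5 us.
    return round(t / TICK) * TICK

def jittered_spike_time(t, jitter=5e-6):
    # Analog spike time with an assumed +/-5 us of biological jitter.
    return t + random.uniform(-jitter, jitter)
```

The clock's worst-case 0.5 µs error sits an order of magnitude below the assumed jitter, which is the sense in which the digital system only has to be "a few times better" than the analog one it replaces.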

Similarly, the brain possibly does some really tricky stuff at synapses. But whatever that tricky stuff is, it's heavily contaminated by noise, so in reality you again don't need to do all that well. Newer research indicates you might need a sequence predictor in your model, for example, but it need not be a particularly high-resolution one.

So if you can replace 1 synapse perfectly in theory - though it obviously isn't physically possible to do with a living brain, because biology is too fragile and unreliable - you could in theory replace 10% of them. Or 50%. Or 100%. You would also have to duplicate the rules that cause new synapses to form, duplicate the update rules, and duplicate the other analog signals the brain uses. It would by no means be an easy task.

However, this argument is 'standing on the shoulders' of many giants who have perfected their signal processing theories over decades. It's bulletproof. There are no circumstances under which this hypothetical brain copying would not function in the real world. There is nothing the brain could be doing save actual supernatural magic that can't be copied by a discrete digital system.

Last edited by SamuelA; 12-30-2017 at 06:21 AM.
  #335  
Old 12-30-2017, 10:18 AM
Morgenstern is offline
Guest
 
Join Date: Jun 2007
Location: Southern California
Posts: 11,866
That's much clearer now Sam, thank you.
  #336  
Old 12-30-2017, 11:20 AM
Czarcasm's Avatar
Czarcasm is offline
Champion Chili Chef
Charter Member
 
Join Date: Apr 1999
Location: Portland, OR
Posts: 63,162
Quote:
Originally Posted by SamuelA View Post
There is nothing the brain could be doing save actual supernatural magic that can't be copied by a discrete digital system.
(Bolding mine)Damn-You were bloviating so beautifully there, then you had to go sabotage yourself subconsciously. This is an example of your own brain telling you you're full of shit, y'know?

Edited to add: It's as if you were telling someone how to get somewhere, although you had no idea where that place was, and you ended your instructions with "...then you take a left turn past the house on Pooh's Corner."

Last edited by Czarcasm; 12-30-2017 at 11:23 AM.
  #337  
Old 12-30-2017, 01:41 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
If I may make a terrible analogy...

Nuclear power is easy, right? Just take some fissionable material, bring enough of it close enough together, and you have power.

You can do the math, and show that it will work. You can do the math, and you can show how much power you can get out of every kilogram of fissionable material.

That is the level at which, I feel, SamuelA's understanding of many of the things he pontificates upon resides. Not that that puts him far behind anyone else, as that is about the level of understanding that even our best researchers are at for some of these things, like nanobots or copying brains digitally.

There is a little bit of engineering involved as well. There are potential roadblocks that may or may not be insurmountable. When nuclear power was first envisioned, they didn't think about xenon. Xenon almost ruined the whole thing, and while it was a surmountable issue, it remains a significant factor that needs to be monitored to keep your reactor operating correctly.

So, in any of these future technologies, there will be a "xenon": something completely unexpected from first principles, and something you cannot even begin to correct for until the flaw is found. (Though they did suspect that something might act like xenon, as a neutron poison that builds up as a result of nuclear activity, they did not know it would be xenon, nor how to deal with it, until they were actually doing the experiments.)

Our understanding of the brain and advanced cellular biology is around where our understanding of nuclear power was in the 20's. It seems as though there is something there to be exploited for our gain, but the exact road to realizing that, as well as the obstacles in that path are still completely unknown.

These conversations are like a 1920s nuclear advocate pushing for the creation of a fast-spectrum molten chloride salt breeder reactor, on the understanding that fission as a process works, arguing against the engineers who are actually investigating fission and how to harness it. There may be some areas where he is right, but that is not because he is smarter or better educated than the people building reactors, as they are fully aware of the math showing that bringing together fissionable materials releases energy. By only looking at the math, and ignoring the engineers who actually have practical experience with the subject upon which he pontificates, he comes to misleading conclusions at best about timelines and manners of technological progress, and often about the practicality or feasibility of a technology altogether.

Now, if you watch Isaac Arthur, I suggest you take some time off from his channel. While I find him entertaining and sometimes even educational, he does not really address the engineering or social roadblocks to his visions of the future, and just assumes that they are solved, somehow. Futurists who do not get into the nitty-gritty of how exactly the machines they envision would work serve a purpose, but they should not be taken as oracles of our future. (Sorry Isaac, I think I've gotten a couple dozen people watching your channel who weren't previously, so losing this one lost sheep for a bit should be okay.)
  #338  
Old 12-30-2017, 02:15 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,231
For those not following the details of this exciting debate, we've just seen SamuelA in action and on full display once again. To knowledgeable practitioners in cognitive science, the role of classic computationalism in mental processes remains a basic central question and locus of research and will remain so for a very long time to come (see especially Fodor's objections in 7.3 -- keeping in mind that Fodor was a proponent of CTM but understood its limitations; he was one of the foundational pioneers of modern cognitive science). But not to SamuelA, who hasn't figured out what it means yet and perhaps never will, but he knows the answer anyway -- it's trivially obvious because ... signals!

Just like it's trivially obvious that we can all become immortal and live forever because ... cells! Even if researchers who actually work in biomedicine have their doubts.

SamuelA doesn't have doubts. Our greatest scientists and philosophers may struggle with these issues but, as I said earlier, SamuelA struggles with nothing. Sure, maybe he don't write so good and maybe doesn't understand basic concepts sometimes, but that just makes the world a simple place that he will be pleased to explain -- simplistically and wrongly -- to anyone willing to listen. It's no wonder that every single poster here thinks he's an annoying moron. Despite some compassionate constructive criticism there's no sign that this is going to change, so we may as well enjoy ourselves.
  #339  
Old 12-30-2017, 02:31 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by k9bfriender View Post
There is a little bit of engineering involved as well. There are potential roadblocks that may or may not be insurmountable. When nuclear power was first envisioned, they didn't think about xenon. Xenon almost ruined the whole thing, and while it was a surmountable issue, it remains a significant factor that needs to be monitored to keep your reactor operating correctly.

So, in any of these future technologies, there will be a "xenon". Something that was completely unexpected based on first principles, (though they did suspect that something may act like xenon as being a neutron poison that builds up as a result of nuclear activity, they did not know it would be xenon, nor how to deal with it until they actually were doing the experiments.) and something that cannot even be considered how to correct for until that flaw is found.

Our understanding of the brain and advanced cellular biology is around where our understanding of nuclear power was in the 20's. It seems as though there is something there to be exploited for our gain, but the exact road to realizing that, as well as the obstacles in that path are still completely unknown.
Can I ask for you to recheck your assumptions on this?

Quote:
Originally Posted by SamuelA View Post
So if you can replace 1 synapse perfectly in theory - though it obviously isn't physically possible to do with a living brain, because biology is too fragile and unreliable - you could in theory replace 10% of them. Or 50%. Or 100%. You would also have to duplicate the rules that cause new synapses to form, duplicate the update rules, and duplicate the other analog signals the brain uses. It would by no means be an easy task.
Bolding added. Where are you getting even the idea of saying that I think the problem would not have unexpected snags?

The only way you can even begin to claim that is this: I'm saying that if we spend a small amount of money (it's cheaper than long-term medical care...) freezing the brains of terminally ill people, the chances are good that we could eventually do something useful with them. And we should plan to freeze them for up to ~300 years (about $30,000 in present-day money in LN2), because there might in fact be a great many such 'snags' that have to be worked out.

All I'm really saying is that the risk:reward ratio is worth it for many people. If you gave someone the choice of spending their last few years in a haze in an Alzheimer's ward before certain death, or undergoing a surgical procedure that might fail and might see them revived in the far future, you would get a lot of takers for the latter. And we should respect that and not consider it "murder" by our archaic understanding based half on religion.
  #340  
Old 12-30-2017, 02:44 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by wolfpup View Post
But not to SamuelA, who hasn't figured out what it means yet and perhaps never will, but he knows the answer anyway -- it's trivially obvious because ... signals!
Ok. So instead of focusing on why I feel confident in my answer, I'd like for you to explain in your own words what you think my signals argument is even based on.

What is a signal? What is noise? What is a signal to noise ratio?

If a signal that varies from 0 to 1 volts has +-0.25 volts of random noise, how many bits of information does that signal carry per sampling period?

What is an analog computer?

What is a layer of abstraction?

Can you reduce a problem through abstractions that preserve the nature of a system?

What is a Feynman diagram?
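
For the record, the 0-to-1 volt question above has a textbook back-of-envelope answer. A sketch using a crude hard-decision level count (a rigorous answer would use Shannon's channel capacity, which lands in the same ballpark):

```python
import math

signal_range = 1.0    # volts, 0 to 1
noise_pp = 0.5        # +/-0.25 V of noise, peak to peak

# Levels spaced wider apart than the peak-to-peak noise can never be
# confused, so the signal resolves about 1 + range/noise_pp levels.
levels = 1 + signal_range / noise_pp    # 3 distinguishable levels
bits_per_sample = math.log2(levels)     # ~1.58 bits per sampling period
```

So roughly one and a half bits per sample: the noise, not the "infinite" analog resolution, sets the information content.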

I genuinely don't think you actually understand what these words mean. You want to focus on philosophical musings about how an algorithm can "perceive" or be "aware". I do not care about that, I don't pretend to understand it, either.

My hypothesis is that we will eventually build machines that do these kinds of things through AI advances, but first we need to create lower-level subsystems that optimize for concrete, measurable variables in the real world. Like a classifier that reliably detects what is in front of the machine's camera. A simulator that reliably estimates the probable actions of other agents in a scene (like in an autonomous car). A planner that evaluates paths and finds the one with the least risk.

I think that meta-algorithms built on top of some AI system that analyze and try to optimize the system itself would eventually reach the level of abstraction where "perception and consciousness" is found, but that's a long time away.

Hypothetically, let's suppose that you have a cube sent back from 1000 years in the future that actually has a working, conscious (like we think of the term) AI on it. You do not have 1000 years of algorithm advances, you're not going to figure out how the people of the future did it.

But that cube is made of just a few basic logic gate types, stacked on top of each other to form a compact cube. And you have a few hundred cubes. You sacrifice some and eventually, through enough teardowns, work out the rules each logic gate uses. You build a scanning machine and scan 1 entire cube in its entirety.

Do you see how if you could perform an accurate enough scan, this 'black cube that is sentient' could be copied, even though you don't understand how it's doing it?

Assume the cube uses highly redundant circuitry and self-correcting algorithms, such that 1% scan errors will not affect function.
  #341  
Old 12-30-2017, 02:53 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
Quote:
Originally Posted by SamuelA View Post
Can I ask for you to recheck your assumptions on this?
Nah. Give me something specific, and maybe.
Quote:

Bolding added. Where are you getting even the idea of saying that I think the problem would not have unexpected snags?
One throwaway line of "It would by no means be an easy task." implies that it is just a matter of working hard enough, of wanting it badly enough, to overcome this task that you admit is not easy. Cleaning out my basement after a recent flood was by no means an easy task either.

You do not acknowledge that there may be roadblocks that stop us in our tracks entirely on a particular avenue of future technology. You don't even acknowledge that there are roadblocks that may require serious advances in seemingly unrelated fields.

You just say it won't be easy. Well, we knew that already, if it were easy, they would have done it already. The question is, is it possible, is it feasible, and is it practical? We don't know the answers to any of those questions yet, and will not for quite some time. You don't have those answers either.
Quote:
The only way you can even begin to claim that is this: I'm saying that if we spend a small amount of money (it's cheaper than long-term medical care...) freezing the brains of terminally ill people, the chances are good that we could eventually do something useful with them. And we should plan to freeze them for up to ~300 years (about $30,000 in present-day money in LN2), because there might in fact be a great many such 'snags' that have to be worked out.
I could say that about any of your futurism claims, from nanobots to redirecting asteroids. It's not just a matter of money, and it's not just a matter of research. Part of it is whether or not the universe actually works that way.
Quote:
All I'm really saying is that the risk:reward ratio is worth it for many people. If you gave someone the choice of spending their last few years in a haze in an Alzheimer's ward before certain death, or undergoing a surgical procedure that might fail and might see them revived in the far future, you would get a lot of takers for the latter. And we should respect that and not consider it "murder" by our archaic understanding based half on religion.
I actually agree with doctor assisted suicide for terminal patients with low quality of life, so I have no problem with someone making that decision for themselves. But that is how I would see it, as doctor assisted suicide, not as life extension. I don't know how many takers you would get, but I would not be among them.
  #342  
Old 12-30-2017, 03:32 PM
Czarcasm's Avatar
Czarcasm is offline
Champion Chili Chef
Charter Member
 
Join Date: Apr 1999
Location: Portland, OR
Posts: 63,162
I wish there was a smiley that meant "...but they laughed at Bozo, too."
  #343  
Old 12-30-2017, 03:46 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by k9bfriender View Post
I could say that about any of your futurism claims, from nanobots to redirecting asteroids. It's not just a matter of money, and it's not just a matter of research. Part of it is whether or not the universe actually works that way.
Ok. So what possible laws of physics could even exist that could allow our cells to work but prevent nanobots from working? Could allow nukes to detonate and rocket engines to work, but prevent us from redirecting asteroids? Could allow a collection of machines running in saltwater, reading a program encoded in base-4 that is actually fairly short for its complexity, to generate a sentient mind, but not let us copy that mind?

Do you see how implausible your claims are? I am not making a specific timeline claim, other than "probably under 300 years". I don't know when this tech will work out. We thought there might be flying cars in the 1990s. There weren't, but the idea hasn't been totally abandoned and there's actually a real possibility of some sort of automated aerial taxi service with all the advances we have today.

You're making a mental error in the opposite direction of what you claim I am.
  #344  
Old 12-30-2017, 04:39 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,231
Quote:
Originally Posted by SamuelA View Post
You want to focus on philosophical musings about how an algorithm can "perceive" or be "aware". I do not care about that, I don't pretend to understand it, either.
It has nothing to do with "philosophical musings". You're totally not getting it and continuing to spew amusing bullshit. Fodor, whom you sneeringly dismiss as just a "philosopher" that I'm "parroting" because you don't like (and don't understand) what he's saying, was one of the foundational theorists of modern cognitive science -- not merely a "philosopher" but a proponent of some of the most important concrete theories about how the mind works.

The operative principle is that some mental processes appear to be computational -- that is, syntactic operations on symbolic representations called propositional representations -- while many others are not. How we process mental images is a classic case where the evidence is at least somewhat contradictory. There is very, very much about how the mind works that we currently don't understand. You, OTOH, are trying to argue that not only are all mental processes computational, but the brain itself is a computer, because ... signals! It therefore follows in your simple brain that, obviously, a digital computer can emulate the human mind. Many serious theorists doubt this, but even if it were true, it doesn't actually tell us how the brain works.

My own belief stems from the functionalist view of cognition -- that mental states are defined by what they do rather than how they are instantiated, and so I believe that a digital computer with suitable software will eventually be indistinguishable from the human brain and greatly exceed its capabilities in most respects. But it will achieve these goals in vastly different ways. The brain is not a computer and this hypothetical computer will not be a brain, even though both can think, just like -- as I said in a different thread on the same subject -- a Boeing 747 is not a sparrow, even though both can fly. Your argument about "signals" and noise etc. is an argument from ignorance, apparently stemming from a few things that you may know a little about but revealing many things that you apparently know nothing at all about.
  #345  
Old 12-30-2017, 04:54 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by wolfpup View Post
The operative principle is that some mental processes appear to be computational -- that is, syntactic operations on symbolic representations called propositional representations -- while many others are not. How we process mental images is a classic case where the evidence is at least somewhat contradictory. There is very, very much about how the mind works that we currently don't understand. You, OTOH, are trying to argue that not only are all mental processes computational, but the brain itself is a computer, because ... signals!
Ok, maybe we can finally get some convergence.

I am saying that from observable sub-processes in the brain (those signals), we can show that the physical matter is performing something similar enough to computation as we understand it that we can mimic it.

And we know that if we have a black box we don't understand that emits signals, and we respond with signals close enough to the correct signals that physical reality provides no reliable way of distinguishing them from the correct ones (because the ones we send are accurate to within the threshold allowed by noise), we have replaced the black box with a box we do understand.

And if you can do that, you can get brain-equivalent outputs from a machine that isn't a brain, making what the brain does functionally the same as computation.

So yeah the signals argument is crucially important, and it's also obviously correct.

To disprove it, you would have to discover some processing a synapse does that produces output pulses you can't reliably emulate with a digital machine.

As for the higher level stuff - again, if you built a computer system using neural networks that was even 1% as complex as the brain, with a self modifying architecture, with all kinds of crazy deep connections between layers - you'd probably also notice strange outputs that are hard to correlate to any model of computation you understand.

Even trivial neural networks can easily become a black box to humans.

Anyways, instead of just repeating over and over that "signals" isn't a valid argument, think about it. Mentally isolate a single synapse. What if you were emulating that synapse badly? How badly do you have to do before the receiver on the other end can tell you're "different" than before? If the environment had no noise, any deviation could be detected. But what if all the signals you send and receive are garbled anyway?

And if you can subdivide the brain into trillions of tiny black boxes around each axon, and mentally swap those boxes with equivalent boxes, why would you not get the same outcome when you look at how the visual cortex processes things? What principle of physical reality allows the outcome to be different?

Last edited by SamuelA; 12-30-2017 at 04:58 PM.
  #346  
Old 12-30-2017, 05:31 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
Quote:
Originally Posted by SamuelA View Post
Ok. So what possible laws of physics could even exist that could allow our cells to work but prevent nanobots from working? Could allow nukes to detonate and rocket engines to work, but prevent us from redirecting asteroids? Could allow a collection of machines running in saltwater, reading a program encoded in base-4 that is actually fairly short for its complexity, to generate a sentient mind, but not let us copy that mind?

Do you see how implausible your claims are? I am not making a specific timeline claim, other than "probably under 300 years". I don't know when this tech will work out. We thought there might be flying cars in the 1990s. There weren't, but the idea hasn't been totally abandoned and there's actually a real possibility of some sort of automated aerial taxi service with all the advances we have today.

You're making a mental error in the opposite direction of what you claim I am.
Tell you what: you come up with, from first principles and the knowledge they would have had in the '30s and early '40s, the prediction that xenon would be produced by fission of U-235, would act as a neutron poison, and would have a half-life of a few hours. Then show, with the same knowledge, that there would not be a build-up of other poisons with much longer half-lives that would interfere with a nuclear reaction to the extent of making a reactor essentially impossible to run.

If I had said to someone of your certainty that there may be problems in building a nuclear reactor, would you ask me what laws of physics could exist that could interfere with getting a sustained chain reaction?

It's not the laws of physics, it is how they end up working together to make more complicated things that serve our needs that is the difficulty.

Now, as far as nano-bots, I see any future in that as being modified cells, not tiny robots. Cellular machinery doesn't work like macroscopic machinery works at all, it's not servos and actuators, it is hydrophobic and hydrophilic surfaces interacting (far more complicated than that, but that's a start). That you feel that you can use these properties to make little robots that will do your bidding is not a straightforward proposition. It may be possible, but there is no real roadmap to that, nor any real research that indicates that it is certainly possible, it's more of a maybe.
  #347  
Old 12-30-2017, 05:34 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
You know, I didn't actually understand the nature of signals even after I took signals and systems. It was all just a bunch of busywork math and manually computing transforms by hand.

This video actually provides a tremendous amount of insight : https://youtu.be/cIQ9IXSUzuM

Once you understand it, you'll realize that a neural impulse is not a square impulse. The edges are blurred. It looks exactly like the frequency-limited signal shown in this video when they demonstrate what a square wave really looks like. Which means the same techniques apply, including sampling at a finite rate.
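
A sketch of that point. The 250 Hz "blurred square" below is an invented stand-in for a neural pulse - band-limited to 750 Hz and sampled at an assumed 10 kHz - and the claim being illustrated is that any such band-limited signal can be rebuilt between its samples:

```python
import math

FS = 10_000.0    # assumed sampling rate, Hz
F0 = 250.0       # fundamental of the test pulse train, Hz
N = 400          # 40 ms worth of samples

def blurred_square(t):
    # First two odd harmonics of a square wave: band-limited to 750 Hz,
    # so its "edges" are blurred like any real, finite-bandwidth pulse.
    return math.sin(2 * math.pi * F0 * t) + math.sin(2 * math.pi * 3 * F0 * t) / 3

samples = [blurred_square(n / FS) for n in range(N)]

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(t):
    # Whittaker-Shannon interpolation: the discrete samples alone recover
    # the analog value at any time t (up to truncation error at the edges).
    return sum(s * sinc(FS * t - n) for n, s in enumerate(samples))
```

Evaluate `reconstruct` halfway between two interior samples and it lands on the true analog waveform to within a small truncation error, which is the whole point: finite-rate samples of a band-limited signal lose essentially nothing.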

Or, in neuroscience terms, I'm saying that since we can put a Chinese Room around each individual synapse - and we can definitely do this - and since physical reality doesn't let us determine whether a given synapse actually "knows Chinese" or not, we have pretty definitive proof that what the brain is doing is functionally computation.
  #348  
Old 12-30-2017, 05:39 PM
SamuelA is offline
BANNED
 
Join Date: Feb 2017
Posts: 3,903
Quote:
Originally Posted by k9bfriender View Post
Tell you what: you come up with, from first principles and the knowledge they would have had in the '30s and early '40s, the prediction that xenon would be produced by fission of U-235, would act as a neutron poison, and would have a half-life of a few hours. Then show, with the same knowledge, that there would not be a build-up of other poisons with much longer half-lives that would interfere with a nuclear reaction to the extent of making a reactor essentially impossible to run.

If I had said to someone of your certainty that there may be problems in building a nuclear reactor, would you ask me what laws of physics could exist that could interfere with getting a sustained chain reaction?

It's not the laws of physics, it is how they end up working together to make more complicated things that serve our needs that is the difficulty.

Now, as far as nano-bots, I see any future in that as being modified cells, not tiny robots. Cellular machinery doesn't work like macroscopic machinery works at all, it's not servos and actuators, it is hydrophobic and hydrophilic surfaces interacting (far more complicated than that, but that's a start). That you feel that you can use these properties to make little robots that will do your bidding is not a straightforward proposition. It may be possible, but there is no real roadmap to that, nor any real research that indicates that it is certainly possible, it's more of a maybe.
With chain reactions, no matter how nasty the neutron poisoning happened to be, you could have always increased reactivity to overcome it. Even if you end up with a reactor that is basically just a lump of U-235 gas in a centrifuge at high pressure. The chain reaction is so powerful that you can probably find a way to make it work.

As for nanobots, you're ignoring that we have made prototypes of motors and gears and checked the math on more complex little structures that we can't build yet but that mechanically work.

If you look at nature you see countless sloppy little mechanisms that all definitely work. So you'd have to be really over-skeptical to think you can't make your own, better mechanisms of the same class of thing that do your bidding.

And I see you just ignored the redirecting the asteroids one because there's no traction there. We already checked the math on that, that works unless the asteroid is extraordinarily large or you have very little time to react.
  #349  
Old 12-30-2017, 05:59 PM
k9bfriender is offline
Guest
 
Join Date: Jul 2013
Posts: 11,564
Quote:
Originally Posted by SamuelA View Post
With chain reactions, no matter how nasty the neutron poisoning happened to be, you could always increase reactivity to overcome it. Even if you end up with a reactor that is basically just a lump of U-235 gas in a centrifuge at high pressure, the chain reaction is so powerful that you can probably find a way to make it work.

As for nanobots, you're ignoring that we have made prototypes for motors and gears, and have checked the math on more complex little structures that we can't build yet but that would mechanically work.

If you look at nature you see countless sloppy little mechanisms that all definitely work. So you'd have to be really over-skeptical to think you can't make your own, better mechanisms of the same class that do your bidding.

And I see you just ignored the asteroid-redirection one because there's no traction for you there. We already checked the math on that; it works unless the asteroid is extraordinarily large or you have very little time to react.
You know, I really am on your side philosophically, I just feel that you are a bit too adamant about things that you don't know, because no one knows them.

I am not here to try to argue with you point for point on all your claims. I really don't have time for that. I think that you are actually a fairly intelligent and optimistic young man, and that you do have some fun ideas that are worth exploring.

But you try to come across as THE expert in every field, and you are not. There are plenty of people on this board who actually are experts in the fields in which you pontificate, and you could learn a lot from them. Instead, you insult them and claim that you are right and they are wrong, even though you know little more than the first principles of the subject.

Like I said about nuclear, it seems really easy: throw together some radioactive material, and there you go. But as one becomes an expert in that field, one realizes that there are many little things that make it a much less straightforward proposition. That is the part that you refuse to accept, and it is incredibly frustrating.

Try something new, try entering into a conversation with the assumption that you know less than the person with whom you are speaking. Just try it once. I bet that you will find that you learned something new, something you never would have learned if you start the conversation by declaring that you are the expert, and that anyone who disagrees with you is wrong.

Just try it once. You may be surprised.
  #350  
Old 12-30-2017, 06:00 PM
wolfpup's Avatar
wolfpup is offline
Guest
 
Join Date: Jan 2014
Posts: 11,231
Quote:
Originally Posted by SamuelA View Post
Ok, maybe we can finally get some convergence.

I am saying that from observable sub-processes in the brain (those signals), we can show that the physical matter is performing something that is similar enough to computation we understand that we can mimic it.
Ah -- "something that is similar enough to computation"! IOW, you're right as you always are, provided we redefine "computation" to mean some arbitrary thing that you just thought of, instead of what it actually means in computer science and cognitive science.

I'll remind you that this particular discussion goes back to here, where you claimed that the brain is just "thousands of physical computational circuits" and then doubled down on the stupid by claiming that all cognitive processes are computational, and apparently everybody knows that -- at least, everyone as brilliant as you fancy yourself. And then you started handing out homework assignments.

Turns out, you were wrong. As I repeatedly showed you, there is considerable controversy about whether even some of our mental processes are truly computational, and virtually no serious cognitive scientist believes that computational processes explain all of cognition. I happen to be a proponent of CTM, but it's based on its power to explain empirical cognitive phenomena and not on absurdly irrelevant arguments about "signaling" properties, which is like "here's something I just learned about in school, so I'm going to bloviate about it even though it has absolutely nothing to do with the discussion".