Prove it’s not Photoshopped. There’s a pixel or two that look suspect.
But I don’t get it (and I can’t see the image you link to). While participating in this Pit thread, you create an “ignore list” of posters you will be ignoring in other threads?
Sounds strange. If you, for instance, find Darren insulting in this Pit thread, you write down his name so you remember to ignore him if he, for instance, posts in your GQ Boat thread?
Internet blowhards always are. Don’t you know cited etymologies and usage are unreliable, and uncited personal opinion is always right?
Having the last word isn’t the same as being right.
Sometimes it just means you are using a public forum to convince yourself that you are right, and that can look pretty pathetic.
Ah, but the Internet Scientist Warrior can tell himself that his opponents were struck dumb by his brilliance.
However, we all know who was struck by dumb.
I’m thinking samuelA**** may have been abused and/or molested by teachers. Has he mentioned being homeschooled?
Does look pretty pathetic.
This blog post is suitable here.
Singularitarians. I like it.
I think it’s SamuelA’s biography.
I was going to leave this alone, but since this tribute to your genius has been revived, I feel I must add a few comments.
SamuelA, I am thoroughly sick and tired of your fucking bullshit. You are truly a fucking moron. You asked me for “an MIT paper” contradicting the computational theory of mind. I gave you one. I don’t know why it had to be “an MIT paper” or what you meant by that – Kosslyn is actually at Harvard, but that particular journal is published by MIT Press, so I hope it meets your stellar criteria.
The problem here, SamuelA, is that you didn’t fucking understand it, so you just ignored it. And I can’t help that, nor the fact that you apparently don’t have a clue about what is significant about it (I don’t agree with it, FTR, but it’s an example of the controversy that exists). We already know that you don’t understand most of the stuff you pontificate about, but it’s astounding that someone who claims to have majored in CS doesn’t understand what a computational paradigm is. As Alan Turing might have told you – or indeed, Charles Babbage many years before that – it has nothing whatsoever to do with signaling or the propagation of electrical pulses that you’ve been bloviating about. The broad questions that are being asked are along the lines of: is the brain a finite-state automaton? Can it be emulated by a system that is Turing complete? In pragmatic terms, the questions in cognitive science center around whether cognitive processes consist of syntactic operations on symbolic representations in a manner that can be emulated by a computational system that is Turing complete, or whether perceptual subsystems like the visual cortex are involved, as Kosslyn claims.
The evidence is contradictory, hence the debate. On the pro-CTM side we find that mental image processing is significantly different from perceptual image processing in being influenced by pre-existing knowledge and beliefs, and therefore operates at a higher level of cognitive abstraction. In that paper, Kosslyn tried to show the opposite.
The best summary of it all is perhaps that of the late Jerry Fodor, a pioneer of cognitive science and a strong proponent of CTM despite his acknowledgement of its limitations. Fodor passed away just a few weeks ago, a great loss to everyone who knew him and to the scientific community. He had this to say in the introduction to a book he published seventeen years ago:
There are facts about the mind that [computational theory] accounts for and that we would be utterly at a loss to explain without it; and its central idea – that intentional processes are syntactic operations defined on mental representations – is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.
But it hadn’t occurred to me that anyone could suppose that it’s a very large part of the truth; still less that it’s within miles of being the whole story about how the mind works … I certainly don’t suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology …
– Jerry Fodor, The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology, MIT Press, July 2000
But hey, SamuelA, look at the bright side. At least your fucking stupid digression about electricity and signaling let you work in the phrase “node of Ranvier”, so there’s that. Those are mighty big words for someone who thinks a “tenant” is a principle or doctrine in science or philosophy. Trust me, a “tenant” is someone who rents your apartment and pays you rent. Too bad you fucked up here yet again: since it’s named after the French histologist Louis-Antoine Ranvier, the word “Ranvier” in that phrase is by convention capitalized as a proper name. Seems you just can’t win for losing. To avoid this sort of embarrassment in the future, maybe you should stick to using small words and avoid terminology that you’re unfamiliar with.
Then you’ll like this essay. (Then you’ll like this, esé.)
To someone familiar with the field, you just revealed that you’re the moron here. The fact that you dismiss my correct analysis of discretizing signals - something I do real work with daily, I don’t write blogs - as “{blah blah bloviate bloviate as the sphincter opens to full emission capacity}” means you simply lack the background to parse what I wrote.
You obviously are not qualified to comment on the brain at all.
If you were, you would realize that if a system’s behavior can be captured in a truth table, it can be emulated by any Turing-complete system. Period. And if another system implements the same table, it cannot be distinguished from the first system in the real world if you don’t know which system is which.

And the evidence is absolutely overwhelming that both the brain’s signaling itself and the processing that produces equivalent signals can be emulated by a digital approximation. This has been demonstrated in real-world experiments, including the replacement of regions of rat brains with chips.

Since you don’t seem to get it, let me say that what I am focusing on is a synapse-by-synapse, axon-by-axon copying of a brain - any brain - sentient or not. If you can copy each individual part and build a low-level emulation, all the high-level processing must work, in the same way that you can’t tell whether a person knows Chinese or just has a really great Chinese room/symbol table for all their responses.
It may be philosophically disturbing and it may be difficult to explain the algorithms in terms of current theories, but that’s kind of irrelevant to whether one system can be shown to be functionally equivalent to another.
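To make the truth-table point concrete, here’s a toy sketch (Python; the 2-input “system” is made up purely for illustration, standing in for something far more complex):

```python
from itertools import product

# A hypothetical "system": some 2-input boolean gadget (made up for illustration).
def original_system(a: int, b: int) -> int:
    return (a ^ b) | (a & b)  # whatever internal logic it happens to use

# "Copy" it by recording its truth table, with zero knowledge of the internals.
truth_table = {(a, b): original_system(a, b) for a, b in product((0, 1), repeat=2)}

def copied_system(a: int, b: int) -> int:
    return truth_table[(a, b)]  # pure lookup - no "understanding" of the logic

# From the outside, the two are indistinguishable on every possible input.
for a, b in product((0, 1), repeat=2):
    assert original_system(a, b) == copied_system(a, b)
print("copy is functionally identical on all inputs")
```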
I also smirk at your obsession with “tenant” vs. tenet. Somehow I doubt you have ever taken undergrad, much less grad school, neuroscience, and you obviously skipped out on signals and systems. (By the way, that’s a senior-level course that also appears in graduate catalogs; it isn’t freshman year.) Someone who was smart enough to do more than parrot philosophers could focus on the meat of the argument instead of just claiming it’s incorrect by asspull.

For the rest of the audience, let me explain in simpler detail what I mean.
Let’s suppose you have an analog computing circuit that adds together voltage 1 and voltage 2 to produce the output.

Common sense would say that if it’s an analog circuit, then if voltage 1 is 1.000000000000001 volts and voltage 2 is 1.000000001 volts, the output would be 2.000000001000001 volts.

And in fact this system would be infinitely precise. Any teensy change in the inputs will lead to the same teensy change on the outputs.

So common sense says you wouldn’t be able to replace this circuit with a digital system, which uses discrete values. Let’s say that for technical reasons, voltages 1 and 2 range from 0 to 3 volts, and you use an 8-bit digital adder along with 8-bit ADCs and DACs. That means you’ve discretized the signal into 256 levels, so the digital system is only accurate to a step of 3/255 ≈ 0.0118 volts.

When you add noise into the mix, though, things get interesting. Suppose the adding circuit itself injects 10% random voltage leakage, peak to peak. This leakage arises because nearby circuits (it’s packed very, very tightly) inadvertently induce voltages into this circuit some of the time. So the analog circuit is only accurate to ±0.15 volts, and the digital equivalent is more than 10 times better.
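For anyone who wants to check that arithmetic, here’s a minimal sketch (Python; the 0–3 V range, 8-bit ADC, and 10% peak-to-peak noise are the numbers from the example above):

```python
import random

V_MAX = 3.0               # inputs range from 0 to 3 volts
STEP = V_MAX / 255        # 8-bit ADC: 256 levels, one step ≈ 0.0118 V
NOISE_PP = 0.10 * V_MAX   # 10% peak-to-peak leakage -> ±0.15 V

def quantize(v):
    return round(v / STEP) * STEP  # snap to the nearest ADC level

def analog_add(v1, v2):
    # The "infinitely precise" analog adder, once real coupling noise is included.
    return v1 + v2 + random.uniform(-NOISE_PP / 2, NOISE_PP / 2)

def digital_add(v1, v2):
    # ADC -> exact digital sum -> DAC; worst-case error is about one step.
    return quantize(v1) + quantize(v2)

v1, v2 = random.uniform(0, V_MAX / 2), random.uniform(0, V_MAX / 2)
true_sum = v1 + v2
print("analog error: ", abs(analog_add(v1, v2) - true_sum))   # up to ~0.15 V
print("digital error:", abs(digital_add(v1, v2) - true_sum))  # up to ~0.012 V
```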
The brain uses both analog voltages and analog timing pulses. Both are, as it turns out in modern experiments, horrifically noisy.
Let’s suppose you wanted to do a lot less than understand consciousness, visual processing, or even what a given functional brain region was doing. All you wanted to do was copy the function of a single synapse. So you build a very teensy computer chip, you paint the electrodes with growth factors, and you, for the sake of argument, have an electrically equivalent connection to the input and output axons for a single synapse. You can observe both the inputs and the outputs, and once you are confident in your model, remove the synapse and replace it.

Say it’s a simple one. There are 10 input signals and 1 output. All I/O is all-or-nothing (1 or 0), but the events do happen at exact times. Analog timings, actually…

So again, if you use a digital system, it has a discrete clock. It might run at 1 MHz, meaning you cannot subdivide time any more finely than 1 microsecond. But… by the exact same argument as above, due to noise, you only need to do somewhere between 2 and 10 times better than the analog system to have a digital replacement.

Similarly, the brain possibly does some really tricky stuff at synapses. But whatever tricky stuff it does is heavily contaminated by noise, so in reality you again don’t need to do all that well. Newer research indicates you might need a sequence predictor in your model, for example, but it need not be a particularly high-resolution one.
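Same idea applied to spike timing, as a rough sketch (Python; the 1 MHz clock is from the example above, but the ~1 ms of biological jitter is my own illustrative assumption, not a measured figure):

```python
import random

TICK = 1.0 / 1_000_000   # 1 MHz digital clock -> 1 microsecond resolution
BIO_JITTER = 1e-3        # assumed ~1 ms of biological timing noise (illustrative)

true_spike_time = 0.0123456  # some "exact" analog spike time, in seconds

# What the noisy biology actually delivers:
biological_time = true_spike_time + random.gauss(0, BIO_JITTER)

# What the digital replacement delivers: the exact time, snapped to a 1 µs grid.
digital_time = round(true_spike_time / TICK) * TICK

print("biology's own timing error:", abs(biological_time - true_spike_time))  # ~1e-3 s
print("digital quantization error:", abs(digital_time - true_spike_time))     # <= 5e-7 s
```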
So if you can replace 1 synapse perfectly - in theory, though it obviously isn’t physically possible to do with a living brain because biology is too fragile and unreliable - you could in theory replace 10% of them. Or 50%. Or 100%. You would also have to duplicate the rules that cause new synapses to form, duplicate the update rules, and duplicate the other analog signals the brain uses. It would by no means be an easy task.
However, this argument is ‘standing on the shoulders’ of many giants who have perfected their signal processing theories over decades. It’s bulletproof. There are no circumstances under which this hypothetical brain copying would not function in the real world. There is nothing the brain could be doing save actual supernatural magic that can’t be copied by a discrete digital system.
That’s much clearer now Sam, thank you.
(Bolding mine) Damn - you were bloviating so beautifully there, then you had to go sabotage yourself subconsciously. This is an example of your own brain telling you you’re full of shit, y’know?
Edited to add: It’s as if you were telling someone how to get somewhere, although you had no idea where that place was, and you ended your instructions with “…then you take a left turn past the house on Pooh’s Corner.”
If I may make a terrible analogy…
Nuclear power is easy, right? Just take some fissionable material, bring enough of it close enough together, and you have power.
You can do the math, and show that it will work. You can do the math, and you can show how much power you can get out of every kilogram of fissionable material.
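Here’s that math, back of the envelope (a sketch using standard textbook figures - roughly 200 MeV released per U-235 fission):

```python
# Rough energy yield from fully fissioning 1 kg of U-235.
MEV_PER_FISSION = 200              # ~200 MeV released per fission event
J_PER_MEV = 1.602e-13              # joules per MeV
ATOMS_PER_KG = 6.022e23 / 0.235    # Avogadro's number / molar mass in kg/mol

energy_j = MEV_PER_FISSION * J_PER_MEV * ATOMS_PER_KG
print(f"{energy_j:.2e} J per kg")             # ~8.2e13 J
print(f"{energy_j / 3.6e12:.0f} GWh per kg")  # ~23 GWh of heat
```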
That is the level at which I feel SamuelA’s understanding of many of the things he pontificates upon resides. Not that that puts him far behind anyone else, as that is about the level of understanding even our best researchers have for some of these things, like nanobots or copying brains digitally.
There is a little bit of engineering involved as well. There are potential roadblocks that may or may not be insurmountable. When nuclear power was first envisioned, they didn’t think about xenon. Xenon almost ruined the whole thing, and while it was a surmountable issue, it remains a significant factor that needs to be monitored to keep your reactor operating correctly.
So, in any of these future technologies, there will be a “xenon”: something completely unexpected from first principles, and something you can’t even begin to correct for until the flaw is found. (They did suspect that something might act like xenon, as a neutron poison that builds up as a result of nuclear activity, but they did not know it would be xenon, nor how to deal with it, until they were actually doing the experiments.)
Our understanding of the brain and advanced cellular biology is around where our understanding of nuclear power was in the 20’s. It seems as though there is something there to be exploited for our gain, but the exact road to realizing that, as well as the obstacles in that path are still completely unknown.
These conversations are like a 1920s nuclear advocate pushing for the creation of a fast-spectrum molten chloride salt breeder reactor, on the understanding that fission as a process works, arguing against the engineers who are actually investigating fission and how to harness it. There may be some areas where he is right, but that is not because he is smarter or better educated than the people building reactors; they are fully aware of the math showing that bringing fissionable materials together releases energy. By looking only at the math, and ignoring the engineers who actually have practical experience with the subject upon which he pontificates, he comes to misleading conclusions at best about the timelines and manner of technological progress, and often about the practicality or feasibility of a technology altogether.
Now, if you watch Isaac Arthur, I suggest you take some time off from his channel. While I find him entertaining and sometimes even educational, he does not really address the engineering or social roadblocks to his visions of the future, and just assumes that they are solved, somehow. Futurists who do not get into the nitty gritty of how exactly the machines they envision work serve a purpose, but they should not be taken as oracles of our future. (Sorry Isaac, I think I’ve gotten a couple dozen people watching your channel who were not previously, so losing this one lost sheep for a bit should be okay.)
For those not following the details of this exciting debate, we’ve just seen SamuelA in action and on full display once again. To knowledgeable practitioners in cognitive science, the role of classic computationalism in mental processes remains a basic central question and locus of research and will remain so for a very long time to come (see especially Fodor’s objections in 7.3 – keeping in mind that Fodor was a proponent of CTM but understood its limitations; he was one of the foundational pioneers of modern cognitive science). But not to SamuelA, who hasn’t figured out what it means yet and perhaps never will, but he knows the answer anyway – it’s trivially obvious because … signals!
Just like it’s trivially obvious that we can all become immortal and live forever because … cells! Even if researchers who actually work in biomedicine have their doubts.
SamuelA doesn’t have doubts. Our greatest scientists and philosophers may struggle with these issues but, as I said earlier, SamuelA struggles with nothing. Sure, maybe he don’t write so good and maybe doesn’t understand basic concepts sometimes, but that just makes the world a simple place that he will be pleased to explain – simplistically and wrongly – to anyone willing to listen. It’s no wonder that every single poster here thinks he’s an annoying moron. Despite some compassionate constructive criticism there’s no sign that this is going to change, so we may as well enjoy ourselves.
Can I ask for you to recheck your assumptions on this?
Bolding added. Where are you even getting the idea that I think the problem would not have unexpected snags?
The only way you can even begin to claim that is this: I’m saying that if we spend a small amount of money (it’s cheaper than long-term medical care…) freezing the brains of terminally ill people, the chances are good that we could eventually do something useful with them. And we should plan to freeze them for up to ~300 years (about $30,000 in present-day money in LN2) because there might in fact be a great many such ‘snags’ that have to be worked out.

All I’m really saying is that the risk:reward ratio is worth it for many people. If you gave someone the choice of spending their last few years in a haze in an Alzheimer’s ward before certain death, or undergoing a surgical procedure that might fail and *might* see them revived in the far future, you would get a lot of takers for the latter. And we should respect that and not consider it “murder” by our archaic understanding based half on religion.
Ok. So instead of focusing on why I feel confident in my answer, I’d like for you to explain in your own words what you think my signals argument is even based on.
What is a signal? What is noise? What is a signal to noise ratio?
If a signal that varies from 0 to 1 volts has ±0.25 volts of random noise, how many bits of information does that signal carry per sampling period? (See the sketch after this list.)
What is an analog computer?
What is a layer of abstraction?
Can you reduce a problem through abstractions that preserve the nature of a system?
What is a Feynman diagram?
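(For the audience, here’s the kind of back-of-the-envelope answer that signal-to-noise question is fishing for - a sketch using one crude rule of thumb, not the full Shannon treatment:)

```python
import math

V_RANGE = 1.0    # signal spans 0 to 1 volt (from the question above)
NOISE_PP = 0.5   # ±0.25 V of noise -> 0.5 V peak to peak

# Crude rule of thumb: you can only distinguish levels spaced wider than the noise.
levels = 1 + V_RANGE / NOISE_PP
bits = math.log2(levels)
print(f"~{levels:.0f} distinguishable levels -> ~{bits:.2f} bits per sample")
# ~3 levels -> ~1.58 bits; the point is that the noise floor caps the information.
```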
I genuinely don’t think you actually understand what these words mean. You want to focus on philosophical musings about how an algorithm can “perceive” or be “aware”. I do not care about that, and I don’t pretend to understand it either.

My hypothesis is that we will eventually build machines that do these kinds of things through AI advances, but first we need to create lower-level subsystems that optimize for concrete, measurable variables in the real world: a classifier that reliably detects what is in front of the machine’s camera, a simulator that reliably estimates the probable actions of other agents in a scene (as in an autonomous car), a planner that evaluates paths and finds the one with the least risk.
I think that meta-algorithms built on top of some AI system that analyze and try to optimize the system itself would eventually reach the level of abstraction where “perception and consciousness” is found, but that’s a long time away.
Hypothetically, let’s suppose you have a cube, sent back from 1000 years in the future, that actually has a working, conscious (as we think of the term) AI on it. You do not have 1000 years of algorithm advances; you’re not going to figure out how the people of the future did it.

But that cube is made of just a few basic logic gate types, stacked on top of each other to form a compact cube, and you have a few hundred cubes. You sacrifice some and eventually, through enough teardowns, work out the rules each logic gate uses. You build a scanning machine and scan 1 entire cube in its entirety.
Do you see how if you could perform an accurate enough scan, this ‘black cube that is sentient’ could be copied, even though you don’t understand how it’s doing it?
Assume the cube uses highly redundant circuitry and self-correcting algorithms, such that 1% scan errors will not affect function.
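To make the scan-and-copy idea concrete, here’s a toy sketch (Python; the four-gate netlist is a trivial made-up stand-in for the cube, just to show the principle):

```python
# Toy "scan" output: a netlist of gates recovered from teardowns, with no idea
# what it computes. Format: output_wire -> (gate_type, input_wires).
NETLIST = {
    "n1": ("NAND", ("a", "b")),
    "n2": ("NAND", ("a", "n1")),
    "n3": ("NAND", ("b", "n1")),
    "out": ("NAND", ("n2", "n3")),  # this happens to be XOR, but we needn't know that
}

GATES = {"NAND": lambda x, y: 1 - (x & y)}

def emulate(inputs):
    """Evaluate the scanned netlist wire by wire; no understanding required."""
    wires = dict(inputs)
    for out, (gate, ins) in NETLIST.items():  # netlist listed in dependency order
        wires[out] = GATES[gate](*(wires[w] for w in ins))
    return wires["out"]

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", emulate({"a": a, "b": b}))
```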