Technology doesn't work that way - SamuelA's Pit Thread

Nah. Give me something specific, and maybe.

One throwaway line of “It would by no means be an easy task” implies that it is just a matter of working hard enough, of wanting it badly enough, to overcome this task that you admit is not easy. Cleaning out my basement after a recent flood was by no means an easy task either.

You do not acknowledge that there may be roadblocks that stop us in our tracks entirely on a particular avenue of future technology. You don’t even acknowledge that there are roadblocks that may require serious advances in seemingly unrelated fields.

You just say it won’t be easy. Well, we knew that already; if it were easy, it would have been done already. The question is: is it possible, is it feasible, and is it practical? We don’t know the answers to any of those questions yet, and we won’t for quite some time. You don’t have those answers either.

I could say that about any of your futurism claims, from nanobots to redirecting asteroids. It’s not just a matter of money, and it’s not just a matter of research. Part of it is whether or not the universe actually works that way.

I actually agree with doctor-assisted suicide for terminal patients with low quality of life, so I have no problem with someone making that decision for themselves. But that is how I would see it: as doctor-assisted suicide, not as life extension. I don’t know how many takers you would get, but I would not be among them.

I wish there was a smiley that meant “…but they laughed at Bozo, too.”

Ok. So what possible laws of physics could even exist that would allow our cells to work but prevent nanobots from working? That would allow nukes to detonate and rocket engines to work, but prevent us from redirecting asteroids? That would allow a collection of machines running in saltwater, reading a program (actually fairly short for its complexity) encoded in base-4, to generate a sentient mind, but not let us copy that mind?

Do you see how *implausible* your claims are? I am not making a specific timeline claim, other than “probably under 300 years”. I don’t know when this tech will work out. We thought there might be flying cars in the 1990s. There weren’t, but the idea hasn’t been totally abandoned, and there’s actually a real possibility of some sort of automated aerial taxi service with all the advances we have today.

You’re making a mental error in the opposite direction of what you claim I am.

It has nothing to do with “philosophical ruminations”. You’re totally not getting it and continuing to spew amusing bullshit. Fodor, who you sneeringly dismiss as just a “philosopher” that I’m “parroting” because you don’t like (and don’t understand) what he’s saying, was one of the foundational theorists of modern cognitive science – not merely a “philosopher” but a proponent of some of the most important concrete theories about how the mind works.

The operative principle is that some mental processes appear to be computational – that is, syntactic operations on symbolic representations called propositional representations – while many others are not. How we process mental images is a classic case where the evidence is at least somewhat contradictory. There is very, very much about how the mind works that we currently don’t understand. You, OTOH, are trying to argue that not only are all mental processes computational, but the brain itself is a computer, because … signals! It therefore follows in your simple brain that, obviously, a digital computer can emulate the human mind. Many serious theorists doubt this, but even if it were true, it doesn’t actually tell us how the brain works.

My own belief stems from the functionalist view of cognition – that mental states are defined by what they do rather than how they are instantiated, and so I believe that a digital computer with suitable software will eventually be indistinguishable from the human brain and greatly exceed its capabilities in most respects. But it will achieve these goals in vastly different ways. The brain is not a computer and this hypothetical computer will not be a brain, even though both can think, just like – as I said in a different thread on the same subject – a Boeing 747 is not a sparrow, even though both can fly. Your argument about “signals” and noise etc. is an argument from ignorance, apparently stemming from a few things that you may know a little about but revealing many things that you apparently know nothing at all about.

Ok, maybe we can finally get some convergence.

I am saying that from observable sub-processes in the brain (those signals), we can show that the physical matter is performing something similar enough to computation as we understand it that we can mimic it.

And we know that if we have a black box we don’t understand that emits signals, and we respond with signals close enough to the correct ones that *physical reality provides no reliable way of distinguishing them from the correct signals* (because the ones we send are accurate to within the threshold allowed by noise), then we have replaced the black box with a box we *do* understand.

And if you can do that, you can get brain-equivalent outputs from a machine that isn’t a brain, making what the brain does functionally the same as computation.
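Here’s a toy sketch of what I mean, with made-up numbers rather than real neuron data: if the emulator’s systematic error sits well below the intrinsic noise floor, a single observation is essentially useless for telling the real box and the replacement apart.

```python
# Toy illustration (invented numbers, not a neuron model): an emulator whose
# systematic error is far below the intrinsic noise is, per observation,
# statistically indistinguishable from the "real" black box.
import numpy as np

rng = np.random.default_rng(0)
NOISE_SD = 0.5      # assumed intrinsic noise of the real box
MODEL_ERR = 0.01    # emulator's systematic error, well under the noise floor
TRUE_OUT = 0.6      # hypothetical correct response to one fixed stimulus

real = TRUE_OUT + rng.normal(0, NOISE_SD, 100_000)
emul = TRUE_OUT + MODEL_ERR + rng.normal(0, NOISE_SD, 100_000)

# Best possible single-shot guess: threshold halfway between the two means.
threshold = TRUE_OUT + MODEL_ERR / 2
accuracy = 0.5 * ((real < threshold).mean() + (emul >= threshold).mean())
print(f"single-observation discrimination accuracy: {accuracy:.3f} (0.5 = coin flip)")
```

That prints roughly 0.504, barely better than a coin flip; you would need to average tens of thousands of identical trials to resolve a bias that small.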

So yeah, the signals argument is crucially important, and it’s also obviously correct.

You would have to discover some processing that a synapse does that produces output pulses you can’t reliably emulate with a digital machine in order to disprove it.

As for the higher-level stuff - again, if you built a computer system using neural networks that was even 1% as complex as the brain, with a self-modifying architecture and all kinds of crazy deep connections between layers - you’d probably also notice strange outputs that are hard to correlate to any model of computation you understand.

Even trivial neural networks can easily become a black box to humans.

Anyways, instead of just repeating over and over that “signals” isn’t a valid argument, think about it. Mentally isolate a single synapse. What if you were emulating that synapse badly? How badly would you have to do it before the receiver on the other end could tell you’re “different” than before? If the environment had no noise, any deviation could be detected. But what if all the signals you send and receive are garbled anyway?

And if you can subdivide the brain into trillions of tiny black boxes around each axon, and mentally swap those boxes with equivalent boxes, why would you *not* get the same outcome when you look at how the visual cortex processes things? What principle of physical reality allows the outcome to be different?

Tell you what: you show, from first principles and the knowledge that they would have had in the ’30s and early ’40s, that xenon would be produced by the fission of U-235, would act as a neutron poison, and would have a half-life of a few hours. Then show, with the same knowledge, that there would not be a build-up of other poisons with much longer half-lives that would interfere with the chain reaction to the extent of making a reactor essentially impossible to run.

If I had said, to someone with your level of certainty, that there might be problems in building a nuclear reactor, would you have asked me what laws of physics could exist that could interfere with getting a sustained chain reaction?

It’s not the laws of physics, it is how they end up working together to make more complicated things that serve our needs that is the difficulty.

Now, as far as nanobots go, I see any future in that as being modified cells, not tiny robots. Cellular machinery doesn’t work like macroscopic machinery at all; it’s not servos and actuators, it is hydrophobic and hydrophilic surfaces interacting (far more complicated than that, but that’s a start). That you feel you can use these properties to make little robots that will do your bidding is not a straightforward proposition. It may be possible, but there is no real roadmap to it, nor any real research indicating that it is certainly possible; it’s more of a maybe.

You know, I didn’t actually understand the nature of signals even after I took signals and systems. It was all just a bunch of busywork math and manually computing transforms by hand.

This video actually provides a tremendous amount of insight: D/A and A/D | Digital Show and Tell (Monty Montgomery @ xiph.org) - YouTube

Once you understand it, you’ll realize that a neural impulse is not a square impulse. The edges are blurred. It looks exactly like the frequency-limited signals they show in this video when they demonstrate what a square wave really looks like. Which means the same techniques do apply, including sampling at a finite rate.
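If you’d rather see the idea in code than in the video, here’s a bare-bones sketch with invented numbers (the pulse shape and the sample rate are arbitrary): a band-limited bump, sampled at a finite rate, can be rebuilt at any instant you like from nothing but its samples.

```python
# Bare-bones Whittaker-Shannon (sinc) reconstruction demo with invented numbers.
# The pulse is band-limited by construction (a short sum of sinc kernels), so
# interpolating its finite-rate samples reproduces it exactly between samples.
import numpy as np

FS = 1000.0                          # sampling rate in Hz (arbitrary)
centers = np.array([95, 100, 104])   # sample indices where the pulse's energy sits
amps = np.array([0.4, 1.0, 0.6])     # a smooth, "blurred-edge" bump shape

def pulse(t):
    # The band-limited analog signal we pretend a neuron produced.
    return sum(a * np.sinc(FS * t - c) for a, c in zip(amps, centers))

n = np.arange(200)                   # 200 sampling instants
samples = pulse(n / FS)              # what a finite-rate ADC would record

def reconstruct(t):
    # Rebuild the analog value at an arbitrary time t from the samples alone.
    return np.sum(samples * np.sinc(FS * t - n))

t_probe = 0.10037                    # an instant that falls between two samples
print(pulse(t_probe), reconstruct(t_probe))   # the two values agree
```

Same point the video makes: a finite sampling rate loses nothing, as long as the signal really is frequency-limited.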

Or, in neuroscience terms, I’m saying that since we can put a Chinese Room around each individual synapse - and we can definitely do that - and since physical reality doesn’t let us determine whether a given synapse actually “knows Chinese” or not, we have pretty definitive proof that what the brain is doing is functionally computation.

With chain reactions, no matter how nasty the neutron poisoning happened to be, you could always have increased reactivity to overcome it - even if you ended up with a reactor that is basically just a mass of U-235 gas in a centrifuge at high pressure. The chain reaction is so powerful that you can probably find a way to make it work.

As for nanobots, you’re ignoring that we have made prototypes of motors and gears, and checked the math on more complex little structures that we can’t make yet but that mechanically work.

If you look at nature you see countless sloppy little mechanisms that all definitely work. So you’d have to be really over-skeptical to think we can’t make our own, better mechanisms of the same class that do our bidding.

And I see you just ignored the asteroid-redirection one, because there’s no traction there. We already checked the math on that; it works unless the asteroid is extraordinarily large or you have very little time to react.

You know, I really am on your side philosophically, I just feel that you are a bit too adamant about things that you don’t know, because no one knows them.

I am not here to try to argue with you point for point on all your claims. I really don’t have time for that. I think that you are actually a fairly intelligent and optimistic young man, and that you do have some fun ideas that are worth exploring.

But you try to come across as THE expert in every field, and you are not. There are plenty of people on this board who actually are experts in the fields in which you pontificate, and you could learn a lot from them. Instead, you insult them and claim that you are right and they are wrong, even though you know little more than the first principles of the subject.

Like I said on nuclear, it seems really easy: throw together some radioactive material, and there you go. But as one becomes an expert in that field, one realizes that there are many little things that make it a much less straightforward proposition. That is the part that you refuse to accept, and it is incredibly frustrating.

Try something new: try entering into a conversation with the assumption that you know less than the person with whom you are speaking. Just try it once. I bet you will find that you learn something new, something you never would have learned if you had started the conversation by declaring that you are the expert and that anyone who disagrees with you is wrong.

Just try it once. You may be surprised.

Ah – “something that is similar enough to computation”! IOW, you’re right as you always are, provided we redefine “computation” to mean some arbitrary thing that you just thought of, instead of what it actually means in computer science and cognitive science. :smiley:

I’ll remind you that this particular discussion goes back to here, where you claimed that the brain is just “thousands of physical computational circuits” and then doubled down on the stupid by claiming that all cognitive processes are computational, and apparently everybody knows that – at least, everyone as brilliant as you fancy yourself. And then you started handing out homework assignments.

Turns out, you were wrong. As I repeatedly showed you, there is considerable controversy about whether even some of our mental processes are truly computational, and virtually no serious cognitive scientist believes that computational processes explain all of cognition. I happen to be a proponent of CTM, but it’s based on its power to explain empirical cognitive phenomena and not on absurdly irrelevant arguments about “signaling” properties, which is like “here’s something I just learned about in school, so I’m going to bloviate about it even though it has absolutely nothing to do with the discussion”.

By computation I mean a system that discretizes the analog signal to digital at a finite sampling rate and resolution. It then takes that binary number and consults a truth table for the output. (Note that the output is determined by both the input signals and the internal variable(s); synapses have at least one internal variable, the amount of electric charge.)

It then takes the output of the truth table and converts it back to an analog signal, again at finite resolution and with a finite frequency.

I am saying this hypothetical system could emulate a synapse, and it would not be possible to tell the difference between this system and the “real” brain, so long as the errors this system has are smaller than the errors added by random noise in a “real” synapse.

I could actually build a system that does this, whether from a literal truth table or, more realistically, with the table represented as simple equations in computer source code.
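To be concrete, here’s roughly what I have in mind, written out as source code. Every number in it (the 8-bit resolution, the accumulate-and-fire rule, the threshold) is an invented placeholder, not measured synapse behavior; the point is only the shape of the system: quantize the inputs, compute or look up the next output and internal state, convert back to analog.

```python
# Toy sketch of the ADC -> rule/table -> DAC system described above.
# All numbers here are invented placeholders, not real synapse parameters.
import numpy as np

BITS = 8
LEVELS = 2 ** BITS        # 256 quantization levels
V_MAX = 1.0               # assumed full-scale analog range, 0..V_MAX volts

def adc(v):
    """Quantize an analog value in [0, V_MAX] to an 8-bit code."""
    code = int(round(v / V_MAX * (LEVELS - 1)))
    return max(0, min(LEVELS - 1, code))

def dac(code):
    """Convert an 8-bit code back to an analog value."""
    return code / (LEVELS - 1) * V_MAX

def synapse_step(input_codes, acc):
    """The 'truth table', written as a simple rule instead of an explicit table:
    accumulate the average input into an 8-bit internal variable and emit a
    pulse (then reset) once a made-up threshold is crossed."""
    acc = min(acc + sum(input_codes) // len(input_codes), LEVELS - 1)
    if acc >= 200:                    # hypothetical firing threshold
        return LEVELS - 1, 0          # full-scale output pulse, reset accumulator
    return 0, acc                     # otherwise stay quiet

rng = np.random.default_rng(1)
acc = 0
for step in range(8):
    analog_inputs = rng.uniform(0, V_MAX, size=10)   # ten incoming analog signals
    out_code, acc = synapse_step([adc(v) for v in analog_inputs], acc)
    print(f"step {step}: output = {dac(out_code):.3f} V, accumulator = {acc}")
```

You could swap `synapse_step` for whatever input/output relationship the measurements actually say a synapse has; the architecture of the emulator doesn’t change.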

Don’t challenge a computer engineer on their understanding of computation :slight_smile:

I’m about to pit the OP for making me aware of your existence…

Well, I regret being aware of his existence, so I suppose fair is fair.

From this moment forward I will refer to him as Dry Pasta.

Well that was useful! :smiley:

I don’t argue with experts. I try to learn from them. But you might notice, SamuelA, *that I’m arguing with you*. Why is that, do you think? I’ll tell you why, SamuelA. It’s because it annoys me that you have no idea what you’re talking about. I might accuse you of bait-and-switch on the definition of “computation” if I thought you were smart enough to do that just to win an Internet argument, but I don’t think you are. I think you genuinely don’t understand the topic under discussion.

At the risk of repeating myself, you claimed over here that all cognitive processes are computational, and apparently everybody knows that. I showed you repeatedly why you are flat-out wrong. You are wrong by definition of what the term means in that context.

Now let me ask you some simple questions, SamuelA, because you are so brilliant that we eagerly anticipate the answers:

  1. What do you think the academic discipline of cognitive science is about? What is it trying to discover? And,

  2. What does the “computational theory of mind” mean in the context of cognitive science? Because remember, SamuelA, this is what you claimed was a done deal, and according to you everyone knows that there’s no controversy about this whatsoever. Well, at least you don’t. Everyone else actually does. This, SamuelA, is one of many reasons that you’re regarded as a pompous fucking moron.

Let me help you out a little, SamuelA, before you try to answer those questions, and I’ll keep it simple. The idea of cognitive processes being “computational” borrows from computer science the notion that some mental processes may be Turing-like in that they can be characterized as syntactic operations on symbols. If this is true, then where that applies there must be corresponding algorithms that create semantic interpretations of these symbolic representations, in just the way that computer algorithms do. This has been referred to as “the language of thought”. These are important concepts in CTM, but virtually no one believes that they are a complete explanation of cognition, as many mental processes don’t appear to fit syntactic-representational descriptions, like when the visual cortex appears to be involved in processing mental images.

You, however, lecture us about signals, voltages, and SNR. It’s like trying to explain the methods IBM used to make Watson a Jeopardy champion by saying “it’s all done with transistors”. It’s as if I wanted to attend a course so I could learn how to apply AI to implement a facial recognition system, or build a system for understanding spoken natural language (except that cognitive processes are infinitely more complex) and I ended up in a class listening to some pinhead explain how a transistor works. Why stop there? Really, it’s all done with electricity! Anyone who just understands the basic principles can see how it works! This is classic SamuelA reductionism. You’re not just in the wrong class, SamuelA, you’re lost in the wrong fucking building, but you’re either too clueless or too full of yourself to realize it.

Which also appears to be the case with many other topics you’ve chosen to pontificate on. And that’s why I’m done wasting my time discussing this with you. It will in any case leave you more free time to populate the galaxy with autonomous self-replicating factories, the basic principle of which is of course trivial.

Ok. So let’s flip the problem around. What if you were talking to an advanced version of Watson, and you couldn’t quite figure out how the machine’s responses meet your criteria of “syntactic operations on symbols”?

But you have lots of Watsons, and you tear some down. You find that at a low level there is nothing but really small transistors that you can model. No near-absolute-zero qubits, nothing that isn’t ultimately a Turing machine.

So even if you disagree totally and just want to insult me: if you discover a machine is nothing but transistors that form a Turing machine, do you see how you might conclude it must be modelable and emulatable with a Turing machine, even if you can’t quite see how you get to the machine’s advanced behaviors?

That’s what I am saying and have been saying. I made a detailed technical argument as to why, at a low level, it’s nothing but “transistors” (well, truth tables). Therefore, while I cannot explain how this works out at a high level to give the interesting things we see, I must conclude, based on the overwhelming majority of the evidence, that the brain:

(1) can be modeled by a very large Turing machine in a way that is indistinguishable from the real thing
(2) can probably be scanned and copied as a result (this doesn’t even depend on point 1, actually…)
(3) must be using a really interesting algorithm for (1) to be true and what you are talking about to also be true.

The “overconfidence” you mistakenly think is incompetence is simply that I’m not willing to concede something that is blatantly untrue just to “keep the peace”. Maybe I’d shut up if we were in person and you were more physically imposing than me, but that’s not how the internet works.

If new, peer-reviewed and replicated evidence came in breathlessly showing that the brain communicates using additional physical channels that are not simply concentration gradients in a few spots and mostly just low-frequency electrical pulses, I would change my view instantly.

OK, just one more time, violating my own promise to myself to stop wasting time with the inestimable SamuelA. But this is getting funnier as he digs himself in ever deeper, and it’s a lazy New Year’s day and I’m bored, so what the hell …

I’m not physically imposing and IRL I would have no desire to punch you in the face, SamuelA. IRL, I’d probably just walk away chuckling to myself. But on with the pitting:

I find it fascinating that you not only don’t have a clue about CTM in cognitive science, as has by now been amply demonstrated – in the process of which you snidely dismiss one of the great pioneers in that field (which is just so typically SamuelA) – but, remarkably, you really don’t seem to understand what a Turing machine is, either. Though I don’t know why I’m surprised to see this from the same source that absurdly tries to inform us that the brain consists of “thousands of computational circuits”!

Your ridiculous “signaling” extrapolations have absolutely no relevance to the question of Turing equivalence because they are neither necessary nor sufficient conditions for it. They are not necessary because such systems can be built without them – e.g., Babbage’s Analytical Engine – and, more significantly, they are not sufficient because systems can be built with them but still not be Turing equivalent – e.g., digital circuits in simple electronic calculators.

Such calculators might even be built with exactly the same digital logic gates as computers. It doesn’t matter. That in itself doesn’t make a calculator a computer and it doesn’t make a calculator Turing equivalent. It doesn’t make what they do “computational” in the syntactic-representational sense of a Turing machine or CTM, because they lack the ability to implement algorithms and form semantic interpretations of the symbols they are processing, like the ones on a Turing tape. But you can’t tell that just by looking at the signals between their switching semiconductors. That is far too low a layer of abstraction to expose the system’s computational architecture. The genius of Alan Turing was to cut to the very essence of what a stored-program computer is and what it can and cannot do; the obtuseness of SamuelA is in not understanding it. Perhaps you’re just not expressing yourself very well, SamuelA, but I don’t see any other plausible ways to interpret it, and as such I’d make the following suggestion: if you did actually pay for a computer science degree, you should ask for your money back.

In the case of the brain, to intelligent people who study this field, most of it is still a mystery. But not to you, of course, way over on the left-hand side of that Dunning-Kruger graph. A reasonable person at your junior academic level might think that if they’re that much in disagreement with a foundational contributor to the theory of cognition like Jerry Fodor, maybe it’s because there’s something they’re not understanding. Not you, of course. There’s nothing you don’t understand, and you’re never wrong. If there’s one thing that this pitting accomplishes, I hope it gets you to change your behavior, for your own benefit.

Oh man, is he still going? I’d lost track of him after his lackluster meltdown and redirection of the discussion.

Tripler
I’ve got some reading to do.

Ok, so in between insults, you’re saying that a system that can be broken into arbitrary truth tables is, in your view, not modelable using Turing machines?

Just so we’re clear, you know what a truth table is, right? You write down every possible binary number that can be an input to a system and also every possible internal state the system has.

So if a synapse had 10 inputs, and with research you figured out that you only need 8 bits of resolution for the inputs and a single 8-bit accumulator as an internal variable, the truth table for the system would be a series of columns of numbers expressing every possible combination of inputs and internal accumulator state.
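Just to put numbers on that (the little table at the bottom is invented purely for illustration): with those assumed resolutions the full table is astronomically large, which is why in practice you’d compute the rule rather than store it. Either way it’s a finite function, and stepping through a finite lookup table is exactly the kind of thing a Turing machine, or any ordinary program, does.

```python
# Back-of-the-envelope size of the table described above, plus a tiny invented
# example of "emulating a truth table" by direct lookup.

n_inputs, bits_per_input, state_bits = 10, 8, 8
address_bits = n_inputs * bits_per_input + state_bits        # 88 bits
print(f"full table: 2^{address_bits} = {2 ** address_bits:.3e} rows")
# ~3.1e26 rows: far too many to store explicitly, so you'd compute the rule,
# but it is still a finite function that a Turing machine can evaluate.

# Toy table: 2 one-bit inputs and a 1-bit internal state.
# key = (input_a, input_b, state) -> value = (output, next_state)
table = {
    (0, 0, 0): (0, 0), (0, 1, 0): (0, 1), (1, 0, 0): (0, 1), (1, 1, 0): (1, 0),
    (0, 0, 1): (0, 1), (0, 1, 1): (1, 0), (1, 0, 1): (1, 0), (1, 1, 1): (1, 1),
}

state = 0
for a, b in [(1, 0), (0, 1), (1, 1), (0, 0)]:
    out, state = table[(a, b, state)]
    print(f"inputs=({a},{b}) -> output={out}, next internal state={state}")
```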

Anyways, you are making the claim that you cannot emulate this truth table with a Turing machine?

Are you sure you wish to make that claim? Hypothetically speaking, just for the sake of argument, what if you were totally wrong? If you’re wrong about this one thing, which is kind of a bigger error than misspelling a few words, could you be wrong about your assertion that I don’t know what I am talking about?

Again, obviously you feel super strongly about your hypothesis that I’m ignorant; I’m just asking if you are even capable of admitting fault if it turns out you’re wrong. Because as it so happens, any truth table can be emulated by a Turing machine, but I’d have to produce an article clearly stating that, and I don’t see any reason to bother if you are simply so self-assured you’re right that you just don’t care.