Technology doesn't work that way - SamuelA's Pit Thread

Even if you could somehow design this thing, how are you going to build it? The challenge of climbing Everest isn’t a technical one; it’s surviving in a climate that’s incompatible with life. It’s about getting up and back as fast as possible, and a lot of people don’t make it. Most of the climbers who die on Everest die on the way down; they get so ill they just can’t keep going.

How are you going to keep anyone at the top long enough to do any work? And how are you going to supply the construction force? You don’t have your real-life version of Disney’s Matterhorn ride up and running yet. And a construction project of this scale would make Everest ten times the ecological disaster that it is now.

Not to mention that Everest is located on a contentious international border. And the mountaineering industry is essential to the economy on both sides. The local political environment would add a level of trickiness to this already impossible dream.

I can’t believe that this is what I’m entertaining on Saturday night, but here we are.

This one is 3883m at its highest point. Workers can survive at that altitude. Everest is over 8000m.

In stages, like I said in the OP. It wouldn’t run as a continuous cableway, at least at first. You build a cableway a few hundred meters from your source of supplies, probably Lukla, then a few hundred meters more. Also, you might build a smaller, temporary cableway which you just anchor with tension. With the right planning and preparation, even melting a hole in the glacier ice and anchoring to the rock underneath could be used, temporarily, to transport in the heavy equipment for the large, permanent towers like they use on the Matterhorn, anchored to permanent geological features that don’t see ice flow.

And the previous stages are where you get your supply of filled oxygen, or filled air, or bunny suits; I’m not committed to any particular type of protective suit without a detailed study of the tradeoffs.

But you use climate-controlled, protective suits, supplying the workers with pure oxygen or the appropriate gas mix for the pressure. You might use pressurization, or constricting-band pressure suits, which do exist; the pilots of the U-2 spy planes used to wear them until they switched to full space suits.

This is a straightforward problem. It’s well within human capability. Now, yes, your bolded objection is noted. There might not be a way to accomplish this you would consider “ecologically friendly”, though I have to ask: what is your basis for it being an “ecological disaster”? How much life is even up in that frozen high-altitude wasteland for human trash and activity to disturb?

We can now add “severe reading comprehension deficit” to Sammy’s other obvious talents. Not only did I not make any such inane statement in that GD thread about consciousness (or anywhere else), but Sammy’s comprehension-impaired delusion of what I supposedly said isn’t even coherent – it’s just a meaningless string of words. I challenge anyone to even try to guess at any kind of meaningful interpretation of that quote. It’s gibberish, much like most of what seems to go on in Sammy’s brain.

Some might recall that we had a similar discussion with Sammy right here in this thread about the computational theory of mind. At that time I was apparently more patient and certainly more naive, as I wasted a fair amount of time trying to explain things to Sammy, not realizing at the time how completely futile this always is. Anyone who even remotely grasps the principal idea here will understand that it has no relation to whatever Sammy is incoherently gibbering about, and that the main contention of the computational theory of cognition is that cognitive processes are syntactic operations on mental representations (generic symbols), and hence inherently computational and independent of the physical substrate that implements them. As I clearly explained when someone asked for a definition of just what a “symbol” is:
A “symbol” is a token – an abstract unit of information – that in itself bears no relationship to the thing it is supposed to represent, just exactly like the bits and bytes in a computer. The relationship to meaningful things – the semantics – is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.
https://boards.straightdope.com/sdmb/showpost.php?p=21648677&postcount=88

I would love to understand the chain of logic that would lead anyone to read this as “symbols [are] aware of their own values”. :rolleyes:

And this is why, although sometimes I try to restrain myself from dumping on Sammy too badly, sometimes I think he just deserves it.

Gotta love the meta-questions to which Sammy is now turning his prodigious intelligence! We have hitherto been treated to seeing our hero solve most of the major problems of the world, generally with swarms of self-replicating nanobots – and always with the word “just” inserted in there somewhere to show how marvelously simple it all is, if only everyone would listen to Sammy and do exactly what he says. I think my favorite so far is how the whole climate change problem can be easily solved with “just” a bit of geoengineering, and bam! All fixed! Probably overnight! If it turns out it’s going to take a while, I suppose we can all cryogenically freeze ourselves for a hundred years or so while things sort themselves out. Sammy has the problem solved, too.

Now, this boundless intellect has turned its attention to the meta-problem of how to solve problems; that is, how to do science. It turns out – and this will come as a surprise to most scientists – that the way we have been doing science for hundreds of years is all wrong. Sammy has a better way. I don’t claim that I really understand it – but then, I lack the aforementioned prodigious intellect emanating out of that gigantic throbbing brain – but the essence of it seems to be that if we did science Sammy’s way, we could get rid of this nonsense of multiple competing theories and make scientific advances marvelously faster than we do today.

I don’t think I can really improve on Colibri’s comment to that: “the combination of ignorance and arrogance is truly breathtaking”.

As for the cat, all I can say is I feel sorry for it. If I was a cat living with Sammy, I’d probably pee all over the place, too, just out of spite. The cat may not intellectually understand the odds of being dealt such a shitty hand, but it must have a kind of instinctive understanding of the basic facts: nearly 8 billion people in the world, and it ends up with Sammy. I suggest having the cat cryogenically preserved until medical science has a solution to its understandably traumatized emotional state, a technique that Sammy assures us is 100% reliable, and has no doubt already laid plans for having his own enormous brain preserved for posterity.

Three comments.
a. I agree, I was butchering the nonsense you blather about.

The relationship to meaningful things – the semantics – is established by the logic of syntactical operations that are performed on it, just exactly like the operations of a computer program.

I have no fucking idea what this is supposed to mean. I *think* you’re trying to claim it’s an unstoppable obstacle to a computer emulating a human brain, at all, in any situation, but I don’t know why. If you don’t know how a human brain works at a high level, and I don’t know how it works at a high level, and all the world’s neuroscientists do not yet have enough empirical evidence to know how it works at a high level, then how can you claim it can’t be emulated by a Turing machine?

So you just spout nonsense. I apologize for parroting your nonsense badly. You have made this “a computer can’t do what a brain does” argument in probably 50+ posts in this thread, plus a bunch in the other threads on the subject, and I freely admit, I don’t understand your argument. Other than that it’s obviously nonsense.

And the reason I know that is that I, many posts ago, established a model for the low-level parts of the human brain, which is supported by all present evidence, and established that a Turing machine can emulate such a system. Everyone who argues with you on the subject keeps telling you the same thing.

b. Quote a single post where I talk about self-replicating nanobots as a solution to anything. I frankly don’t recall ever suggesting it, even once. This is why I get irked by people bringing it up, as these are an example of something that won’t work. (Eric Drexler’s ideas are not nanobots as you think, nor are they self-replicating in the way you think.)

c. The other reason it pisses me off is that when I talk about self-replicating robots, I never, ever, ever mean “nanobots”. I mean fucking machines with “Hitachi” and “Foxconn” stamped on them in vast factories, using more advanced forms of control (machine learning) than present methods, so they are a ton smarter and more flexible. As in, our real-world future that most of us here will still be alive to see.

Of course you don’t know what it means, and nor does it have anything to do with your ridiculous inference, and nor could any rational person see how you could possibly make such a ridiculous inference from that statement. More comprehension impairment, apparently. It’s been clear to me for some time now that despite all your bloviations, among the many things of which you have zero understanding are some important fundamental theories underlying computer science. What you appear to have knowledge of, if anything, is the vocation of “computer programming” rather than the science of computer science. It’s like the difference between being a research hydrologist and being a plumber. I suppose that explains a lot about some of your idiotic pontifications about things you know nothing about. It’s telling that earlier on in this thread you had to defend your interpretation of what “computational” meant by looking it up in Wikipedia, and then you got it wrong anyway.

You claim to have a Master’s in compsci, and while some folks here might not believe you, I do. The reason I do is that I once had a guy working for me on a project whose “contribution” (and I intentionally put that word in scare quotes) was going to be the basis of his compsci M.Sc. project at a major university. This was decades ago and I still remember him as the most ineffable moron I have ever been stuck with, on that project or any other. His value was literally negative, because he took up my time and contributed nothing. And he did eventually get his degree. Whether or not you’re brighter than him I can’t say. I would say that you’re definitely more dangerous if left unsupervised, and that could be even worse. He would have been afraid to try things he didn’t understand. You’d be more likely to plow right ahead and in a single night of unsupervised mayhem somehow set the project back three years.

Raven self-replicating nano-bots. Gotta get the technology right.

I’m adding glacial behavior to the list of things that Sammy knows nothing about.

Latest proof that SamuelA is a fucking idiot:

https://boards.straightdope.com/sdmb/showpost.php?p=21688476&postcount=13

Well I’m willing to listen. What is this theory I am missing? How is it relevant to your position? Why does it disprove my inference?

You’re all hat and no cattle so far. I’ve given you some serious cattle. I have explained that individual nerve synapses are an analog system with noise and timing errors. I have explained how fundamental theorems of signal processing mean that finite-resolution sampling can capture all of the information of a finite (real) analog system. I have explained how, if you capture all the input information, and you capture all the meaningful (not simply noise) rules applied by an analog system, any Turing machine with sufficient memory can emulate the behavior of that analog system, with sufficient resolution that you could “drop in” such a replacement if you wished.

This is how audio reproduction, noise canceling, radar, sonar, scanning electron microscopes, missile guidance systems, and so on all work. All of them use these signal-processing theorems, and they all work as expected.
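To make the sampling claim concrete, here’s a minimal sketch in Python (the signal, rates, and durations are toy values picked purely for illustration, not a model of a synapse): sample a band-limited signal above its Nyquist rate, then reconstruct it with Whittaker-Shannon (sinc) interpolation.

```python
import numpy as np

# Toy demonstration of finite-resolution sampling capturing a band-limited
# signal: sample above the Nyquist rate, reconstruct by sinc interpolation.
fs = 100.0                       # sampling rate, Hz
n = np.arange(200)               # sample indices (2 seconds of signal)
t_s = n / fs                     # sample times

def analog(t):
    # a band-limited "analog" signal: two tones, both well below fs/2
    return np.sin(2 * np.pi * 3.0 * t) + 0.5 * np.cos(2 * np.pi * 10.0 * t)

samples = analog(t_s)            # the finite-resolution capture

# Whittaker-Shannon reconstruction on a much finer grid:
#   x(t) = sum_n x[n] * sinc(fs*t - n)
t_fine = np.linspace(t_s[0], t_s[-1], 5000)
recon = samples @ np.sinc(fs * t_fine[None, :] - n[:, None])

# compare away from the window edges (truncating the infinite sum
# causes some error near the ends)
interior = (t_fine > 0.25) & (t_fine < t_s[-1] - 0.25)
err = np.max(np.abs(recon[interior] - analog(t_fine[interior])))
print(f"max interior reconstruction error: {err:.3e}")
```

Amplitude quantization isn’t shown here; the point is just that the sampled values determine the signal between the samples.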

Then, using a fundamental computer science technique (known as divide and conquer): since the human brain is a system of trillions of synapses interconnected by finite-resolution analog signals, *if* you can emulate each piece, you can emulate the whole thing.
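A minimal sketch of that divide-and-conquer composition (a toy network with made-up dynamics, not a model of real neurons): emulate each unit separately, and the whole system is just all the pieces stepped together.

```python
import numpy as np

def unit_step(inputs, weights, state):
    # emulate one "piece": a toy leaky unit with a nonlinearity
    return 0.9 * state + np.tanh(np.dot(inputs, weights))

def network_step(x, W, states):
    # emulating the whole system = stepping every emulated piece
    return np.array([unit_step(x, w, s) for w, s in zip(W, states)])

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 3))      # 4 toy units, 3 inputs each
states = np.zeros(4)
for _ in range(5):
    x = rng.normal(size=3)       # finite-resolution input signals
    states = network_step(x, W, states)
print(states)
```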

It does seem you would need at least a partially emulated body, so that all these input and output symbols you harp on about would have somewhere meaningful to go. Since quadriplegics remain sentient, it appears that you would not need a complete body.

Where’s your cattle? All I ever see from you are rants and endless walls of text on how I’m too stupid to understand anything. Or how, because I frankly don’t understand your handwaving about cognitive science, I must be wrong about the low-level theories I am familiar with.

Quoting myself,

" Obviously, for futures where your new phone fails before the depreciation on your old phone would have exceeded the inefficiency of selling it (shipping and ebay fees), you do not come out ahead."

If you are unable to understand this statement (hint: you can write an equation based on it), you are not qualified to make such statements.
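Spelling that equation out as a minimal sketch (the symbols below are my own labels, not from the original post): let $I$ be the inefficiency of selling (shipping plus eBay fees) and $D(t)$ the depreciation the old phone would have accumulated by time $t$.

```latex
% Break-even for "sell the old phone, buy the new one"
% (symbols are illustrative labels, not from the original post):
%   I      = shipping + marketplace fees (inefficiency of selling)
%   D(t)   = depreciation the old phone would have suffered by time t
%   t_fail = time at which the new phone fails
% Define the break-even time t* by D(t*) = I. Assuming D is
% increasing in t (which depreciation is), then:
\[
  \text{come out ahead} \iff D(t_{\mathrm{fail}}) > I \iff t_{\mathrm{fail}} > t^{*}
\]
% i.e., if the new phone fails before t*, you do not come out ahead.
```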

OK, then listen to this. The statement that left you so slack-jawed had absolutely nothing to do with “… trying to claim it’s an unstoppable obstacle to a computer emulating a human brain, at all, in any situation”, which is, moreover, a claim that I was totally arguing against in that entire GD thread. Your reading comprehension is truly atrocious, verging on illiteracy, and, judging by the irrelevant ranting in the rest of that post, so are your reasoning skills.

The meaning of the statement that left you so clueless is important but not at all complicated. It’s astounding that it needs to be explained, especially to a self-declared genius like yourself. Most of us understand the distinction between syntax and semantics in common parlance. We understand the difference between the orthography of a sentence and its meaning.

When you digitize an image, say, the result is a series of numbers, ultimately 1s and 0s. Those are symbols. They have no intrinsic relationship to anything visual, and are not intrinsically distinguishable from any other 1s and 0s in the computer’s memory. Semantics, in this aspect of computational theory, is the property attributed to those symbols by an agent or process that makes them the useful building blocks of an image, such as interpreting them as a matrix of pixel values, or in a different context perhaps as a string of sampled audio values, or as something else that has real-world meaning, and that instantiates the appropriate semantics to produce useful results. In computers, and in the brain, at least for many cognitive processes, the semantics comes from the way we process symbolic representations, and crucially is not present in the symbols themselves. In cog-sci-speak, according to this theory, which is central to CTM, the pertinent memories are said to be representational (symbolic), not depictive.
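A minimal illustration of that point in Python (the sizes and dtypes are arbitrary choices of mine): the very same bytes can be read as pixel values or as audio samples, and nothing in the bits themselves says which.

```python
import numpy as np

# The same raw bytes carry no intrinsic meaning; the "semantics" comes
# entirely from how a process chooses to interpret them.
raw = np.random.default_rng(0).integers(0, 256, size=64, dtype=np.uint8).tobytes()

# interpreted as a tiny 8x8 grayscale image (a matrix of pixel values)
pixels = np.frombuffer(raw, dtype=np.uint8).reshape(8, 8)

# the identical bytes interpreted as 32 signed 16-bit audio samples
audio = np.frombuffer(raw, dtype=np.int16)

print(pixels.shape, audio.shape)  # (8, 8) vs (32,): same bits, different meaning
```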

Please try to get into your comprehension-impaired awareness the fact that I have zero interest in once again getting into a debate with you over these issues. Once was more than enough, believe me.

Ok, so in our crude “neural nets”, we start with input images, where the bits mean pixel intensities for a color channel, and each layer transforms them so that the bits become the intensity of a feature that layer was looking for.

And later on, the features may be processed into abstract “state” representations; for example, a neural network trained to solve a video game may use later layers to represent game state.

I am not sure if this transformation of inputs is what you meant or not, since ultimately the information from the input came down a specific programmed connection path. Though sensor fusion would involve multiple inputs mapping to a common state-space. The “labels” haven’t been lost, however: if my computer stores the bits for an image, it knows it’s an image because of where they are located in memory. If the brain stores a map of the environment, it knows it’s a map by the physical region where it’s stored.
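Here’s a minimal sketch of that pixels-to-features transformation (a toy image and hand-picked kernel, not a trained network): a single convolution turns raw intensities into the intensity of one feature, vertical edges.

```python
import numpy as np

# One convolutional "layer" by hand: pixel intensities in, feature
# intensities out (here the feature is "vertical edge").
img = np.zeros((6, 6))
img[:, 3:] = 1.0                       # toy image: dark left half, bright right half

kernel = np.array([[-1.0, 0.0, 1.0]])  # 1x3 vertical-edge detector

h, w = img.shape
kw = kernel.shape[1]
feature = np.zeros((h, w - kw + 1))
for i in range(h):                     # "valid" convolution, row by row
    for j in range(w - kw + 1):
        feature[i, j] = np.sum(img[i, j:j + kw] * kernel)

print(feature)  # peaks exactly where the image goes from dark to bright
```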

Assuming this is what you meant, how is this relevant to a discussion of whether or not it’s *possible* at all to emulate a brain or “upload” one while it’s still alive? My position has always been that emulation appears to be possible with currently available evidence, and uploading *might* be possible, but whether it is possible or not primarily depends on whether a machine interface could ever be constructed that wouldn’t be rejected by biology.

Theorem-wise, as the brain is a distributed network, information can traverse from one part of the network to another. Therefore, if you could artificially extend the network, you could in principle capture, in a digital system, information from such a network. This does not necessarily mean you could upload someone’s complete memories and personality, but at a minimum, a sniffer that can copy all visual input or motor outputs is theoretically possible. And this part isn’t theory; it’s been demonstrated in primates in thousands of separate experiments, albeit with obvious limits due to the crudeness of modern-day equipment.

It’s taking longer than we thought. :frowning:

You could say it’s proceeding at a glacial pace.

I actually like SamuelA. I hope he doesn’t block me, because I genuinely want to know what happens to his kitty.

If you buy your phone back for less than what you paid for it, then you are a clueless fucking idiot. But you go on fucking that chicken.

Aaargh! I meant to say, “if you buy your phone back for less than what you sold it for.” My meaning still stands. You’re still a fucking idiot.

Sorry, man. If you buy your phone back for less money, you do come out ahead. I blame not being able to go to sleep for my fuzzy thinking. But why the hell would you do that, anyway? Anyway, I’m glad I’m not your cat.

Man alive, I can’t swing a urinating cat in a thread without hitting a hornet’s nest!

Okay, I’ll put the ten seconds into this that it deserves. . .

Here, in this very thread, you espouse the “actual world renowned experts” as offering a solution to anything.

Tripler
I’m glad I’m not your cat.

Shrug. I think I was just arguing that a variant of the technology is possible, which it is. But that doesn’t mean it would be the usual solution; we don’t use a chip fab to make cardboard boxes.