I don’t know who that is, but he must have left quite the impression on Sam.
Pick any one. Go from there. . .
The machines you describe are dependent on the data fed to them. For example, if the Soviets/Russians decide to feed bad data into the machine, then you’ll have bad output.
I have no idea what you are talking about in the context of the current conversation. You’ve leapt from plumbing to neural analysis of potato chip bags without any reason. Responding to your one idea though, a human must program that machine to “know” a bag of chips crumples. A human, with all of his/her emotional imperfections, will program that machine. And you, as a computer programmer, must appreciate that.
Tripler
Can we at least agree to hate the Soviets?
Ok, so let’s say the classifier says the environment is state S0. That’s what the classifier thinks is true “right now”. S0 is just a tuple of several matrices, some for position, some for geometry, some for color, some for velocity, etc.
The simulator/predictor is a neural network, such that Predictor_Convolve(S0) = predicted state at t0 + dt. That is, it’s making the prediction that after a small amount of time, there will be a new state.
You can obviously keep re-running the predictor and the predicted states are going to become increasingly uncertain for moving objects and stay pretty firm for stationary objects.
The key trick is that after dt actually passes, you feed what the environment actually did back to the predictor. And you adjust its matrix of numbers in a way that will cause it to give more accurate predictions next time.
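In rough, made-up Python, the predict-then-correct loop looks something like this. It’s only a sketch under big assumptions: the state is pre-flattened into one vector, the “network” is a single linear layer, and names like predict and update_predictor are invented.

```python
import numpy as np

STATE_DIM = 16          # size of the flattened state vector (positions, velocities, ...)
LEARNING_RATE = 1e-3

# A deliberately tiny "predictor": one linear layer, S(t0 + dt) ~ W @ S(t0).
W = np.eye(STATE_DIM) + 0.01 * np.random.randn(STATE_DIM, STATE_DIM)

def predict(state):
    """Guess what the state will look like one dt from now."""
    return W @ state

def update_predictor(state, actual_next_state):
    """After dt has really passed, nudge W so the prediction error shrinks."""
    global W
    error = predict(state) - actual_next_state          # prediction residual
    W -= LEARNING_RATE * np.outer(error, state)          # gradient step on squared error

# Toy loop: the "environment" here is fake random drift so the example runs on its own.
state = np.random.randn(STATE_DIM)
for step in range(100):
    guess = predict(state)                                # what we think dt from now looks like
    actual = state + 0.05 * np.random.randn(STATE_DIM)    # stand-in for the real sensors after dt
    update_predictor(state, actual)
    state = actual
```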
Then the other key component of this system is a planner. This is a system that guesses possible paths that might accomplish your goal. So if it’s “shove the red ball to the left touching nothing else”, the “goal” is just a matrix of numbers that contains a shift to the red ball position. The planner will come up with possible guesses as to sequences of robotic arm motions that might accomplish what you want.
The planner’s guesses get optimized by comparing them against what the predictor thinks will happen.
And then the system picks the best path and does it. It uses the results from that path to update the planner.
Given enough data, the planner has “machine intuition”.
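A minimal sketch of that planner idea, again with invented Python names: a “random shooting” planner that guesses lots of action sequences and keeps whichever one the predictor scores as ending closest to the goal.

```python
import numpy as np

def rollout(predictor, state, actions):
    """Predicted final state after applying a sequence of actions."""
    for a in actions:
        state = predictor(state, a)
    return state

def plan(predictor, state, goal, n_guesses=500, horizon=5, action_dim=3):
    """Random-shooting planner: guess many action sequences and keep the one
    the predictor says ends up closest to the goal state."""
    best_cost, best_actions = np.inf, None
    for _ in range(n_guesses):
        candidate = np.random.uniform(-1, 1, size=(horizon, action_dim))
        final = rollout(predictor, state, candidate)
        cost = np.linalg.norm(final - goal)           # distance from the desired state
        if cost < best_cost:
            best_cost, best_actions = cost, candidate
    return best_actions

# Toy demo: a "predictor" where each action just nudges the state directly.
toy_predictor = lambda s, a: s + a
start = np.zeros(3)
goal = np.array([-1.0, 0.0, 0.0])     # e.g. "red ball shifted to the left"
best_plan = plan(toy_predictor, start, goal)
```

After the best sequence gets executed for real, the actual outcome goes back into the predictor update from the earlier post, which is the feedback loop that improves both pieces over time.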
This is where this starts to really work. These algorithms need not be even a tiny fraction as good as human brains. But if you can give them the collective experience of a million separate robots working for 1 year, that’s a million years of experience. Or maybe 1000 real robots and 999000 simulated robots. Either way, this vast pool of data will mean that the predictor has truly “seen everything”. The planner has tried many, many strategies and knows for a given configuration what type of things are actually going to work.
This is why you get superhuman performance. Your machine has far more experience doing what it does than any human alive. Also, it always does its best. At all times, it’s faithfully working out the optimal answer from the data it has. It never gets tired or angry or bored.
You can see how this type of algorithm slowly gains on humans. You could build one that knows how to fight jets in a dogfight. It has millions of years of experience in aircraft simulators and a smaller amount of real flight time. So it’s always going to be calculating the path that optimizes its chance of victory, but doing so using expected value calculated from the sum of all the outcomes that typically happen in a given scenario.
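The “expected value over typical outcomes” part is just averaging simulated results per candidate action. A toy illustration, with a made-up simulator and made-up win rates:

```python
import random

def simulate_engagement(maneuver, rng):
    """Stand-in for a flight simulator: returns 1 for a win, 0 for a loss."""
    win_probability = {"climb": 0.55, "turn_into": 0.62, "dive": 0.48}[maneuver]
    return 1 if rng.random() < win_probability else 0

def expected_value(maneuver, n_rollouts=10_000, seed=0):
    """Average outcome of many simulated engagements starting with this maneuver."""
    rng = random.Random(seed)
    return sum(simulate_engagement(maneuver, rng) for _ in range(n_rollouts)) / n_rollouts

# Pick the maneuver whose average simulated outcome is best.
best = max(["climb", "turn_into", "dive"], key=expected_value)
```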
You need to go basic for me. What exactly is a “classifier”? Prediction states and the rest of it rely on this basic definition.
Tripler
To me, they’re the folks that say I can’t say stuff.
It’s a neural network/digital filter that goes from sensor data to “what is the state of the environment and how uncertain am I about it”? Image recognition classifiers are some of the most common but the same algorithms work on other sensor types as well as multiple sensors combined.
So in my example of a “rube goldberg constructing robot”, the machine has several cameras and a lidar. There is, say, a red ball on the table, a chip bag, and a gear. The classifier converts the large digital video frames from the robot’s cameras to the following (there’s a rough code sketch after the list):
[objects found]
[identities for each object]
[positions in 6 axes for each object]
[velocities for each object]
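For concreteness, the kind of structured output I mean would look something like this. The field names and the classify() stub are invented for illustration, not any real system:

```python
from dataclasses import dataclass
from typing import List
import numpy as np

@dataclass
class DetectedObject:
    identity: str            # e.g. "red_ball", "chip_bag", "gear"
    confidence: float        # how sure the classifier is about the identity
    pose: np.ndarray         # 6 values: x, y, z, roll, pitch, yaw
    velocity: np.ndarray     # 6 values: rates of change of that pose

def classify(camera_frames: List[np.ndarray], lidar_points: np.ndarray) -> List[DetectedObject]:
    """In a real system this would be a trained neural network; here it is a
    stub that just shows the shape of the output state."""
    return [
        DetectedObject("red_ball", 0.97, pose=np.zeros(6), velocity=np.zeros(6)),
        DetectedObject("chip_bag", 0.91, pose=np.zeros(6), velocity=np.zeros(6)),
        DetectedObject("gear",     0.88, pose=np.zeros(6), velocity=np.zeros(6)),
    ]
```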
That entire pontification is meaningless technobabble. I’m always amused when SamuelA launches into explaining how the brain works by comparing neurons with computer logic gates, and there you are. Remember, the brain is just a computer, because… signals! The brain executes branch instructions, according to SamuelA. He can not only tell us exactly how it works, he can even predict the performance of the future electronic brain compared to the human mind (“2 million times quicker, give or take”). That sort of prescience is, needless to say, breathtaking. Only not in a good way.
In fact, modeling how the mind really works is something that cognitive science is only in the most primitive early stages of even beginning to understand. What we do know is that the computational paradigm is only a small – albeit important – part of the theory of cognition. Of course it’s “simple” in SamuelA’s world – so is everything. Simple, and wrong. The brain is “computational” only in the most trivial, unscientific sense of the word.
Modeling how the mind works is also of only marginal relevance to AI, as the most effective practical AI’s today have been built by applying a wide range of different technologies and heuristics that have practically zero relationship to how the mind may or may not work. Their potential has also been consistently overestimated. When the first language translation systems appeared, it was widely believed that language would soon cease to be a barrier to human communication and human translators would all be out of business. And then someone tried translating “The spirit is willing but the flesh is weak” into Russian, and the machine rendered “The liquor is good but the meat has gone bad”, which became emblematic of the magnitude of the contextual problem and AI over-optimism in general.
To say that SamuelA’s ideas about the brain and AI are gross oversimplifications would be incorrect. They are in the category of “not even wrong”, completely missing the fundamental nature of the problem. The best way I can describe it is that if there had been a SamuelA a couple of hundred years ago, he would be positing that the future of aviation will be premised on dipping yourself in glue, covering yourself with feathers, and flapping your arms real hard. It’s simple!
To be clear, I believe in the future of computational intelligence and we’ve made big strides since the early days of AI, but we have a very long way to go. IBM’s DeepQA project, for example, has impressive potential but like all other AIs it still operates only in very narrow domains of competency, and has to be painstakingly trained in each one. Further, it’s almost impossible to predict the trajectory that emergent systems will take, and even less so their societal impact. No, it isn’t “simple”.
I thought that was the correct translation for Russian.
Remember, in Mother Russia, AI translates you!
I’m a latecomer, I read the first post and then SamuelA’s first post in the thread and then this page.
Seems like his primary point is that:
if we had the physical details of a brain, down to the appropriate level, that we could use it to build a simulator that functions substantially the same as the original.
That position seems like a reasonable position if we make the assumption that we don’t need to simulate down to the level of quantum interactions (I assume that becomes problematic) and if we assume our behavior is based on physics/energy, not some unknown component like a “soul.”
Is the primary disagreement with how many decades or centuries it will take for humans to be able to capture the state of a brain?
Or is the disagreement that the level of detail required is so great that capturing the state will also alter the state so the result would be invalid?
Agreed.
Not following this point, seems like it’s all computation. Can you expand on this point for clarification?
The disagreement (well, for me) is the idea that freezing a brain would ever, ever leave the state of the complex chemical reaction known as a “mind” intact and recoverable.
For me, the disagreement further is that we should go ahead and kill folks now, instead of letting them die, so that we can save some of the data. We don’t know how the brain works. You and he may think it’s computational, but it’s not a model that we’ve developed yet. We don’t know how dementia works. We don’t have a way to save/store a brain. We don’t know how to re-animate a brain. But, sure, let’s go ahead and skip to the “inevitable” and start killing folks now to “save” something, when we know nothing at all about what we’re doing. All of this because to Sam brain = computer. Maybe someday it does. Today it does not. Today it’s tech we cannot replicate, and when it stops working, it’s gone.
What he proposes is not inevitable, in this, or any number of other areas. Hand-waving away discussion because something will happen, with no thought to how, or why, is ludicrous. Refusing to talk to posters who question him is childish.
We do it for embryos today, so from a cell preservation perspective it seems to work, but the brain is on a much larger scale.
So it sounds like you feel that cell preservation alone would not be enough. That the electro-chemical soup they reside in needs to be preserved also?
That seems accurate for preserving the exact point-in-time state, but for preserving general state (e.g. personality), it’s possible that could be re-created/re-balanced from the state of the cells that naturally maintain those things. Kind of like waking up in the morning, but it might take 5 days of waking up for the system to get back in harmony (maybe).
Although an additional complexity is that to simulate on the computer, you would probably need to get down to the neuron DNA methylation level of detail, because that drives synapse maintenance after learning and probably many other things. Meaning if you ignored it, you might have all the synapses mapped, but the system would not maintain them, because the flag that says to maintain them is set in DNA methylation.
THANK YOU. God damn it was getting on my nerves, being just basically Wolfpup arguing bullshit and always interpreting every post like I was a Soviet Spy, and then the fucking moderator was taking his side, and there are like 10 other morons in here who keep just parroting random shit and who don’t take the argument seriously.
I mean we might both be wrong, Raftpeople, it might not be “just computation”, but that’s what the evidence says at the present time!
Agreed, that’s a tiny bit optimistic.
You know how you can actually do quite a lot of things to someone’s brain, and if they go on living, they do have about the same memories and personality. You can shut it all down with anesthetics. Cut the blood supply and use cold water as blood. Throw all kinds of drugs that have a profound effect on specific neurotransmitters. Destroy whole sections. And for the most part, most of their memories and personality stay the same, and people can, within limits, even compensate for missing portions.
So I think this evidence indicates that the synaptic weights (you measure them by simply counting how many receptors are on the receiving cell and what state they are in) and the wiring topology are probably all you actually need. The “soup”, the myelination states, etc. are probably all temporary. Like starting up a computer system again where you’ve cleared the RAM, but the hard disk is the same, and this computer system is very robust, where you can scramble a random 30% of the bits on the hard disk and it will still run the same as before.
You would physically do this count by tagging the receptors with a molecule that will be visible on an electron microscope and is specific to the type of receptor. So your reconstruction only needs to recognize the rough shapes of the actual axons and the probable destinations (for the topology), and there’s a strength estimate from counting the amount of tags of a particular type at a synapse. Probably all the rest of the information doesn’t even matter.
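In other words, the reconstruction boils down to a weighted directed graph, with each weight estimated from tag counts at that synapse. A toy sketch with invented neurons, receptor types, and numbers:

```python
from collections import defaultdict

# (presynaptic neuron, postsynaptic neuron, receptor type) -> tag count from imaging
tag_counts = {
    ("n1", "n2", "AMPA"):   120,
    ("n1", "n2", "GABA_A"):   5,
    ("n3", "n2", "AMPA"):    40,
}

# Invented sign per receptor type: excitatory counts add, inhibitory counts subtract.
receptor_sign = {"AMPA": +1.0, "GABA_A": -1.0}

weights = defaultdict(float)
for (pre, post, receptor), count in tag_counts.items():
    weights[(pre, post)] += receptor_sign[receptor] * count

# weights[("n1", "n2")] is the estimated net synaptic weight from n1 to n2.
```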
And even if it isn’t, that doesn’t matter. Minds are about change. If you can get even sorta close, I think a person could re-learn everything, much like post-stroke someone can re-learn basic tasks. Except if their brain is no longer squishy, inaccurate flesh, but is neural weights in a very large and very fast and accurate computer, it would be like re-learning everything when you have an IQ of 300.
My (non-confirmed, because nobody knows) suspicion is that the mind is a process, and that slicing up a preserved brain can no more revive that process than slicing up the CPU and RAM of a computer can restore the game you were playing when the power went out, no matter how fine your microtome slices.
Is it? What if the 95th percentile outcome for their illness and present stage is 1 week to live? Or 1 month? You’re trading off a small amount of remaining lifespan where the person is probably in a lot of pain and fear for a non zero positive outcome.
I mean, there’s no negative outcome. If the cost to freeze them and for 300 years of coolant is less than the cost of 1 week of ICU care and the cost of a funeral, then you’re ahead financially*. Even if you can’t ever revive them in 300 years, it’s still a positive EV, and you measure the chance that you can revive them times the utility if you succeed, and that’s obviously a large positive number, however you weight it.
*the coolant is about $1000 a decade, $10k a century, and that neglects the fact that you could invest money at 1% interest to pay for coolant next century.
There are also site and security costs, but you could easily put a million patients per underground complex and thus get the cost down considerably.
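Back-of-the-envelope check on those numbers (only the coolant cost and the 1% rate come from the footnote above; the ICU figure below is a placeholder): at 1% interest, $100/year of coolant is covered indefinitely by a roughly $10,000 endowment.

```python
coolant_per_decade = 1_000        # dollars, from the footnote above
annual_rate = 0.01                # the 1% interest rate mentioned above

coolant_per_year = coolant_per_decade / 10            # $100/year
endowment_needed = coolant_per_year / annual_rate     # perpetuity: ~$10,000 covers coolant indefinitely

icu_week_cost = 50_000            # hypothetical placeholder for one week of ICU care

print(f"Endowment to cover coolant indefinitely: ${endowment_needed:,.0f}")
print(f"Hypothetical one week of ICU:            ${icu_week_cost:,.0f}")
```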
While technically true that we don’t have a model developed, there aren’t too many options:
1 - follows a set of rules (e.g. physics)
2 - random (e.g. quantum)
3 - something else outside of nature
Would you argue that the brain relies on something other than physics?
No it isn’t, it’s just not technically true that we have proven our model beyond all doubt by creating an emulation that does exactly the same thing. Yet. We *have* a model. There are synapses. They fire when they reach a voltage threshold. Signals are all or nothing. They travel at speeds much less than lightspeed, limited by retransmission nodes. Each synapse has a weight and either adds or subtracts from that voltage level. Some synapses are connected to glands that can emit something that goes brain-wide and affects all the synapses of a specific type.
Everything here is straightforward to emulate, you just need to use a computer with sufficient memory and bandwidth to that memory to even remotely approach realtime speeds. And you need a scan of all the synapses, which is very expensive to get.
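As a toy illustration of the model as described in that post (weighted inputs summing toward a voltage threshold, all-or-nothing output), not a claim about biological accuracy:

```python
import numpy as np

class ThresholdNeuron:
    def __init__(self, weights, threshold=1.0, leak=0.9):
        self.weights = np.asarray(weights, dtype=float)   # one weight per input synapse
        self.threshold = threshold
        self.leak = leak                                   # voltage decays between time steps
        self.voltage = 0.0

    def step(self, spikes_in):
        """spikes_in: 0/1 per input synapse. Returns 1 if this neuron fires this step."""
        self.voltage = self.leak * self.voltage + float(self.weights @ np.asarray(spikes_in))
        if self.voltage >= self.threshold:
            self.voltage = 0.0        # reset after the all-or-nothing spike
            return 1
        return 0

# Example: two excitatory inputs and one inhibitory input.
n = ThresholdNeuron(weights=[0.6, 0.6, -0.4])
print(n.step([1, 1, 0]))   # fires: 1
print(n.step([1, 0, 1]))   # does not fire: 0
```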
The brain is “all computation” in the trivial sense of electrochemical signaling. No one disputes that the brain is a mechanistic physical device, but that has never been a question in any creditable field of study. In the actually interesting and formal meaning of computation in computer science and cognition, the essence of computation – the essence of how computers interpret the world – is that algorithms perform syntactic operations on abstract symbolic representations, and thereby computationally derive the semantics of how they understand the world.
One of the key questions in cognitive science, and specifically in the computational theory of mind (CTM), is the extent to which mental processes are, in fact, computational. There is strong evidence that some are, and also evidence that many are not or that we just don’t know how to characterize them that way.
CTM is a strong theory but no one pretends that it’s a complete explanation of how the mind works, much less that it can all be described in terms of classic computation. Mental imagery is a good example of some of the controversy. Do we process images according to this computational syntactic-representational model, or do we have an internalized mental “movie screen” on which we project remembered images? There’s evidence for both theories. Some have shown that the visual cortex is involved in such recollections of mental imagery, while others provide evidence of the former (for instance, a priori knowledge influences the interpretation of mental images, making them immune to things like the Muller-Lyer illusion). CTM remains an important pillar of cognitive science but the computational nature of the mind remains controversial and elusive.