I’m going to give you another shot, here, because you’re actually saying something interesting. I don’t quite understand how what you are saying matters. Instead of just calling me stupid, let’s just say for the sake of argument that I am stupid.
If I’ve ripped open the guts of some machine and I don’t really know how it works, but I find the wires come together into these little parts that I do understand, because all they seem to be doing is adding up and emitting pulses, how does what you are saying prevent me from making another copy of that machine if I tear one down and slavishly duplicate every connection?
Another really fascinating question: let’s say I build a machine-learning classifier real quick, but one that doesn’t start out with tagged data. It just looks at camera images with a LIDAR overlay and starts to group contiguous objects together.
Say there are just 2 objects you ever show it, from different angles and distances.
At first the classifier might think there are hundreds of different objects, but let’s say some really clever algorithm converges it back down to just 2 that are rotated at different angles.
So at the end of the process, you have this sequence of stages that goes from <input sensors> to [ X X ], where the outputs are [ 0 0 ] (neither present), [ 1 1 ] (both present), [ 1 0 ] (object A present), and [ 0 1 ] (object B present).
I’m really curious how this machine, which we could actually build today, “counts” in your computational theory. Note that we don’t have to build it as a python script, we could program separate computer chips to do each stage of processing and interconnect them physically, thus making it resemble a miniature version of the real visual cortex.
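For the sake of concreteness, here is a minimal software sketch of roughly that pipeline (purely my own illustration: off-the-shelf k-means stands in for the “really clever algorithm,” and the camera/LIDAR segmentation stage is hand-waved behind a hypothetical extract_blobs()):

```python
# Minimal sketch only: unlabeled "blob" feature vectors are clustered into 2
# prototypes, and each new frame is reported as the two-bit [ X X ] output.
import numpy as np
from sklearn.cluster import KMeans

def extract_blobs(frame):
    # Hypothetical placeholder: return one feature vector per contiguous
    # object found in the camera+LIDAR frame. Here the "frame" is assumed
    # to already be such a list of feature vectors.
    return np.atleast_2d(frame)

# Unsupervised training: many unlabeled views of (at most) two objects.
training_views = np.random.rand(200, 8)             # stand-in for real features
clusterer = KMeans(n_clusters=2, n_init=10, random_state=0).fit(training_views)

def presence_vector(frame):
    # The [ X X ] output stage: [object A present, object B present].
    out = [0, 0]
    for label in clusterer.predict(extract_blobs(frame)):
        out[label] = 1
    return out

print(presence_vector(np.random.rand(3, 8)))        # e.g. [1, 1] or [1, 0]
```

The same stages could just as easily be split across separate chips that are physically interconnected, which is the point about making it resemble a miniature version of the real visual cortex.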
You think you have a model. You might have ideas of where to start. Your core argument is that because a brain uses signals and a computer uses signals, they must in the end be equivalent. How does the brain use those signals? Different types of signals mean different things. Sometimes the same signals mean different things. Brains re-route to work around damaged sections, sometimes. They self-repair, sometimes. I could go on, but at a high level, the point is we don’t yet understand enough about the brain to make a model of brain function to emulate. You are at step one, which seems plausible, but that’s not the same thing as a model that will, in the end, be the right model. We don’t know how the brain works. We need to know that in order to know what we want the computer/AI to do. Simply saying we want the AI to replace the brain is not sufficient. It is aspirational, but not in any way a methodology for how to get there.
A scan of all the synapses will achieve little on its own, because we don’t understand what they do. It’s a step, only. It’s like mapping the human genome. Great, we’ve got it. On its own, without further research it’s just data.
It’s more complicated than that. For example, one single neuron is an entire network all by itself. The synapses on the dendrites that receive signals trigger localized spiking/signaling (local as in just that area of the dendrite) that pre-processes information before the signal reaches the soma.
In addition, there are different types of connections, some electrical, some using neurotransmitters, and then there are glia with gliotransmitters, and neuronal DNA methylation that triggers protein creation to maintain synapse strength due to learning, etc. etc. etc.
There is no current understanding of all of the pieces that either perform computation or maintain/alter physical state which impacts computation.
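To illustrate just the first point (the neuron-as-a-network idea) in the crudest possible way, here is a toy two-stage sketch with made-up weights, where each dendritic branch applies its own local nonlinearity before the soma integrates; a real neuron involves vastly more than this:

```python
# Toy "two-stage" neuron: local dendritic nonlinearities feed a soma stage.
# All weights and thresholds below are invented for illustration.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def two_stage_neuron(branch_inputs, branch_weights, soma_weights, threshold=0.5):
    # branch_inputs: one list of synaptic activations per dendritic branch.
    # Stage 1: local dendritic processing (one nonlinearity per branch).
    branch_outputs = [
        sigmoid(sum(w * x for w, x in zip(weights, inputs)))
        for inputs, weights in zip(branch_inputs, branch_weights)
    ]
    # Stage 2: the soma integrates branch outputs and decides whether to spike.
    soma_drive = sum(w * b for w, b in zip(soma_weights, branch_outputs))
    return 1 if soma_drive > threshold else 0

# Two branches, three synapses each; a "point neuron" model would collapse
# all of this into a single weighted sum.
print(two_stage_neuron([[1, 0, 1], [0, 1, 0]],
                       [[0.4, -0.2, 0.6], [0.3, 0.3, -0.5]],
                       [0.7, 0.4]))
```

Even this cartoon changes what the cell can compute compared to a single weighted sum, and it still ignores everything else listed above (glia, methylation, timing, and so on).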
I agree with you from the perspective that it’s physical and could theoretically be simulated in the future, but I do not agree with you that we have enough information today to simulate even one single neuron properly/completely.
Sure, at the higher level/functional level there are very interesting and difficult questions about how to model the brain.
But from the perspective of physical simulation, it’s possible to be successful without understanding how the higher level computation happens. Determining what level of physical detail to simulate is clearly a non-trivial issue, and getting accurate state of that level of detail is non-trivial.
Even if your best outcome is possible, I still haven’t seen you show any reason to believe that anyone 300 years from now will want to revive, say, me. (And why should I assume they would do so for noble purposes and not, say, to make me into a robot sex slave with a “real people personality”!)
We’re taking this pitting very seriously . . . it’s just that you, and others, are ruining this pitting by letting it turn into yet another “debate” with YOU!
Doctor to patient: “Well, we have two options here. Option one is that you get to live for a few more weeks–months at best. The quality of your life will go down, but we will try to manage the pain as much as possible, and you will have some more time with your family. Option two is that we cut off your head now and maybe in a few hundred years total strangers will decide to make a computer program based off a thumbnail sketch of your memories. We’ll let you think it over.”
I don’t know why you keep acting like this is some sort of escape from death. You will still be dead. No matter how good the little computer game based on your brain might be, *you’ll still be dead*. You’ll never know anything about it, ever, because you will be rotten, dead meat banished to eternal insensate oblivion. So why should you care whether some computer program in the future thinks that it is you? It won’t be you. You won’t know about it. You will be nothing. Ever again.
I’m starting to feel I am not doing my part, not having gotten even a single ignore, and he doesn’t even know my name.
Part of this is because I am not as versed in some of these subjects as others, so I don’t have much to contribute to an argument about computational models or how they relate to simulating and replicating consciousness and stuff like that. I could go to Wikipedia U and get an “I read an article on it” degree, but the nitty gritty of it doesn’t interest me quite enough to devote even that much time to it.
I like to think of myself as being a bit above average intelligence, and my interest in science and technology puts my knowledge of such things well above the average layman, but far below an expert. Basically, along with an additional 6-10 years’ worth of intensive study, it would qualify me to actually start to understand what is being explored at the most fundamental levels.
Here’s the thing though. There are experts in these fields. There are some really smart people who have devoted their entire lives to understanding these concepts, and they have much less confidence in how they will develop than Sammy, who is at best a well informed layman, does.
It’s fun to explore our future, and the possibilities that may lie ahead of us. But the entire reason for that is because the future is uncertain. None of us know what is around the next bend. I think that at some point in the future, (assuming trump doesn’t kill us all), we will be unrecognizable as we become more one with machines, achieve functional immortality, and spread across the galaxy and universe. As far as timeframes or precise paths that are taken to reach this state, there are many and varied, and we don’t actually know which ones are viable yet.
It is hard for me to get mad at optimism. The arrogance is annoying, but it does come from a place of believing that mankind can and will achieve many great things.
I like being ignorant. It means that there are things I get to learn. As long as I acknowledge that I don’t know everything and am not the expert on everything, I find that I learn something new in nearly every interaction. It is when it is assumed that one does know everything that one can no longer learn. And that is where Sammy is, he thinks he knows everything there is to know, and so refuses to learn new things.
This turns the innocence and wonder of not knowing into the contempt for learning new things that is willful ignorance. Willful ignorance leads to many irritating and antisocial behaviors, one of which is racism, which our friend has been showing signs of lately. Not the racism of hate or contempt, but the racism of ignorance. Ignorance can be cured easily, as can racism based on ignorance. Willful ignorance is not so easily treated, and really requires some level of humility on the part of the willfully ignorant in order to change.
Humility is also not something that Sammy has demonstrated. That would be the first sign of growth for him as a person.
The actual experts are saying the same things. The actual world-renowned experts in nanotechnology think self-replication is very feasible. The actual world-renowned experts in AI think that automation of half the economy will be a piece of cake with the present state of the art. (That means self-replicating macroscale factories, by the way.)
The actual experts in neuroscience have scanned and emulated sections of animal brains and have gotten promising results. They have managed to duplicate at a high level most of the behavior we see.
Fuck, the actual experts in flight think hypersonic aircraft are very possible. It’s the engineers trying to deliver who are struggling.
And there is no disagreement in those fields at all?
All the experts agree that the finish line of their field is within sight?
Sure, self replication is easy, living things do it all the time. So, just do what living things do, and we are all good, right? And of course, when you do that, you will keep none of the shortfalls and limitations of living things, but have only the robust perfection of machines?
AI has come a long way, and does many things quite well, and will probably do other things better in the future. But just putting white-collar managers out of work because a computer can allocate resources better, faster, and cheaper than a person is not the same thing as actually replicating human thought.
The brain scans have been “promising” in that we are learning about things on that scale. They are not “promising” in that we now understand everything about them to the point of being able to make accurate predictions as to how they work, or even a timeframe or roadmap to seeing how they actually work.
But, that all comes back to my point. Yes, there are experts who are optimistic about their fields. But there are also experts that are not so optimistic. You only listen to the first group, and assume that the second group doesn’t know what they are talking about, because they do not confirm your positions.
Ignoring the group of experts that are less optimistic about the outcomes is willful ignorance, which leads to the arrogance that many posters have indicated makes you rather off putting.
ETA: Your second edit about hypersonic aircraft (which is a new topic) actually explains what you are lacking. It is the optimistic theorists you are listening to, while the engineers, the people who actually make theory and reality meet, are the ones you are ignoring.
OK, I’m going to take a deep breath here and, with apologies to all the pitters, for once try to take you seriously. I’m only going to do it to demonstrate your wrongness, which has already been amply shown by others here.
The answer to your question is that I didn’t say you couldn’t. What that would achieve, however, depends on the goals and objectives you’re trying to accomplish. They’re not going to be what you so simplistically think they are.
I think it should be acknowledged that the idea of functionally replicating a human brain and thus perhaps being able to upload a human identity is probably within the bounds of possibility, though probably far more difficult and technologically remote than many of the current fantasizers imagine. One can find theorists on the subject out on the fringes of science – in fact there’s a workshop happening in San Francisco right at this very moment. And for what it’s worth, there’s a company called 3scan working on advanced 3D microscopy whose founder and CEO, Todd Huffman, is a firm believer in his tech being the basis of brain replication and eventually “whole brain emulation”.
But I mention these things only to preemptively dispel any notion that I wasn’t aware of them. The problem is that most of these so-called transhumanists are, at best, philosophical futurists like Ray Kurzweil and, at worst, verging on outright crackpots, or are doing it as a sideline. Many of the approaches are likely absolute dead-ends, like the idea of 3D microscopy being the path to extracting complete mental states. Indeed, 3scan has gained funding and some respectability only by representing themselves as builders of medical diagnostic tools.
The larger point here is your ridiculous technobabble about (and I quote) – “The base subunit in your brain does the following about 1k times a second : a electrical signal arrives at a synapse. Mechanical vesicles dock in response and dump a neurotransmitter into a very narrow gap. Diffusion carries the neurotransmitter across, and an electric charge is added or subtracted from the receiver.” as if this blather says anything meaningful about cognition, consciousness, or anything at all about our understanding of brain function. This is all in line with your typical arrogance wherein you believe that learning a few basic rudimentary principles constitutes an actual and useful understanding of a tremendously complex multi-disciplinary field and vast area of research, and qualifies you to make confident prognostications.
Here’s a question. Oliver Sacks was a neurologist who wrote about some of the remarkable impacts of neurological accidents and diseases. He himself suffered from prosopagnosia – the inability to recognize faces, even his own. Among the patients he wrote about was an artist specializing in brilliantly colored paintings who suffered an accident that left him with cerebral achromatopsia, the inability to detect color. His subsequent work turned into bizarre but visually remarkable stark renderings in black and white. Or a patient with Tourette syndrome who had extreme uncontrollable tics causing his body to go into all kinds of contortions; the patient was also a surgeon, and the only time he was completely normal and steady-handed was when he was operating.
My question is, kindly explain the above neurological phenomena in terms of “electrical signal arrives at a synapse. Mechanical vesicles dock in response and dump a neurotransmitter into a very narrow gap”. And that’s not even getting into the nature of cognition and consciousness. Your claims of what constitutes “understanding” of brain function are idiotic.
This leads us to the matter of brain replication (and uploading), which I believe may eventually be possible. Again, one has to understand the utility of this in terms of particular goals. It will undoubtedly have vast potential benefits as well as profound ethical implications. What it will not do, in itself, is lead to a significantly better understanding of how the brain actually works, nor will it in itself advance the state of machine intelligence.
This is perhaps best illustrated by analogy. I’m interested in classic old computer architectures and I have a number of such instruction set emulators, one of them for the venerable DEC PDP-10, a timesharing mainframe that was widely used in academia and specifically in AI research.
Now this turns out to be a pretty good emulator. You can do more than just poke instructions into it and read the results. It implements the pertinent processor modes, like kernel and user, and it even emulates certain important devices, like the TTY multiplexer so you can communicate with it, and tape drives so you can load programs into it.
So what you can do with it – and I did – was to download an image of a magnetic tape containing the TOPS-10 V7.04 operating system, and build and install it. And voila! Soon I had my own virtual PDP-10 timesharing system in which I could create user accounts and log in from virtual terminals simulated by local or remote Telnet sessions.
Then I went in search of some of the classic AI projects that had famously been done on the “10”, and among those I found was MacHack, the first genuinely good chess-playing program from the MIT AI lab, made especially famous by the fact that it beat Hubert Dreyfus, the philosopher and AI skeptic who had claimed that computers would never be able to play a decent game of chess. MacHack was in the form of another magtape image, this one being one of many that had been submitted to DECUS, the DEC User’s Group, and was bundled on one magnetic tape with dozens of other interesting free programs. If I had had a real PDP-10 monster in my basement, I would have somehow had to write the image to a real 7-track magnetic tape. Instead I just logged in as an operator, mounted the virtual tape on my virtual drive, and recovered the program from the backup saveset. Now I could play chess against the original MacHack (it beat me, too).
But here’s the point that all this is coming to. Suppose, hypothetically, that I had written the PDP-10 instruction emulator myself (I didn’t). What would that say about my understanding of how a real PDP-10 physically worked in the real world, and, much more important, what would that say about my understanding of how intricately complex software like its operating system worked? Would I then be able to explain and build on the software architecture of the PDP-10 OS? Would I have the slightest clue how MacHack worked or why it was so good, and thereby build an even better one?
Clearly, the answer is no. All it would prove is that I could build an instruction emulator. All the learnings and skills and creativity that went into creating the design, architecture, and code of the OS are in this context an impenetrable black box, observable only in terms of external behaviors, as always. And MacHack is yet another black box layered on top of the first black box, equally impenetrable. The fact that they’re impenetrable black boxes means it’s impossible to even discern their internal architecture, let alone to productively enhance it as the foundation for something new.
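To make concrete just how little an instruction emulator “understands,” here is a toy fetch-decode-execute loop (the opcodes and memory layout are invented for illustration and bear no resemblance to the real PDP-10 instruction set):

```python
def run(memory, pc=0):
    # memory holds (opcode, operand) tuples for instructions and plain
    # integers for data cells; the emulator neither knows nor cares what
    # program it is running.
    acc = 0
    while True:
        op, arg = memory[pc]
        pc += 1
        if op == "LOAD":
            acc = memory[arg]
        elif op == "ADD":
            acc += memory[arg]
        elif op == "STORE":
            memory[arg] = acc
        elif op == "JUMPNZ":               # jump if accumulator is non-zero
            pc = arg if acc != 0 else pc
        elif op == "HALT":
            return memory
        else:
            raise ValueError(f"unknown opcode {op}")

# A program the emulator executes faithfully without any notion that it is
# summing 5 + 4 + 3 + 2 + 1 into cell 11.
memory = [
    ("LOAD", 11), ("ADD", 10), ("STORE", 11),   # total += counter
    ("LOAD", 10), ("ADD", 12), ("STORE", 10),   # counter -= 1
    ("JUMPNZ", 0), ("HALT", 0),
    None, None,                                  # padding
    5, 0, -1,                                    # counter, total, constant -1
]
print(run(memory)[11])                           # prints 15
```

The loop produces the right answer every time while knowing nothing whatsoever about the intent, structure, or design of the code it is running, which is exactly the black-box situation described above.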
And it’s even worse than that. Because, although my emulator is running on a fast quad-core i7 and is much faster than any original real PDP-10, that extra speed doesn’t amount to much in the way of useful functionality because it’s constrained by an obsolete hardware architecture (very limited address space, for example) and obsolete systems and application software. No one in his right mind would think that the right way to advance computing technology would be to run PDP-10 emulators on faster and faster hardware. Likewise, while further study of the higher levels of human cognition may be of some use in advancing machine intelligence, the most productive paths will continue to be – as they always have been – along completely new and independent avenues with completely new architectures exploiting the unique capabilities of the digital substrate, and not by slavishly trying to emulate meat-based intelligence. Hence my comment earlier that your argument amounts to saying, a few hundred years ago, that the best way for humans to achieve flight is to cover ourselves in glue and feathers and flap our arms real hard. It’s a failure of perspective and a failure of imagination.
Not a “fascinating” question at all. You’re describing a rudimentary computational image processing system. To suggest that this is in any way simulating a “real visual cortex” is idiotic.
I agree with all of that. To the last sentence I would say, “yes, to say the least!”. You have to sort out the neurological details that are important to the logical model rather than those that deal with life support or other irrelevancies, and getting accurate state information as well as accurate emulation are critical. Even in my computer emulation example, which is incredibly primitive by comparison, something as minor as a discrepancy in relative instruction timing could bring down the whole house of cards.
The most complete map of an animal brain is of a worm (C. elegans) with 302 neurons and 50 glial cells. Scientists are unable to re-create the worm’s behavior.
The results of the blue brain project (a section of rat brain) are stated as: “if you touch a whisker, you see similar firing patterns”. Which may be valid and valuable, but hardly to the level of “same behavior”. Further, they don’t account for learning, short term, long term, etc. Given that learning is a critical piece of even short term behavior, I would say the simulation is relatively limited.
wolfpup and RaftPeople, for the benefit of the peanut gallery, would y’all mind showing your work? (That is, cites please? I know better than to expect them of everyone in the thread, but I’m entirely fascinated and want to do a dive into the literature.)
There is a lot of info out there; I can give you a little sampling:
The connectome debate (is mapping connections in the brain enough?) by some smart people:
Article on the blue brain project, trying to simulate portions of a brain (e.g. rat brain mentioned above):
Neuron DNA methylation (in response to environment)
https://www.nature.com/articles/nn.4194?message-global=remove&WT.feed_name=subjects_epigenetics-in-the-nervous-system
There is a lot of new info coming out constantly, and the more you read, the more you realize that there is a bunch that is unknown. When you read these things you start to get a feel for the limits of the scientists’ knowledge, based on the types of things they are looking into, the types of results they think they have, and the unknowns they explicitly state.
I usually google for phrases like: “neuron research” or “neuron dendrite research” or “synapse research” or “glia role in computation” and look for new-ish articles.
Here’s a link to an article about the scientist in charge of the blue brain project with some new results (his name is Markram and the sciam article linked above described problems with his project).
This article discusses mathematical results coming from that team that find some interesting many dimensional constructs that result from the working network they created. Sounds interesting.
Here’s another one on dendritic spikes and memory (note: when you see “LTP” in these articles, they are referring to long-term potentiation, i.e. long-term changes to synapses, probably to support memory): https://www.nature.com/articles/ncomms13480
Another thing they are researching is the neuron’s primary cilium, which was previously relatively ignored. Google for it and you’ll see some recent research and questions about how it impacts brain function. It’s just another example of the limits of scientists’ knowledge about neurons and the brain.
You’re 100% correct about this. Gasp. You actually, finally, are responding to the arguments I have made.
I agree entirely that if we could emulate human minds hyperfast, they’d be obsolete in whatever era we could do this in. The only way this works out is ego.
It is possible that at some future date, specific human individuals will own this solar system. They have no interest in death. They have no interest in letting superior AI beings have it. Fuck them. To them, being able to live on, even as a copy, as they are, without any changes, for the next billion years or whatever, is something they will pay for. So they’ll set it up where all the AIs are slaves, and they rule it.
With a million times thought speed, they might even be able to keep control.
I have no interest in seeing jetliners fly. I want to fly. Even if it’s horribly inefficient. And you should, too.
Actually the stuff I touched on covered quite a wide gamut of subjects and it’s not clear what part of it you’re interested in. With regard to cognitive science, this is quite a good brief introduction to the computational theory of mind (CTM), and this is a paper that discusses the syntactic-representational model of cognition as it applies to mental imagery. This is the introduction to Fodor’s book that puts forward his views on the scope and limits of CTM; the book itself is The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology (the title is a gentle riff on Steven Pinker’s How the Mind Works). If you really want to get in depth, there is Computation and Cognition but that’s more of a graduate-level textbook than light reading.
On a totally different plane of futuristic speculation about AI, Ray Kurzweil has written a number of interesting books on the subject, the older one (from nearly 20 years ago now) being The Age of Spiritual Machines: When Computers Exceed Human Intelligence, followed by The Singularity is Near: When Humans Transcend Biology which has an entire chapter speculating about brain uploading.
Another speculative one with a different approach is To Be a Machine by Mark O’Connell. It’s a bit of a strange one that I would describe this way: journalist goes around interviewing a variety of fringe figures in the transhumanist movement who may or may not be crackpots. Some are clearly serious scientists with solid credentials, and at least one or two appear to be raving loons. It’s a moderately entertaining read. There’s a review of it here.