Erm. Can you be more specific? I cannot think of any trivial examples where a faster brain would make a creature less efficient at a given task. What you’re probably thinking of are cases where a cat or a mosquito can react faster with its smaller brain than a human can, because the signals don’t have to travel as far.
Now, another argument is that every real-world task has an inherent “response lag,” and a “perfect” control system for a real-world machine only needs an update rate that is some multiple of the rate at which the system responds to change.
An example would be a computer controlling a racecar. Every time interval (perhaps 1/60th of a second) the sensors on the racecar report new data about the car’s situation. The computer analyzes all the sensor inputs and calculates the optimal response. It then updates the state of all the control systems for the car.
If you have a computer fast enough to calculate good-enough responses for this situation, a faster computer will not get better control of the car. (To simplify: real-world control servos have a finite number of possible states, such as a range from 0–255. So the numerical precision of the racecar’s decisions only needs to be fine enough to set each control servo to the closest match; you don’t need to process the situation further to get a “perfect” answer, because your servos cannot accept input with more significant figures.)
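To make that concrete, here’s a rough sketch (Python, with made-up sensor and servo names – the details are purely illustrative) of a fixed-rate control loop where the outputs get quantized to the servo’s 0–255 range, so any extra precision in the calculation is simply thrown away:

[code]
import time

TICK = 1.0 / 60.0        # control update interval (60 Hz)
SERVO_MAX = 255          # servos only accept integer commands 0-255

def read_sensors():
    # Hypothetical: returns speed, steering error, etc. from the car.
    return {"speed": 42.0, "steering_error": -0.013}

def compute_response(sensors):
    # Hypothetical controller: any precision beyond the servo's
    # resolution is wasted, so a "good enough" calculation suffices.
    steering = 0.5 - sensors["steering_error"] * 10.0   # fraction 0..1
    return {"steering": steering}

def set_servo(name, value):
    # Quantize to the servo's finite command range; extra significant
    # figures in `value` cannot be expressed anyway.
    command = max(0, min(SERVO_MAX, round(value * SERVO_MAX)))
    print(f"{name} -> {command}")

for _ in range(3):       # a few ticks, just for demonstration
    start = time.time()
    sensors = read_sensors()
    response = compute_response(sensors)
    for name, value in response.items():
        set_servo(name, value)
    # A faster computer would just spend more of each tick sleeping here.
    time.sleep(max(0.0, TICK - (time.time() - start)))
[/code]

Once the calculation is accurate to within one servo step per tick, a faster computer just spends more of each tick idle.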
Similarly, in the human world, if your brain is fast enough to make the correct decisions in a given situation, you don’t need a faster one. You as a human exist to find food and other necessities so that you can reproduce, and for most of those tasks your brain is perfectly adequate.
However, in the modern world, we’d like your brain to do tasks such as “design every component of a wide body jetliner from first principles”. Right now, that task would take any given human so long to complete that they would die of old age before finishing. If you had a faster brain, you could design new aircraft from scratch over a weekend, or new robots, or ways to build nanotechnology, or new spacecraft, or…
Cite? The Russian Setun computer used binary cells and binary levels. (Its “trits” used two cells or two wires.) And “Tri-state” logic is not three-leveled.
Cite for analog signals being stored regeneratively? That is, for cells being readable more than once? Shift registers that read a cell only to write it to another cell don’t count – the signal must be amplified so that it can both be remembered and used for processing.
A neural network can be unreliable in the sense that learning signals to change synaptic values can be somewhat haphazard. But the synapse values, once set, should not be degraded (except by further learning) since the network relies on consistency.
Not really talking about the speed of brain computation. I’m talking about the “answer” to any given situation. There is no one answer that can be considered “smarter” for all contexts and goals simultaneously.
Consider design trade-offs in any consumer product: cost vs. quality, and so on. A product with higher quality will generally cost more to produce, and a product optimized for cost will typically have lower quality. A choice must be made based on a particular goal; you can’t really optimize both simultaneously.
Now expand that example hypothetically to an n-dimensional space of decision points related to objects (people, animals, ideas, etc.) interacting over time. You could optimize for condition X within window of time and space W, and that optimization could/will be in direct contradiction to condition X within window W’ or condition X’ within window W.
A “smarter” entity might be able to optimize towards a particular goal/set of conditions more easily than a human, but that optimization will have a “smartness” rating that spans the spectrum from “dumb” to “smart” depending on which context/set of goals it is viewed from.
Let’s pretend we have an AI that we task with optimizing human civilization within 100 years by setting conditions: low unemployment, everyone has food, etc. At the 100-year mark there will be an unintended side effect, because the optimization was localized; maybe some basic raw material will run out within another 10 years. So we add to the conditions, say, “make sure there are no conditions that will negatively impact humanity for thousands of years to come,” and the result is eliminating all humans. So we add “with the population at 10 billion,” and the answer comes back that humans live in 10x10 squares their entire lives, etc.
To bring this back to humans and smarter than humans, the point is that what we consider smart is frequently (not always) dependent on a specific context, a specific human, a culture, etc. and is tied to specific goals and desires related to human life.
First, the answer is “no,” there is no valid argument to prove that we can’t replicate the human brain with a computer. However, there is no valid argument to prove that we can. There are a lot of interesting and illuminating arguments on both sides, though! Furthermore, my personal opinion is that some of the arguments against simply show that what we think intelligence is or is like, isn’t necessarily quite what it really is or is like. (E.g., what’s the identity of an intelligence; when does it stop and start and become another one? Does it require continuity or is it purely data dependent? This is an unanswerable question, but it makes us think about what we mean, not just by the words, but about the things we’re considering.)
Second, there is absolutely no way anyone can ever prove that a computer has awareness, because there is no way anyone can prove that ANYONE or anything has awareness other than YOU. You assume I have awareness because I’m a lot like you. As computers start acting a lot like us, many of us will begin to assume they’re aware like we are. Some people will never believe it, just as some people will never believe that people of different races are alike.
But it’s just an assumption, and that’s the very best we can ever get.
Good word, “infeasible”, assuming you mean “conventional computer of today”.
It’s not necessary to actually use special transistors. You can model the inputs and outputs on a computer. You don’t need actual parallel processing; it’s well-proven that serial processing can exactly duplicate parallel processing if it’s sufficiently faster. That is, a processor that’s a bit more than 10 times as fast can exactly duplicate the outputs of a 10-processor parallel computer. (The “bit more” is simply to handle “context switching”, where the one processor stops pretending it’s processor 1 and then begins pretending it’s processor 2. In practice this can be tiny or huge depending on the nature of the parallelism of the parallel computers. The brain happens to be the kind where it’s huge.)
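Here’s a toy sketch of what I mean (Python, with invented “tasks” standing in for the parallel programs): a single fast processor round-robins between ten virtual processors and produces exactly the outputs ten real ones would, with the loop overhead playing the role of the context switch:

[code]
# A minimal sketch of serial emulation of parallel processing:
# one fast processor round-robins between N "virtual processors",
# producing the same outputs as N real ones running side by side.
# The cost of each switch is the "context switching" overhead.

def make_counter(name):
    # Each generator stands in for one parallel processor's program.
    def task():
        total = 0
        for i in range(3):
            total += i
            yield f"{name}: partial total {total}"
    return task()

virtual_processors = [make_counter(f"cpu{i}") for i in range(10)]

# Round-robin scheduler: run one time slice of each task in turn.
while virtual_processors:
    still_running = []
    for proc in virtual_processors:
        try:
            print(next(proc))       # one time slice
            still_running.append(proc)
        except StopIteration:
            pass                    # this virtual processor finished
    virtual_processors = still_running
[/code]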
I agree with Penrose in this specific instance. Reality adds a lot to the equation, so he’s right that the argument about intelligence from DNA isn’t sound, in my opinion.
Amazing, isn’t it? Of course, that’s not an objection to the possibility. It just means we need a lot more processing power and memory. We may not get there with mere silicon, but we could still get there with the same concepts that we use when we build computers from silicon. As I recall, a traditional digital computer requires three types of parts: logic gates, a sequencer, and something else I don’t recall. Perhaps it was memory, but I believe you can make memory from nothing but logic gates.
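For what it’s worth, memory really can be built from nothing but logic gates – the standard textbook construction is a pair of cross-coupled NAND gates. A toy simulation (this is just the classic set-reset latch, nothing brain-specific):

[code]
def nand(a, b):
    return 0 if (a and b) else 1

def sr_latch(s_bar, r_bar, q, q_bar):
    # Cross-coupled NAND gates; iterate until the outputs settle.
    for _ in range(4):
        q, q_bar = nand(s_bar, q_bar), nand(r_bar, q)
    return q, q_bar

q, q_bar = 0, 1
q, q_bar = sr_latch(0, 1, q, q_bar)   # pulse "set"   -> stores 1
q, q_bar = sr_latch(1, 1, q, q_bar)   # both inactive -> remembers 1
print(q)                              # 1: the gates are acting as memory
q, q_bar = sr_latch(1, 0, q, q_bar)   # pulse "reset" -> stores 0
q, q_bar = sr_latch(1, 1, q, q_bar)
print(q)                              # 0
[/code]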
Actually, a neuron can have any number (thousands) of inputs. The only “one” thing about a neuron is that it’s either firing or it’s not (and if it’s not, it could be ready to fire or it could be recovering from just firing, but let’s lump that all together.)
Again, not correct. It can have thousands of outputs, but they all fire or do not fire, when the cell body fires. Of course, there’s a differential delay at different outputs due to axon length.
That seems wholly different to me, and not just in density. I’m betting there are architectures you could pull off with that which you just can’t with transistors’ on/off scheme. From what I understand this is node math, and I’m betting that node math has “proven” that you can use transistors to mock up this system, which is why computer scientists keep saying you can do this. I’m also betting that proof is wrong somehow.
The brain runs differently: it’s like thousands of separate processors, each one running at like 2 kHz, but running simultaneously and talking to each other. This is completely different from the way a computer works, which is one central processor that runs really fast. Also, the brain doesn’t depend on never making a mistake like a computer does. There are misfires all the time, but the brain deals with it. Again, this is from articles I’ve read, and again it’s another type of study/science - I think networks? Anyway, the brain can deal with tons of “noise” - and you can hit a person in the head, and the whole thing gets disrupted, but then recovers. This is so crazy different from a computer, which simply depends on almost perfect firing of the switches. BTW, these misfires do happen with computers, but they’re like one in a billion, and in one case some guy accidentally got like $99,999,999 in his bank account when he went to an ATM once. The company investigated it and it turned out to be one of these extremely rare misfires.
Again, let me reiterate that I don’t fully understand what these things mean; it’s stuff I read (I never studied the math or science or whatever that deals with “noise”).
All close enough, but there is no reason we couldn’t model all of that complexity using a normal, sequential-processing digital computer. Admittedly, it’d have to be fast – maybe even impossibly fast, with electronic parts, to be as fast as a human. OK, if a single processor isn’t fast enough, add processors. In well-proven theory, we can do this. In theory, we KNOW we can model ANY analog system with a digital one, to any specified precision, as long as we have enough parts and enough time to run the algorithms. Anyone saying “digital is different than analog” merely has to say how precise we need to be, and we can prove him wrong, to that precision.
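A toy example of that “name your precision” argument (Python; the circuit and tolerance are just stand-ins): simulate an analog RC discharge digitally and keep shrinking the time step until the digital answer matches the exact analog one to whatever tolerance the skeptic demands:

[code]
import math

# Toy illustration of "digital can match analog to any stated precision":
# simulate an RC circuit discharging (an analog process with the exact
# solution V0 * exp(-t/RC)) and shrink the time step until the digital
# result is within whatever tolerance the skeptic names.

def simulate_rc(v0, rc, t_end, dt):
    v = v0
    for _ in range(int(round(t_end / dt))):
        v += (-v / rc) * dt        # Euler step for dV/dt = -V/RC
    return v

v0, rc, t_end = 5.0, 1.0, 2.0
exact = v0 * math.exp(-t_end / rc)

tolerance = 1e-4                    # the precision the skeptic demands
dt = 0.1
while abs(simulate_rc(v0, rc, t_end, dt) - exact) > tolerance:
    dt /= 2                         # more compute buys more precision
print(f"dt={dt:.2e} matches the analog answer to within {tolerance}")
[/code]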
Anyone who thinks that intelligence is attributable to infinite precision of analog components has a rather difficult argument to make, but might just possibly be correct.
(I doubt it.)
You don’t understand how completely a digital system can imitate any analog system, as I outlined above. This fact isn’t debatable. A computer doesn’t have to “rewire” itself to behave differently. It can change its memory or its program.
For example, given any digital computer, you can emulate it with any other digital computer (assuming the latter has enough memory and time to run the program – speed doesn’t matter here.) So, one computer can easily act like another computer that can “rewire” itself.
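Here’s a toy illustration of that (Python, with a made-up three-instruction machine – not any real architecture): the host computer’s own “wiring” never changes, yet it faithfully acts out the behaviour of a completely different machine just by pushing data around:

[code]
# A toy illustration of one machine emulating another: the host (this
# Python program) interprets a made-up three-instruction machine.
# Nothing about the host's own wiring matters; only the data and the
# rules for transforming it.

def run(program, memory):
    pc = 0
    while pc < len(program):
        op, a, b = program[pc]
        if op == "ADD":       # memory[a] += memory[b]
            memory[a] += memory[b]
        elif op == "COPY":    # memory[a] = memory[b]
            memory[a] = memory[b]
        elif op == "JNZ":     # jump to instruction b if memory[a] != 0
            if memory[a] != 0:
                pc = b
                continue
        pc += 1
    return memory

# "Guest" program: add memory[1] to memory[0], memory[2] times.
program = [
    ("ADD", 0, 1),
    ("ADD", 2, 3),            # memory[3] holds -1: decrement the counter
    ("JNZ", 2, 0),
]
print(run(program, {0: 0, 1: 7, 2: 5, 3: -1}))   # memory[0] ends up 35
[/code]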
My point is that what computers do is process data. If intelligence is an artifact of a special kind of data processing (as I believe, but can’t prove), then the hardware doesn’t matter, as long as there’s enough of it, and enough time to run the program (or, it’s fast enough, which is saying the same thing.)
That belief is based on a basic misunderstanding of what a computer is and can do, because as I pointed out above, the wiring of a computer (in theory) is totally unimportant. Any computer wired any way can imitate any other computer wired any other way. Wires aren’t what it’s about. Information processing is what it’s about. That is, it’s what computers are about. I believe that’s what intelligence is about too, but that’s an assumption I make.
And you’re simply incorrect in this bet. If it’s a physical process that can be described to a given degree of accuracy, it can be modeled with a conventional digital computer.
Hardware makers act like computers are different, but really they’re all the same. Some are just bigger and faster than others, and some draw less power. Really. Turing proved this formally. If you believe formal math proofs, then you have to accept this. If you don’t believe formal math proofs, then you’re left with mysticism. Or that Turing goofed, and nobody’s caught it yet, of course!
Actually, the distinctions he draws here are valid and crucial!
But you’re right about this part. The problem with Searle’s arguments isn’t the setup. The setup is usually excellent. But then he ends up just making an assertion that “but we’re not computers, so computers can’t be like us!” and I’m confident that he’s wrong.
Exactly. But the distinction between objective and subjective is very important. It just doesn’t support his assertion about observation. And ditto for Penrose.
Well put. But it raises an important question!
When is a simulation just a simulation and when might it be more than that?
So, what’s the difference between a rainstorm and a consciousness (or intelligence, or whatever … for my purposes here, they’re equivalently applicable)?
IMHO, awareness, cognizance, intelligence, and consciousness, are CREATED by data processing. A rainstorm is not. Thinking is an emergent property of data processing. Rain is not. Rain is merely a physical process. So, simulating rain does not create rain, but simulating intelligence by data processing could in fact produce intelligence.
Note that this touches on the objective/subjective distinction mentioned above. It is a crucial distinction. If subjectivity is an emergent property of intelligence, which is an emergent property of data processing, then data processing is all that’s needed (well, the right kind of data processing, and we don’t understand what the right kind is!)
Of course, rain is not an emergent property of data processing. It’s water falling from clouds, and only water falling from clouds can create it.
The rain analogy is helpful to put this in perspective.
For those who reject the assertion that thinking is purely a function of (the right kind of) data processing, my arguments are void. I wonder what these people would propose as an alternative. Some would say the soul, and I couldn’t argue that; it’s a matter of faith. They could be right, too.
Brains are made of matter organized in a way that performs computations that create the answers we know as “human intelligence”. A computer is also an arrangement of matter to perform computations…
You would have to show that virtually all physical discoveries made by all science since the Renaissance are wrong in order to state that exactly the kind of matter in a human brain is the only way to get an entity with human like intelligence.
I think it has been said already, but the question posed by the OP is kind of nonsensical, insofar as it implies that we would want to replicate the animal brain. Presumably, the goal in AI research is rather to explore useful and/or interesting forms of machine intelligence to learn from them and/or exploit their usefulness.
Right now, we have computers that are only responsive: we tell them what to do, they do it. Perhaps the definition of “strong” AI is that a machine can be proactive, though all real action is just a response, no matter how indirect, to some form of stimulus. Should we be able to ask the smart machine, “What should we do now?” and expect a thoughtful response (reasonable but not necessarily predictable)?
One of the stages of development toward AI would have to be the evolution of programming tools, from binary code (I have written some true machine language, just for fun) to “set this up for me”. Already, developers’ tools have reached a level of sophistication where, apart from machine-specific kernel code, compilers can generate machine language that is better than what a human could write. The level of abstraction will continue to rise to the point that you will be able to program in the plainest language imaginable and your request will be converted into the most efficient machine-level code possible (relying in part, of course, on existing library resources).
So AI will in part be the computer’s ability to adapt its own code base and dynamically improve itself based on a rather general set of needs. And when we can tell the Gliese 581c Rover, “look around a bit and tell us what is interesting,” and get good stuff back, that is when we will know we have pretty worthwhile AI.
A true AI should also be able to pass the “meta” Turing Test: that you can ask it “If you were administering the Turing Test to someone else, to determine if they were intelligent or not, what questions might you ask them?” and get sensible responses.
Dendrites:
Dendrites generate localized regional spikes, forward and backward, independent of the axonal spike.
Axons:
Some neurons have multiple localized axonal spiking zones (meaning they spike independently according to local conditions, not only due to a spike generated by the soma).
The Setun used balanced ternary logic, with voltage levels of -1, 0, and +1 signalled on a single wire. So it is, without any doubt, a true ternary machine. What it needed was plus and minus voltage rails. Clearly a static ternary digit requires more than a single transistor to store it, but so does a conventional binary bit stored in static memory. There is no technical reason an unbalanced ternary digit cannot be stored with a single capacitor and transistor; it can be level-shifted in the memory controller.
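For anyone following along, balanced ternary itself is easy to play with in software. Here’s a quick sketch (the encoding is standard; the code is just my own illustration) that converts integers to and from trits of -1, 0, +1:

[code]
# A quick sketch of the balanced ternary encoding Setun used: each trit
# is -1, 0, or +1, and the place values are powers of three.

def to_balanced_ternary(n):
    trits = []
    while n != 0:
        r = n % 3
        if r == 2:            # digit 2 becomes -1 with a carry
            trits.append(-1)
            n = n // 3 + 1
        else:
            trits.append(r)
            n //= 3
    return trits or [0]       # least significant trit first

def from_balanced_ternary(trits):
    return sum(t * 3**i for i, t in enumerate(trits))

for n in (7, -7, 100):
    trits = to_balanced_ternary(n)
    assert from_balanced_ternary(trits) == n
    print(n, trits)
[/code]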
Who the heck mentioned tri-state? Of course it isn’t ternary logic.
Why does amplification matter? These are implementation issues. They have nothing to do with the question of implementability itself. Analog computers meet the requirement - they include sample and hold and integrator elements. These buffer their outputs and can be read forever.
Not sure what you are driving at here. Is this a precondition for a thinking brain? Why do you think it is true, and why can we not meet whatever implementation requirement you suggest if it is indeed needed? It is all engineering. There are no fundamentals here. Give me the requirements and I can show you how to implement them. Focussing on the specifics of how many transistors it takes is making the same mistake as EdwinAmi is making. Of course there is no exact match between a synapse and a transistor. Synapses are made out of more complex building blocks, and there is no reason we can’t model them with more than one transistor. There is absolutely no physical or engineering reason I can’t reach essential perfection with the model. It is just engineering, with components we have had many decades of experience using to create much more complex systems.

There is a whole raft of interesting persistent memory technologies. Most of them are never going to make the mainstream, as the two main games - disk and flash - are so well developed that they may never get a look in. Sort of like bubble memory didn’t make it. But memristor memory technology may have some traction. If we needed to build an analog semiconductor system that modelled neurons, these would be a good place to start.
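Just to illustrate the “more than one transistor per synapse” point, here’s a deliberately crude toy model (all the parameter names and values are invented): a synapse maps onto a handful of cheap engineered components – a stored weight, a stochastic release step, a delay – rather than onto a single on/off switch:

[code]
import random

# A toy synapse model, purely to illustrate "more than one component per
# synapse": a stored weight, a stochastic release step, and a transmission
# delay, rather than a single on/off switch. All values are invented.

class ToySynapse:
    def __init__(self, weight, release_prob, delay_ms):
        self.weight = weight              # analog value, stored persistently
        self.release_prob = release_prob  # chance a spike gets through
        self.delay_ms = delay_ms          # axonal/synaptic delay

    def transmit(self, spike):
        if spike and random.random() < self.release_prob:
            return self.weight, self.delay_ms   # contribution and when it lands
        return 0.0, self.delay_ms

syn = ToySynapse(weight=0.8, release_prob=0.9, delay_ms=1.5)
print(syn.transmit(spike=True))
[/code]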
Discussion of ternary memory or three-level signals is a digression – their difficulty merely accentuates the difficulty of N-level signalling for large N. But out of curiosity, do you have a cite for Setun doing logic with three values on a single wire? The “-1, 0, +1” in Setun literature refers to their “balanced arithmetic” – they used {-1,0,+1} (rather than {0,1,2}) for the three unit’s-place values, {+3,0,-3} (rather than {0,3,6}) for the three’s-place, and so on.
It was implementation issues that motivated my comments! Amplification is absolutely necessary for repeated computation; that follows from thermodynamic principles. Sample-hold cells will suffer from leakage – can they really be read forever? The problem is reduced with larger amplifiers, but this just points to ordinary binary logic being more cost-effective.
Storage cells that leak over time and are periodically regenerated via amplifiers was the concern of my “But the synapse values, once set, should not be degraded (except by further learning) since the network relies on consistency.” Obviously permanent multi-level storage cells are possible. But integrated onto a chip, and processed in parallel, what densities can you hope for?
Thanks for linking to an article on memristors, which I’d never heard of! They might address some of the issues I was trying to raise.
I know you are getting at hardware implementation issues, but as a side note, synapse strength can be maintained or can decay over time, lots of variability.
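A toy sketch of that variability, purely illustrative (the time constant and the “practice” bump are invented numbers): a weight decays exponentially unless use refreshes it, much like a memory cell that needs periodic refresh:

[code]
import math

# Toy sketch of the decay/refresh idea: a synaptic weight decays
# exponentially unless "use" refreshes it. The numbers are invented
# purely for illustration.

def evolve_weight(w, days, tau=30.0, used_on=()):
    for day in range(days):
        if day in used_on:
            w = min(1.0, w + 0.1)        # practice nudges it back up
        w *= math.exp(-1.0 / tau)        # slow passive decay
    return w

print(evolve_weight(0.9, 60))                          # unused: fades
print(evolve_weight(0.9, 60, used_on=range(0, 60, 5))) # practiced: maintained
[/code]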
I completely agree with you, and I just want to add that over the years there have been so many rebuttals written against this nonsense that one could literally write quite a large book about it. When Searle submitted his original “Chinese Room” paper to the journal Behavioral and Brain Sciences it was regarded as trite, and the response it garnered was surprising – so much so that a few years later at a cognitive science conference Pat Hayes semi-humorously defined the field of cognitive science as “the ongoing research program of showing Searle’s Chinese Room argument to be false – and stupid.” It makes one despair of the value of well-meaning philosophers meddling in science.
Among my favorite of the myriad refutations is the one that simply says Searle is contradicting himself, since the system (the “room”) clearly does understand Chinese from a functionalist perspective (the only perspective that is meaningful) and the fact that the Englishman might claim that he doesn’t understand it at all is irrelevant since he’s just part of the implementation and not the system itself. Steve Pinker takes it further and argues that “… Searle is merely exploring facts about the English word understand … People are reluctant to use the word unless certain stereotypical conditions apply …”. Pinker quite correctly dismisses Searle’s argumentative appeal to the “causal powers of the brain” as groundless since they are intrinsically computational – “patterns of interconnectivity that carry out the right information processing”.
Basically Searle tries to argue in the “Chinese Room” that the English fellow is doing nothing more than mindlessly manipulating symbols, but the reality is that intelligence and those abstractions we like to call consciousness and volition are emergent properties of the “manipulation of symbols” – or, more generally, of the computational model of information processing, and dependent not on some mystical causative factor but fundamentally on organizational complexity.
Searle’s original paper came out around 1980 (written, I think, a year earlier) which is still technologically in the era of Eliza, a rather silly little program which pretended to be a non-directive psychotherapist in an early and very primitive effort to address the Turing Test. The program basically responded to user statements by the rote method of selecting responses based on recognized keywords or phrases, or if it couldn’t find any (which was often) it would respond by more or less repeating the statement back as a “why” question. There isn’t much to say about whether or not Eliza was exhibiting “understanding”. Today, though, we are well into the era of Watson. When Watson can take a Jeopardy game question – often cleverly worded as some obscure pun – and return a correct answer about almost any subject in the world, then if one is going to claim that the machine still doesn’t truly “understand” the question one has to start engaging in increasingly desperate semantic acrobatics about exactly what “understand” really means. I submit that as AI advances such acrobatics will get increasingly desperate and ultimately both futile and pointless.
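For those who never saw it, the entire “intelligence” of Eliza amounted to something like this little sketch (the keywords and canned replies here are invented, but the mechanism is the one described above):

[code]
# A minimal Eliza-style sketch of the "rote method" described above:
# scan for a keyword, emit a canned reply, or fall back to echoing the
# statement back as a "why" question. Keywords and replies are invented.

RULES = {
    "mother": "Tell me more about your family.",
    "always": "Can you think of a specific example?",
    "sad":    "I am sorry to hear you are sad.",
}

def eliza_reply(statement):
    lowered = statement.lower()
    for keyword, reply in RULES.items():
        if keyword in lowered:
            return reply
    # No keyword matched: bounce the statement back as a question.
    return f"Why do you say: {statement.rstrip('.!?')}?"

print(eliza_reply("My mother is angry with me."))
print(eliza_reply("I broke my keyboard today."))
[/code]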
It’s the other way around. It’s more that to folks like Searle, the brain is a mystified computer – something that can be attributed with abstractions like intelligence, consciousness, and volition because we don’t yet fully understand how it works or because we’re viewing it through the lens of subjectivity rather than functionalism. Once the implementation is understood, it becomes as Marvin Minsky famously said, “when you explain, you explain away” the pseudo-mystic hokum which is simply an emergent property of computational complexity.
Statements like that are misleading because the concept of “algorithm” is ill-defined. At some level all computers are ultimately algorithmic in the broadest sense of the term, but that can be interpreted – incorrectly and with the false implication of triviality – as meaning that higher levels of information-processing abstraction must also be algorithmic and therefore simply an uninteresting set of sequential instructions. In reality such systems can use heuristics in distinct contrast to formal algorithms, they can respond to external inputs, they can learn and adapt their behaviors, and so on. They produce results that can be surprising and impossible to predict. We write simulations to give us answers to complex problems precisely because we don’t have the knowledge – and it may be impossible – to frame a problem algorithmically.
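As a toy example of that last point (Python; the payoff numbers are invented), here’s a tiny program whose behaviour isn’t a fixed recipe at all – it responds to external feedback and adapts which option it prefers, and you can’t predict its choices just by reading the code:

[code]
import random

# A toy illustration: this program's behavior is not a fixed, predictable
# recipe. It responds to external feedback (simulated here by random
# payoffs) and adapts which option it prefers over time.

payoff_odds = [0.3, 0.5, 0.7]          # hidden from the learner
estimates = [0.0, 0.0, 0.0]
counts = [0, 0, 0]

for trial in range(2000):
    if random.random() < 0.1:                       # occasionally explore
        choice = random.randrange(3)
    else:                                           # otherwise exploit
        choice = estimates.index(max(estimates))
    reward = 1.0 if random.random() < payoff_odds[choice] else 0.0
    counts[choice] += 1
    estimates[choice] += (reward - estimates[choice]) / counts[choice]

print("learned preferences:", [round(e, 2) for e in estimates])
[/code]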
I’m at work right now, so will need to get home to dig out some old stuff. I’m pretty sure, however. The clue comes from the need for + and - rails. Conventional binary logic uses a single rail. With two rails you can signal + volts, - volts, and 0 volts on a single wire, and still maintain all the useful tricks that binary logic gets you (simple comparators for logic operations). In a way this supports your point: ternary logic is a special case, and since we can’t add more rails in a useful manner, the idea doesn’t scale to more levels. A decimal logic encoded with voltage levels is feasible, but gets us into a world of pain battling issues that can be managed vastly more easily if your logic levels are only referenced to the supply rails.
Absolutely binary logic is more cost effective. We are living in a world of science fiction compared to when I started in the industry, and it is built upon this extraordinary cost effectiveness.
The leakage/degradation issue is important. I would counter that nothing in life is forever: human memories also degrade, and I think there is some evidence that connections degrade more if not used, and that constant use of a skill is needed to maintain it. So in a sense, wetware brains need refresh just like semiconductor memories. After that it comes down to engineering. A hybrid design could be built to address most issues. Current technology can’t touch neuron/synapse density, partly because we only build chips in 2D. But we can build systems that are insanely fast in comparison, so the engineering problem is an issue of balance here. Once we get to very large systems we do build them in 3D; my favourite example is the CM-2, a machine where form followed function to a brilliant result - the best-looking supercomputer ever built, bar none.
No cite, but I understand that while strong opinions exist on the subject, this isn’t a settled matter. I’ve also heard the Chinese Room characterized as a slam dunk. Personally, I don’t find the functionalist system argument particularly persuasive. The concept of emergent properties may be valid, but presentations of it have struck me as rather wooly.
But Searle doesn’t maintain that we’ll never have silicon AI. He’s sympathetic to evolutionary approaches, linked with some sort of sensory input. He’s just saying that chatterbot symbol manipulation won’t cut it, and that the Turing test is heuristic, not rigorous.
I stick with my previous statement. “Could a computer with volition, etc. be created? Sure, I think. It might happen if you build a sufficiently sophisticated brain simulator. More likely, it could happen if volition is programmed in explicitly (through methods I can’t fathom). But as a spontaneous byproduct of a really rad word processor, chess program or mortgage calculator? It’s hard to see how.”
Atari sold a chess program in the late 1970s, so it’s an applicable example. But what about a web crawler? Could intelligence emerge from that? I’m guessing yes. The decisions it makes about its own computational directions could be comparable to those exhibited by various animals. I can also imagine putting them in competition with one another - or even permitting cooperation - leading to the sort of (manipulative) arms race that spawned human intelligence.
So are you saying that Watson is conscious? It doesn’t seem so to me.