Are these networks vulnerable to adversarial examples? What about the overtraining problem, where a network’s performance on new data degrades after too much training?
These two nasty bugs - I’ve encountered the overtraining one myself - tell me that there are features missing from our ANNs that biology must have in order to prevent these failures, because they would be catastrophic in a living organism.
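For anyone unfamiliar with the first failure mode, here is a minimal sketch (my own, not from the thread) of the fast gradient sign method, one common way adversarial examples are constructed. The model, sizes, and data are toy placeholders, not anyone’s actual setup.

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small perturbation chosen to increase the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    # Nudge every input feature in the direction that raises the loss.
    return (x + epsilon * x.grad.sign()).detach()

# Toy classifier on random data, just to show the call pattern; against a
# trained image classifier, perturbations this small routinely flip the label.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
x, y = torch.randn(8, 20), torch.randint(0, 2, (8,))
x_adv = fgsm_perturb(model, x, y)
print("clean predictions:", model(x).argmax(dim=1).tolist())
print("adversarial preds:", model(x_adv).argmax(dim=1).tolist())
```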
The brain is very different from our current computer technology:
The brain is parallel, computers are serial (by instruction)
Brain interconnections are self-organizing - there is no schematic
The brain is an analog device - not a numerical processor
The brain is electrochemical, not simply electronic - its program can be modified chemically
The brain is programmed by mimicry, not logic
The fact that some brain functions can be modeled numerically does not mean that the brain is a numerical processor. No doubt a sufficient number of Turing machines could mimic a neuron, but that is no more than a modeling technique.
I would assume that they are vulnerable and I would also assume our biology has more advanced capabilities to get around these issues (and probably many others), but those are all just assumptions.
Are you saying that our biology is the only thing that could produce the same end result, that there is no ability to use other mediums for processing the input to arrive at the same output?
I believe current computer architectures cannot produce the functional equivalent of a human brain. They are simply machines that wonderfully emulate human activity. Still, they sense but do not feel. They calculate but do not think. We seem to have taken a semantic leap from automation to automaton.
Do I believe a non-biological ‘brain’ can be developed? No, because the brain only functions in the context of its biological system. Can a machine emulate most of those functions? Yes, but it won’t be thinking.
Seeley’s excellent book “Honeybee Democracy” proposes that honeybee society is organized to emulate neuronal processes. A million bugs with individual brains the size of match heads must make a life-or-death decision on the size, shape and location of their next hive. Seeley describes how each stage of the process resembles brain activity. The end result is that a million bugs simultaneously take to the air and navigate several miles straight to a place only a few of them have ever seen. The decision was made by the hive. The action is taken by the hive. Does the hive ‘think’?
Meaning that we would need to duplicate the exact same mechanics at the lowest possible level (e.g. quantum interactions, etc.) to arrive at the same results? That there is no lower limit that separates activity that is significant to the end result, vs being one of many possible methods of achieving that same result?
Which of these (if any) are you saying?
1 - It’s not thinking because it only emulates “most” of the functions, and the ones it doesn’t emulate are the ones we call “thinking”
or
2 - Even if it emulated all of the functions it still would not be thinking because thinking involves X and the emulation does not have X
Ultimately, I would like to know why you think the computer couldn’t “think.”
Some problems that can be solved by thought are problems that can be solved without thought.
A ball-disc integrator solves complex navigation problems. Nobody ever accused one of thinking. Current voice recognition systems are based on DSP, not neural networks. They would better be labeled ‘voice-operated switches’. They do not recognize or understand voiced utterances. Combining a large number of these anthropomorphic abilities will produce an impressive automaton. Kind of an ‘I am therefore I think’ conundrum.
And that gets us back to the ‘define thought’ problem. I’m not smart enough to do that, but I can propose what I believe to be some characteristics of thought:
Aware - the system perceives itself in the context of its environment
Adaptive - presented with a dilemma, the system will synthesize a solution
Vulnerable - the system can logically make a wrong decision
Self-directed learning - the system can choose to modify its main program
So, in the example up-thread, does the swarm of bees ‘think’?
BTW, I’m not proposing anything mystical. I believe thought is entirely the result of a physical process, but one that is very different from, and far more complex than, current approaches.
1 - Agreed that speech recognition has traditionally used other methods, not neural networks
2 - Deep networks and recurrent networks are the current state of the art, outperforming the previous methods and taking over as the preferred choice (see the sketch after this list)
3 - Agreed that it is currently automation without understanding
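To put item 2 in concrete terms, here is a minimal sketch (my own, not from the thread) of the kind of recurrent model meant there: an LSTM that reads a sequence of audio feature frames and assigns a label to the utterance. The layer sizes, feature counts, and class count are illustrative placeholders, not a real speech system.

```python
import torch
import torch.nn as nn

class TinyRecurrentRecognizer(nn.Module):
    """Toy recurrent acoustic model: feature frames in, one label per utterance out."""
    def __init__(self, n_features=40, hidden=128, n_classes=30):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, frames):              # frames: (batch, time, n_features)
        outputs, _ = self.lstm(frames)      # one hidden state per time step
        return self.head(outputs[:, -1])    # classify from the final step

model = TinyRecurrentRecognizer()
dummy_frames = torch.randn(2, 100, 40)      # 2 utterances, 100 frames each
print(model(dummy_frames).shape)            # torch.Size([2, 30])
```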
Agreed that without these types of attributes, it feels more like automation than thinking. But I do think we can build “thinking/aware” systems using something other than the stuff we are made of.
My gut response is “no” because it “feels” wrong to say that the swarm thinks. But it’s similar to looking at an internal diagram of a brain, cells all over the place, and being asked “does this collection of neurons think?” - that “feels” wrong too, partly because of the perspective.
But those are my gut feelings that may not line up with my logical analysis. I need to think about what that swarm would need to do for me to “feel” like it’s thinking and aware.
I believe the OP assumes the AI is capable of feeling/experiencing qualia in similar fashion to humans (not asking how or to what degree it feels). If this is the case, then to answer the question as posed: no, there is no circumstance in which it would be ethical to create such an AI. Why? Because the OP doesn’t mention instilling the AI with the capability to feel positive states, just negative states. An existence of feeling nothing but pain and suffering would be ghastly.
Is it ethical to have children? In most cases, yes. I believe the overwhelming majority of people have children with the expectation they will have net-positive lives (even though they will experience some suffering in their lives). This is self-evident. If you ask people if they wish they were never born, I’m confident the majority will reply, “no.” Instilling AI with a mixture of positive and negative emotions, similar to those of humans, is no different. You expect the AI to experience a net-positive existence, and in most cases, I believe they would. Some may lead unhappy lives. Some may “commit suicide.” But most will be happy to exist.
I do believe an intelligence needs to be truly conscious (not simply simulate consciousness) in order to suffer, so creating AI with simulated consciousness and programming it to “feel” only negative states may not be unethical.
I think the jury is still out on whether we humans will ever be capable of creating non-biological AI with real consciousness.
Then there are gray areas between simulated consciousness and high-level non-simulated consciousness to consider. What if it’s possible to create super-intelligence with varying degrees of consciousness? Would it be unethical to create AI with the consciousness level of a fish and cause it pain? If there was something positive to be gained in doing so, probably not. I’m not going to go out of my way to hurt a fish, but I’m not going to lose sleep after eating a filet of fish sandwich to sate my appetite. I would feel bad about eating a filet of bottlenose dolphin sandwich or a McChimp burger, however.
At the other end of the spectrum, it would always be unethical to cause undue suffering to any intelligence that develops self awareness.
Questions: do you believe it will ever be possible for humans to create AI with non-simulated consciousness? Self-awareness?