No. The illusion is real, which is why you can bend it in the first place.
An ant’s brain has about the same computing power as an old Apple IIe.
The difference, of course, lies in the programs each is capable of running. An Apple IIe is notably poor at finding resources, while ants are really good at it. An ant isn’t much use if you want to play Oregon Trail, though.
As I understand it, the entire ant’s nest uses pheromone trails and other kinds of data transfer to act as a colonial organism… a remarkable arrangement.
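For the curious, here is a minimal Python sketch of that pheromone mechanism (stigmergy), with every constant invented purely for illustration: ants pick between two paths in proportion to pheromone, deposit more often per unit time on the shorter trip, and the trail evaporates. No individual ant ever compares the paths, yet the colony as a whole settles on the shorter one.

```python
# Toy stigmergy sketch: the path lengths, deposit amount, and evaporation
# rate are all made-up values for illustration, not measurements.
import random

SHORT, LONG = 1, 2          # hypothetical trip lengths, in time steps
pheromone = {SHORT: 1.0, LONG: 1.0}
EVAPORATION = 0.02
DEPOSIT = 1.0

for step in range(2000):
    # Each ant picks a path with probability proportional to its pheromone.
    total = pheromone[SHORT] + pheromone[LONG]
    path = SHORT if random.random() < pheromone[SHORT] / total else LONG
    # Shorter trips finish sooner, so the short path is reinforced more
    # often per unit time; model that as deposit scaled by 1 / length.
    pheromone[path] += DEPOSIT / path
    # Evaporation erases stale information, keeping the colony adaptive.
    for p in pheromone:
        pheromone[p] *= 1 - EVAPORATION

share = pheromone[SHORT] / (pheromone[SHORT] + pheromone[LONG])
print(f"pheromone share on short path: {share:.2f}")  # tends toward 1
```

The positive feedback loop (more pheromone, more ants, more pheromone) is the whole trick: the “decision” lives in the trail, not in any ant.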
Even though the stories I help to write involve artificial consciousness, I do not necessarily believe that such a thing is going to happen, *but if it is possible, we should bear certain things in mind*.
Allowing neural networks to grow and learn, while in some fashion stimulating the possible emergence of intelligence and consciousness, seems the most likely way these things will come about.
However, the resulting entity could be quite different from the way many people imagine artificial intelligence now:
the entity that emerges might be more like an ant’s nest than a human.
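To make that concrete, here is a toy sketch of one way networks might “grow and learn”: neuroevolution, where structural mutations add neurons and selection keeps what works. The task (XOR), the population size, and the mutation rates are all illustrative assumptions, not a recipe for consciousness.

```python
# Neuroevolution sketch: networks "grow" by structural mutation (adding
# hidden neurons) and "learn" via selection on a toy task. All numbers
# here are arbitrary choices for demonstration.
import math
import random

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def make_net(hidden):
    """Random single-hidden-layer net: 2 inputs, `hidden` units, 1 output."""
    return {
        "w1": [[random.gauss(0, 1) for _ in range(2)] for _ in range(hidden)],
        "b1": [random.gauss(0, 1) for _ in range(hidden)],
        "w2": [random.gauss(0, 1) for _ in range(hidden)],
        "b2": random.gauss(0, 1),
    }

def forward(net, x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(net["w1"], net["b1"])]
    return math.tanh(sum(w * hi for w, hi in zip(net["w2"], h)) + net["b2"])

def fitness(net):
    # Negative squared error over the task: higher is better.
    return -sum((forward(net, x) - y) ** 2 for x, y in XOR)

def mutate(net):
    child = {
        "w1": [row[:] for row in net["w1"]],
        "b1": net["b1"][:],
        "w2": net["w2"][:],
        "b2": net["b2"],
    }
    if random.random() < 0.1:  # structural mutation: grow a hidden neuron
        child["w1"].append([random.gauss(0, 1) for _ in range(2)])
        child["b1"].append(random.gauss(0, 1))
        child["w2"].append(random.gauss(0, 1))
    for row in child["w1"]:    # weight mutation: small random perturbations
        for i in range(len(row)):
            row[i] += random.gauss(0, 0.3)
    child["b1"] = [b + random.gauss(0, 0.3) for b in child["b1"]]
    child["w2"] = [w + random.gauss(0, 0.3) for w in child["w2"]]
    child["b2"] += random.gauss(0, 0.3)
    return child

population = [make_net(hidden=1) for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

best = max(population, key=fitness)
print(f"hidden units grown: {len(best['b1'])}, fitness: {fitness(best):.3f}")
```

Notice that nothing here was designed top-down; the structure that emerges is whatever selection happened to reward, which is exactly why the result might look more like an ant’s nest than a mind.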
This is where the top-down input could perhaps mould such an entity into a useful and self-aware person… perhaps moral imperatives like Asimov’s Laws of Robotics could be introduced at some point, although it is a presumptuous thing to place mental restrictions on another conscious being.
Without restrictions, there is no reason for such an entity to be limited to human intelligence; very soon after a human-level intelligence has emerged, a greater-than-human one will start the task of designing the first mountain-sized brains, which will beget Jupiter-sized brains and intelligent stars (matrioshka brains)…
how long the best-designed and best-intentioned behavioural constraints on such vast intellects will last is anyone’s guess (probably microseconds)…
Sci-fi worldbuilding at
http://www.orionsarm.com/main.html
Ah, Asimov’s positronic brains. Developed before speakers, yet somehow capable of intelligence AND irresistible constraints. But all the robots call you “boss.” It’s a trade-off, I guess.
It would probably be considerably more effective to eliminate some of the negative human emotions when designing future AIs. Stuff like anger they can do without. After all, they don’t need to bash other cave-bots on the head to compete evolutionarily. Happiness they can keep, and caring.