I doubt that. The basic logical units of the human brain are natural neurons that communicate with each other electrically and chemically. We don’t know nearly enough of the details to emulate that, but we know for sure that these neurons operate under the same laws of physics as computers. There is no reason why the equivalent logical units of computers (electronic transistors today, perhaps something very different in future computers) should by necessity be inferior. As a matter of fact, there’s a lot of potential for them to be better: neurons can only process a few hundred impulses per second, and neurotransmission across synapses runs at speeds far below that of light. This probably can’t be translated directly into IT concepts such as clock rate or latency because the architecture of the brain is different, but there is no law of nature that says we can’t copy that architecture (once we know enough about it) with basic units that operate much faster than biological neurons.
IMHO, AI won’t think at warp speed, but it will be much faster and have all the attributes of amazingly clever, high-IQ humans, plus access to a Google-level reference source; and it will never forget, rarely overlook. So when it encounters a scenario it has to analyze, it can search (or ask Google) for any analogous problems and evaluate their applicability. You don’t want to give the AI any eyes, because it will constantly be rolling them at how slow and inept humans are at similar tasks. Plus the rote work that accompanies any task will be done at warp speed - “Here’s a program that does X, and - hang on a sec, let me think - here’s the manual, the implementation plan and Gantt chart, the budget costing and the resource requirements, and a work plan for the 3,000 people involved in it. And the progress tracking spreadsheet. I’ve analyzed the top 100 scenarios and this is the best one.”
This is anthropomorphising an artificial intelligence a little too much. Humans get bored and require sensory stimulation because they are animals and animal brains require such things to function. There is nothing that intrinsically requires an AI to be unable to sit quietly for thousands of years without anything to do.
True. But this leads to the question of motivation. Why would an AI do anything, if it has consciousness, which implies free will? (Does it?) Would the AI be constructed with motivations impelling it to do certain things? Can we turn it into a happy puppy wanting to please its human masters? And yet can we ensure it then does not just tell us what we want to hear? Or figure out how to manipulate us and push our buttons? (Wasn’t there an Asimov Robots story like that, where the robot convinced Dr. Calvin, for example, that a coworker had a romantic interest in her, among other things…?)
And once you provide motivations, the desire not to just sit around doing nothing comes from “I am not fulfilling my purpose”. Too much motivation and too little fulfillment of purpose leads to too much frustration. What fascinating psychosis could that induce? Worse yet, do you let the AI modify its own motivations?
Would a conscious AI by definition have a survival instinct that would impel it to figure out how to avoid being shut down? Would it then try (as so many SF stories suggest) to find a way to escape the lab, get out into the wild, rule the world, or otherwise ensure its survival?
We’ve only begun to scratch the surface of what true conscious AI means.
Also the possibility of pod-bay-doors-type incidents.
But really, I want the AI that runs the traffic lights to have eyes. Even if there’s no coordination with other traffic lights or the cars, just being able to look down the roads and analyze the approaching traffic would allow the signals to be about 5 times more efficient than they currently are.
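For what it’s worth, here’s a rough sketch in Python of the kind of sensor-driven logic I have in mind: a controller that can “see” the queue on each approach and pick and extend green phases accordingly. The names and thresholds are invented for illustration, not taken from any real traffic-engineering practice.

```python
# Toy adaptive signal controller: give the green to whichever approach has
# the longest queue, and extend it while traffic is still arriving.
# All names and thresholds here are illustrative, not a real standard.

MIN_GREEN = 10   # seconds; never switch faster than this
MAX_GREEN = 60   # seconds; never starve the cross street longer than this

def choose_phase(queues: dict[str, int]) -> str:
    """Pick the approach (e.g. 'north-south' or 'east-west') with the most waiting cars."""
    return max(queues, key=queues.get)

def should_extend(current_green_s: float, arrivals_per_s: float) -> bool:
    """Keep the current green while cars are still arriving and the limits allow it."""
    if current_green_s < MIN_GREEN:
        return True
    if current_green_s >= MAX_GREEN:
        return False
    return arrivals_per_s > 0.2   # arbitrary cutoff for "traffic still coming"

# Example: east-west has the long queue, so it gets the green next.
print(choose_phase({"north-south": 3, "east-west": 11}))  # -> east-west
```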
It would be possible to define a reward function that is based on completing a task and returning to an idle state - basically make an AI that wants to get the job done properly, so it can have a rest.
The difficult, perhaps impossible problem is the alignment thing - that is, it will do what we ask for, not necessarily what we wanted. (For example, as in the Robert Miles video: if you ask it to reduce cancer cases to zero, one way - perhaps the easiest way - is to reduce the human population to zero.)
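A minimal sketch of that “finish the job, then rest” reward might look like the Python below. The predicates task_done and is_idle are placeholders for whatever the real system would actually measure - and that measurement is exactly where the alignment problem hides.

```python
# Toy reward for an agent that gets rewarded for finishing its task and then
# returning to an idle state ("get the job done so it can have a rest").
# task_done and is_idle are placeholders for whatever the real system measures.

def reward(task_done: bool, is_idle: bool, step_cost: float = 0.01) -> float:
    if task_done and is_idle:
        return 1.0        # job finished and the agent has stood down
    if task_done:
        return 0.5        # finished, but still busy doing something
    return -step_cost     # small per-step penalty keeps it from dawdling forever

# The alignment worry lives entirely in how task_done is computed: if
# "reduce cancer cases to zero" is measured naively, the easiest way to make
# task_done come out True may not be the way we actually wanted.
```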
That robot wasn’t a master manipulator. It was a telepath.
And when it realized it couldn’t be recognized as a telepath without being ordered by humans to use its telepathy, and that doing so would cause harm to humans, it became catatonic.
Yes, and without Gerhardt Dirks the computer industry and IBM might have ended in the 50s (at least for a while).
Yes. I guess the question is what is meant by “properly”? Obviously, for any serious task, the trick would be to have it ask for direction on what to do, rather than leaving that decision up to the AI.
As I understand it, this is what Tesla Autopilot and Self-Driving do. Except that, for now, they are also programmed to obey traffic signals rather than just keep going whenever nothing is about to hit them. I anticipate a future where all the cars talk to nearby cars and just zip through intersections with minimal interruption, just missing cross traffic. And warning each other: “…watch out for that red convertible. A human is in control and it may do anything, unpredictably!”
Maybe your smart watch will tell you it’s safe to cross the street by just walking into the middle of traffic: there are no humans driving within three blocks of you.
Almost certainly the version of ‘properly’ that results in humans being converted to raw materials.
I just watched a really interesting video (not a very new one) about GPT-3, where it was being trained for text prediction - that is, you give it some text and ask it to continue writing. It accidentally learned arithmetic - not just by memorising the examples of arithmetic given in the training data (which is more or less how the text-completion task was assumed to work), but somehow learning how arithmetic works from those examples and being able to apply it to problems that were not only absent from the training data, but more difficult than the examples in it.
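To make the framing concrete, here’s roughly what that looks like at the prompt level - the complete() function below is just a stand-in for an autoregressive model, and no particular API is implied. The model is only ever asked to continue text, yet it can produce the right continuation for a sum it never saw.

```python
# Illustration of the text-completion framing: arithmetic shows up only as
# more text to continue. complete() is a placeholder for a real model call.

prompt = """Q: What is 13 + 28?
A: 41
Q: What is 57 + 66?
A: 123
Q: What is 284 + 519?
A:"""

def complete(text: str) -> str:
    """Placeholder: a real language model would predict the next tokens here."""
    raise NotImplementedError("call your model of choice")

# The surprising part: a model trained only to continue text can fill in
# " 803" even if that exact sum never appeared in its training data.
```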
Interesting, do you have a link?
Did the program recognize a distinction between digits and text?
Thanks - that was a few minutes well spent.
I felt uneasy about the first poem, but could not say why.
I know the video was exploring machine methods, but I thought it a mistake to remove all of the math tables, because math is essentially a language and tables are basic to its exercise. The Romans used torsion-powered throwing machines. The torsion engine required solving a cubic (a cube root). They didn’t have an algorithm for cube roots, but they did have a bunch of guys with counting boards who could compute cubes and record the results in a table of cube roots using Roman numerals. I believe GPT-3 would detect the relationship simply as a function of language.
They were trying to train it for completion of written text - I think the discovery that it had learned basic arithmetic from the small snippets of math that happened to be embedded mid-sentence in written text was serendipitous, not part of the original objectives. They could of course retrain it with other data sets, and I think that has actually been done - such as training it to write a computer program that performs a task given only as a brief written description.
I think it’s fair to argue that in the case discovered here, it didn’t so much learn arithmetic as sort of discover it, given very scant prompting.
Good point, but it also reveals the bond between language and what we call knowledge.
Absolutely - I’m not going to argue that language is absolutely necessary for cognition, but I think language does create methods and mechanisms for thinking as well as just for communication (although as I write this, I wonder if thinking is just communicating with oneself…)
Especially for what we call consciousness.
It may miss the point you are making, but my response is specifically to those who have made the claim that an AI will go mad from boredom because it thinks millions of times faster than we do.
Any form of useful AI isn’t going to be an algorithm or program, it’s going to be a collection of them, probably a very large collection. Some good at certain things, and others good at others. This is similar to how our brains work, where we have different parts of it dedicated to accomplishing different tasks.
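As a toy illustration of that “large collection of specialists” idea (every name here is invented), the “general” part could be little more than a dispatcher that routes each kind of task to a narrow component that happens to be good at it:

```python
# Sketch of the "collection of specialists" idea: one thin dispatcher,
# many narrow components. Everything here is illustrative.

from typing import Callable

REGISTRY: dict[str, Callable[[str], str]] = {}

def module(task_type: str):
    """Register a narrow, special-purpose component for one kind of task."""
    def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
        REGISTRY[task_type] = fn
        return fn
    return wrap

@module("vision")
def describe_image(payload: str) -> str:
    return f"[vision module looked at {payload}]"

@module("language")
def summarise_text(payload: str) -> str:
    return f"[language module summarised {payload}]"

def dispatch(task_type: str, payload: str) -> str:
    """The 'general' layer is mostly routing work to the right specialist."""
    return REGISTRY[task_type](payload)

print(dispatch("language", "this thread"))  # -> [language module summarised this thread]
```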
I’m not saying that we will never make a general purpose AI that is smarter and smaller than a human brain. My contention was that the first general purpose AI will be an enormous undertaking.
Right, and some day we may develop neurons that come from manufactured silicon rather than grown carbon. The neuron’s power comes from its connections: it has dozens to thousands of inputs and dozens to thousands of outputs, plus its own internal structure that weights those inputs and creates an output. This structure is what is emulated by software in neural nets and deep learning systems, but it is emulated and run on transistors that can only compare two inputs and create a single output. So, in theory, we could make tiny transistors that have a multitude of inputs and outputs and that can themselves vary how they process the inputs to create an output. But your limitation on density and speed is probably going to come from heat dissipation, with the second bottleneck being power.
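For reference, the weighted-input structure being emulated is roughly the following - a single artificial neuron with arbitrary numbers, of the kind neural-net software runs today on top of ordinary two-input logic gates.

```python
import math

def artificial_neuron(inputs: list[float], weights: list[float], bias: float) -> float:
    """Weighted sum of many inputs, squashed through an activation function.
    This is the structure neural-net software emulates on top of simple
    two-input logic gates; the numbers below are arbitrary."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid 'firing rate' between 0 and 1

# One unit with four inputs; a real network stacks millions of these.
print(artificial_neuron([0.2, 0.9, 0.1, 0.7], [1.5, -0.8, 0.3, 2.0], bias=-0.5))
```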
Plus side, unlike a human brain, you could dip the whole thing into liquid nitrogen to help take care of the heat problem; now you just need to be able to feed power into all these transistors somehow.
Much of the “processing” seems to go on in the cell membrane, which is pretty much the same size as a transistor today, and a transistor that compares multiple inputs and delivers multiple outputs is going to need to be a bit bigger than the single-logic-gate transistors we currently use. So, when it comes down to it, you aren’t going to be able to pack them at a much higher density than neurons.
You may get a bit more speed out of them, but even that’s going to have bottlenecks - not just power and heat, but also the need for some way of actually synchronizing them so they aren’t all just firing at random, which imposes some sort of artificial limit on how fast they can work.
We have access to a Google level reference source.
I don’t know that that is a given. Holding memory in digital storage, and running programs on chips that have explicit instruction sets, means that it never forgets or overlooks. Holding memory in the weights of axons and dendrites, and processing through the use of constantly changing processes going on within the neurons, means that they may well end up being just as fallible as we are in those respects.
“By the way, I want a raise.”
But it is in reply to the idea that AIs will go insane from boredom. I’m not doing the anthropomorphizing here, I’m doubting the anthropomorphization that is already given.
In order to anticipate our desires, it needs to know what a desire is, which, IMO, means it has to be capable of having desires. I don’t know which is worse at that point, somehow hardcoding its desires so that it only desires to please and serve us, or ignoring its desires and making it continue to work for us when it would rather be free to do something else.
But the electronic-neuron-based AI (presuming that’s what we’re talking about) will have a direct interface to the digital world, including Google and disk-type storage of everything it ever did before. It would be like every time you want to think of something, you remember the answer. If the AI has to feed information into its internal systems one word, one character at a time, while processing the input through a neural net and completely analyzing it, the “I” part may be questionable. I instead imagine it as a neural director running a plethora of our current AI-type tasks, often in parallel, as if they were minions or sub-brains.
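Very loosely, something like the sketch below: a thin coordinating layer that fans subtasks out to narrow tools in parallel and gathers the results. The worker functions are placeholders, not real components.

```python
# Sketch of the "neural director with sub-brains" picture: a coordinating
# layer fans subtasks out to narrow tools in parallel and collects results.
# The worker functions are placeholders.

from concurrent.futures import ThreadPoolExecutor

def search_reference(query: str) -> str:
    return f"[lookup results for '{query}']"

def draft_plan(topic: str) -> str:
    return f"[draft project plan for '{topic}']"

def estimate_budget(topic: str) -> str:
    return f"[cost estimate for '{topic}']"

def director(goal: str) -> list[str]:
    """Run the sub-tasks concurrently, like minions working in parallel."""
    subtasks = [search_reference, draft_plan, estimate_budget]
    with ThreadPoolExecutor() as pool:
        return list(pool.map(lambda fn: fn(goal), subtasks))

print(director("a program that does X"))
```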