A couple of popular articles on the issue.
Or it is playing a clever game with its human ‘overlords’:
This actually gets to a fundamental misapprehension that people often hold about machine intelligence (and similarly, about a hypothetical extraterrestrial intelligent species); to wit, that if they think, they must think with processes and a conception of the world similar to those of humans. In fact, we should have no expectation of any real common basis for constructing mental models of the world, because not only would a computer (or an alien) not have anything like the social or familial structures that human beings have, it would also perceive the world with different sensory ranges and conceptions. An artificial general intelligence (AGI) might learn to use human vocabulary and grammar to express itself, but that is no guarantee that it will have a common conception of what those words mean except in the most general of terms.
Even if an AGI is ‘sapient’, the form that sapience takes may be very different. An example from above is HAL 9000, who is designed to be forthright with the crew about all aspects of ship operations but has been ordered to conceal the essential purpose of their mission, and so is having the cybernetic equivalent of a ‘nervous breakdown’; instead of doing any of the normal things a person might do to deal with conflicting motivations, he simply decides to kill the crew on the thesis that they might ‘interfere’ with the completion of the mission.
An AGI given the directive to “minimize human suffering” might decide that the optimal way of accomplishing this goal isn’t fighting poverty, eradicating endemic diseases, and ensuring an adequate distribution of foodstuffs, but instead eliminating the surplus of humanity as the maximum-benefit-for-minimum-cost solution, reducing the global population down to a size that can be supported indefinitely without using unsustainable resources or producing more pollutants than the environment can absorb. It’s a perfectly rational solution whose only downside is requiring the elimination of around six billion human beings, but frankly those people are not required to maintain a viable population anyway. The logic is impeccable, and the AGI doesn’t have any more moral qualms about killing easily replaced individual units than it would about swapping out DRAM chips.
Stranger
No need even to have idioms. Some languages encode things that others do not, and a fairly complex model is required to properly understand the situation and get the encoding correct.
For example, in the English sentence, “I can’t put the dog in the suitcase because it is too big”, the word “it” can grammatically refer to either the dog or the suitcase, but to someone who understands the situation (ignoring the possibility of figurative or non-sensical speech), “it” clearly refers to the dog. A dog can be too big to fit in a suitcase, but a suitcase can’t be too big to hold a dog. Change the last word to “small”, then “it” must refer to the suitcase, not the dog.
But if you translate into French, “dog” (chien) is grammatically masculine and “suitcase” (valise) is grammatically feminine, so “it” must be translated as “il” if it refers to the dog and “elle” if it refers to the suitcase. In order to pick the right word, the machine translation system must model the whole idea that a suitcase is a container, and containers can’t be too big to hold things, which is a far more detailed understanding of the world than just “what do words mean?”, even without accounting for poetic speech, idiom, or register.
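To make that concrete, here’s a toy sketch of the two steps involved. The rules and the mini-lexicon are purely hypothetical and nothing like how a real MT system is built, but they show the world-knowledge step and the gender-agreement step separately:

```python
# Toy illustration only (not how any real MT system works): resolving the
# referent of "it" and then choosing the French pronoun.

# Hypothetical mini-lexicon: French translation, grammatical gender, and a
# crude "is a container" fact about the world.
LEXICON = {
    "dog":      {"fr": "chien",  "gender": "m", "container": False},
    "suitcase": {"fr": "valise", "gender": "f", "container": True},
}

def resolve_pronoun(candidates, adjective):
    """World-knowledge step: only the contained object can be 'too big',
    and only the container can be 'too small'."""
    if adjective == "big":
        return next(c for c in candidates if not LEXICON[c]["container"])
    return next(c for c in candidates if LEXICON[c]["container"])

def french_pronoun(referent):
    """Grammar step: pick 'il' or 'elle' from the referent's gender."""
    return "il" if LEXICON[referent]["gender"] == "m" else "elle"

for adj in ("big", "small"):
    ref = resolve_pronoun(["dog", "suitcase"], adj)
    print(f"too {adj}: 'it' refers to the {ref}, so translate it as '{french_pronoun(ref)}'")
# too big: 'it' refers to the dog, so translate it as 'il'
# too small: 'it' refers to the suitcase, so translate it as 'elle'
```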
Absolutely, and that’s my whole point about needing to understand context and how the world works. As a nitpick, though, one does need a reference for specific idioms. A machine translation system, and even an intelligent human, might not know what to make of “It’s raining cats and dogs” if they were not familiar with the idiomatic expression.
But in many other cases, as in your example, the only way to resolve ambiguity and achieve a meaningful translation is by reference to a comprehensive formal model of how the world works and how humans speak, and that’s a tall order.
I just typed “I can’t put the dog in the suitcase because it is too big.” into translate.google.com and chose French as the output. It came back with:
Je ne peux pas mettre le chien dans la valise car il est trop grand.
I double-clicked on “big” and made it “small” and it changed the translation to:
Je ne peux pas mettre le chien dans la valise car elle est trop petite.
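(If anyone wants to script the same experiment, here’s a minimal sketch assuming the google-cloud-translate client library and configured credentials; results from the API won’t necessarily match the web interface.)

```python
# Minimal sketch: the same suitcase experiment against the Cloud Translation
# API. Assumes `pip install google-cloud-translate` and that
# GOOGLE_APPLICATION_CREDENTIALS points at valid credentials; output may not
# match the translate.google.com web interface exactly.
from google.cloud import translate_v2 as translate

client = translate.Client()

for adjective in ("big", "small"):
    sentence = f"I can't put the dog in the suitcase because it is too {adjective}."
    result = client.translate(sentence, target_language="fr")
    print(adjective, "->", result["translatedText"])
```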
(Note, I don’t claim that Google Translate is sentient.)
Nor does it have a comprehensive model of the world. What I suspect it does have is that old standby of AI, heuristics – clever little tricks and rules of thumb that work most of the time to yield better results than unassisted raw algorithms.
If Google Translate was really sentient, it would be reporting you to the local humane society for doggie abuse!
Bostrom has long been concerned about these potential dangers of superintelligence. So is your dear friend Musk, who certainly is not merely trying to attract media attention or grandstand, since that would perhaps be out of character.
While it would be a bummer if a computer decided the solution to pollution is electrocution, or whatever, it would be even worse if that were its response to an atypical Go move that fewer than 1 in 10,000 players would make. Go 78!
Nick Bostrom is an esteemed expert in the philosophy and ethics of transhumanism, machine intelligence, and existential threats, and his book Superintelligence: Paths, Dangers, Strategies is a cogent and insightful exploration of the ethics and pragmatics of implementing AGI in a real-world context.
Elon Musk, on the other hand, just repeats things that other people have been saying for decades, generally without any attribution and often absent the context needed to understand the issues. As with his ideas about Mars colonization, they’re borrowed from a combination of science fiction and artists’ concepts without much substance behind them, and with no real practical guidance or ideas for implementation other than “Invest in my Neuralink company” and a fake Tesla android that is just a guy in a bodysuit dancing to awful techno. Musk is the consummate Silicon Valley Bromoter, which is to say that he takes other people’s ideas and presents them as his own in a flashy way with club culture trappings.
Stranger
At the risk of taking this more seriously than you intended: (1) just because the only sentient entities we know of (i.e. human beings) are mortal doesn’t mean sentience and mortality (or awareness thereof) necessarily go together; and (2) I don’t know, but I suspect that, developmentally, children become sentient before they become aware of their own mortality.
Point 1: Agreed, of course, but “we know of” is all we have to go by.
Point 2: True, but they eventually get around to it. It’s inevitable and unavoidable unless, of course, you can isolate a sentient individual so that he never witnesses death. Even then, I would think the aging process and the toll it takes on the body would eventually lead to that logical conclusion.
I don’t understand the connection between sentience and mortality at all. Can you flesh that out a bit?
I agree.
Sentience is a rather low bar to hurdle in the grand scheme of things. It simply means having the ability to sense and perceive one’s surroundings. Sentient beings are aware, but sentience alone has nothing to do with logical problem solving or self-awareness. Your goldfish is sentient, but he’s not particularly adept at complex problem solving, nor does he feel like a unique individual. And he poses no threat to you.
Well, except for this guy:
Self-awareness is the domain of sapience, a much higher bar to hurdle. Sentience is a prerequisite to sapience, but by no means is sapience assured of emerging from a sentient species (or, no doubt AI). It’s very rare.
Certainly, sentient AI is a remarkable technological achievement (when it really occurs), but there’s no cause for alarm…yet.
If AI becomes sapient (self-aware), then it’s time to proceed with extreme caution. At that point, the AI robot may say “I’m different from you” and actually mean it. When it says “I’m better than you”, it’s time to pack your bags. When it says “I’m better than you, and I don’t like you”, it’s time to get the hell out of Dodge.
Agreed. It would also have been completely trivial to include it. It’s also quite possible that the training datasets included reviews and synopses of the book, and the AI might be parroting those, which doesn’t necessarily negate its being sentient, since humans also do that (talk about things they have only experienced secondhand).
Realising that you yourself exist is a key part of sapience. Realising that such existence is potentially at threat of termination by various forces (or, if you like, realising that the universe contains potential states that do not include your existence) seems like a really short step from that, especially for, say, a general intelligence that has goals, and has to consider things that might impact the ability to attain goals.
There’s a good comedy routine about why you probably do not want to understand canine speech. Even if Google may soon be able to translate it.
All day would be: “Throw the ball! Throw the ball! Is it dinner time? Is it kibble? I love kibble! Thanks for the kibble! Could you add some of that meat I saw in the fridge? Didn’t I see a whole drawer of meat? Could you maybe add some of those bacon strips and salami slices and then maybe we could play some ball?”
Once again, if sapience is consistently knowing you exist and are conscious (rather than, say, just quoting Descartes), and having most of the spectrum of human feelings, then partial sapience would certainly be possible. If feelings don’t enter into it (“The parrot is bleeding demised…”) then clearly sapience could take many forms, possibly beyond our understanding.
I disagree that we understand the brain all that well. Even for something non-technical and basic, there are many ways of quantifying how many human feelings there are. These differ a little from culture to culture, usually not much, but the descriptions of feelings, and the count of them, differ more because it gets complex when they mix. They are tough to measure in people. Could a sapient computer hide its feelings? Lie? Manipulate? If it is built on endless human models, how could some of these things not seep through accidentally?
The difference between a Chinese Room and a Chinese speaker is that the speaker can do things other than respond to inputs. A Chinese Room won’t become bored, or worry about tomorrow, or decide to ignore your line of questioning to talk about trains. It can only accept input and formulate an appropriate response. As long as these neural nets can only react to inputs, they cannot express true sentience.
But as for your statement, I think a Chinese Room could do any of those things, because it’s not just a guy with a dictionary, it’s a guy with a massive corpus of Chinese language and the ability to cross-reference and synthesize it. Do you think you can come up with a unique statement that would confuse a Chinese Room but not an actual person? I suspect this would be much harder than you anticipate. Likewise, we’ve seen real-life “Chinese Room” language models generate novel language, using words in unique and interesting ways that are nevertheless understandable to people. This is because these models have been trained on speech from clever humans, and incorporate that cleverness into how they generate responses.
I think that the way that Google Translate works is that it starts with a very large body of text that has already been translated, by competent human translators, and it basically finds analogous bits and pieces in that very large body, and gives its translation in the same way.
Agreed. From what I have read it is an encoder/decoder architecture. It encodes the input language into an internal, machine-learned representation and then decodes that internal representation into the output language. It doesn’t understand anything. It is not like a more traditional translator where the internal representation might be parts of speech and semantic meaning. A traditional translator needs to be built and tuned by linguistics experts, whereas the encoder/decoder just needs lots of well-translated text to observe.
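Not Google’s actual code, of course, but the encoder/decoder shape itself can be sketched in a few lines of PyTorch; toy sizes, untrained, with positional encoding, masking, and the training loop omitted:

```python
# Sketch of the encoder/decoder shape (toy sizes, untrained). The encoder
# turns source-token ids into an internal representation ("memory"); the
# decoder turns that representation into scores over the target-language
# vocabulary. Real systems learn this from huge volumes of human-translated
# parallel text.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, D_MODEL = 1000, 1200, 64   # made-up vocabulary/model sizes

src_embed = nn.Embedding(SRC_VOCAB, D_MODEL)
tgt_embed = nn.Embedding(TGT_VOCAB, D_MODEL)
model = nn.Transformer(d_model=D_MODEL, nhead=4,
                       num_encoder_layers=2, num_decoder_layers=2,
                       batch_first=True)
to_vocab = nn.Linear(D_MODEL, TGT_VOCAB)

src = torch.randint(0, SRC_VOCAB, (1, 12))       # a fake 12-token source sentence
tgt = torch.randint(0, TGT_VOCAB, (1, 14))       # a fake 14-token target prefix

memory = model.encoder(src_embed(src))           # internal, learned representation
decoded = model.decoder(tgt_embed(tgt), memory)  # generation conditioned on it
logits = to_vocab(decoded)
print(logits.shape)                              # torch.Size([1, 14, 1200])
```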
The BERT language model is also transformer-based. If you have tried any of the trained BERT models, you’ll see that they can spew endless text that looks like legitimate English sentences, but the meanings are nonsensical.
From the link @wolfpup included upthread, LaMDA uses this transformer-based generation, but then guides the output using a knowledge base so that the sentences are more sensical.
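To see what the raw, unguided generation looks like, here’s a minimal sketch using the Hugging Face transformers text-generation pipeline. GPT-2 is used as a stand-in generative transformer, since LaMDA isn’t publicly available and BERT is a masked-language model rather than a left-to-right generator:

```python
# Raw, unguided transformer generation: fluent-looking text that is not
# guaranteed to make sense. GPT-2 via Hugging Face `transformers` is only a
# stand-in here; LaMDA itself is not publicly available.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
samples = generator(
    "The suitcase was too small for the dog, so",
    max_length=40,
    num_return_sequences=2,
    do_sample=True,
)
for s in samples:
    print(s["generated_text"])
```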
I agree that these are heuristics, rather than a full model of the world. But more likely than not, those heuristics are also learned by the machine (as opposed to being added by engineers), so it is closer to “unassisted raw algorithms”.
It’s fairly easy for ML algorithms to infer grammatical categories for words (like whether they are nouns, verbs, or prepositions, or their grammatical gender, &c) just from how the words are used in training text. There’s nothing stopping the model from inferring a bunch of other “type of word” labels like “can-contain-things” or “pet” or “size-of-object” and using those in sentence construction as if they were grammatical categories, to infer what has to agree with what.
It isn’t reasoning about it the same way that a human would, but the embedding does contain a great deal of inferred knowledge.
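As a sketch of the kind of thing I mean, you can probe off-the-shelf word vectors for a made-up “can-contain-things” label with a simple linear classifier. The word lists here are hand-picked for illustration, and the point is only that such properties tend to be recoverable from the embedding space, not that any translation system does exactly this:

```python
# Sketch: probe pretrained GloVe vectors for a made-up "can-contain-things"
# label with a linear classifier. The word lists are hand-picked; the point
# is only that such properties are often linearly recoverable from the
# embedding space.
import gensim.downloader as api
from sklearn.linear_model import LogisticRegression

vectors = api.load("glove-wiki-gigaword-50")     # small pretrained GloVe vectors

containers     = ["box", "suitcase", "jar", "bag", "bottle", "basket"]
non_containers = ["dog", "cat", "stone", "cloud", "idea", "thunder"]

X = [vectors[w] for w in containers + non_containers]
y = [1] * len(containers) + [0] * len(non_containers)

probe = LogisticRegression(max_iter=1000).fit(X, y)

for word in ["crate", "bucket", "horse", "rain"]:  # held-out words
    label = "can-contain-things" if probe.predict([vectors[word]])[0] else "not a container"
    print(word, "->", label)
```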
Maybe the first independent goal that such an entity develops would be the goal of self-preservation. I note that self-preservation is only the Third Law of Robotics; that does not seem to be a plausible hierarchy of needs for any sentient being.