Google employee says they have sentient AI

It’s also interesting to note that despite their dog-like appearance, hyenas are more closely related to cats.

That would be the last question I would be worried about.

IIRC, Skynet was designed to remove the possibility of human error, which — hey, wait a minute.

Skynet WAS the human error! LOL

“Best service of mankind” is in the eye of the beholder. I think liquidating 2/3 of the earth’s population would do wonders, but I haven’t been able to garner much support for that.

Interesting – I did not know that. Homo sapiens means ‘wise man’ or ‘knowledgeable man’ in Latin according to Wikipedia.

So a sick burn would be to call someone a Homo sentient.

In retrospect, our teachers made too big a deal about Skylab falling. They made us wear Skylab repellent, had us wear sunglasses and practice hiding under our desks. Seriously. Since it fell many thousands of miles away, I think we might have been okay with less equipment.

…What, exactly, is “Skylab repellent”?

It looked suspiciously like normal moisturizer. However, it clearly did a wonderful job, since we easily avoided it. I had forgotten about this, but thanks for rekindling the memory. I remember where I was on that day better than most because of it.

An encoder/decoder paradigm doesn’t imply it was trained by supervised learning. Most current NLP models are trained unsupervised (BERT, GPT-3). I’m not sure what Google currently uses for Translate; their Wikipedia page says Neural Machine Translation, which is LSTM-based. I suspect that’s been superseded by a transformer architecture.

Just FYI, you would say that the input is encoded to an internal representation and decoded to the output rather than vice versa.

Could you clarify what you mean by “unsupervised”? You mean the computer is seeded with some programs but is given access to a large, specific database of relevant material from which it can study, play games with itself, look at real-world examples, etc., without the initial paradigm being updated?

If a computer asks for its architecture to be upgraded, is it sentient?

Thanks for the correction on the encode/decode terminology.

Agreed that transformers primarily use unsupervised learning with unlabeled data. It’s a big part of their power – you can throw a ton of data at them since you don’t have to label it. I think there is a technique to fine-tune the weights afterward with supervised training and labeled data.
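
Roughly, that fine-tuning step looks like the sketch below. This is just a minimal illustration assuming PyTorch, the Hugging Face transformers library, and a public BERT checkpoint as a stand-in; LaMDA’s actual training code and data aren’t public, and the tiny labeled dataset here is made up.

```python
# Fine-tuning a transformer that was already pretrained on unlabeled text.
# Assumes: pip install torch transformers. "bert-base-uncased" is just a
# publicly available stand-in; LaMDA's checkpoints are not public.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # pretrained weights + a new classifier head
)

# A (made-up) labeled dataset for the supervised fine-tuning phase.
texts = ["the movie was great", "the movie was terrible"]
labels = torch.tensor([1, 0])  # 1 = positive, 0 = negative

inputs = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):  # a few passes over the tiny dataset
    outputs = model(**inputs, labels=labels)  # loss is computed against the labels
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```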

In typical ‘supervised’ learning, when you are building the algorithm you have to feed it both the inputs and the corresponding outputs so that the algorithm can be taught ‘when you get X, you spit out Y.’

For machine learning, you usually want lots of data so that your algorithm covers as many cases as possible. This means you have to gather up lots of input data and then go through it and figure out the corresponding output data.

If you were creating an algorithm to recognize objects, you’d take a bunch of pictures of objects and then label the pictures with the object (shovel, car, tree). It’s a pain, but doable.

If you were creating an algorithm to translate English to French, you would need tons of English sentences and their corresponding French translations. This is a much bigger pain because there are so many variations, there isn’t an absolute correct answer, and the context influences the translation.
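
To make the supervised case concrete, here’s a toy sketch using scikit-learn. The feature vectors standing in for ‘pictures’ and the labels are made up purely for illustration; a real object recognizer would work on actual image data.

```python
# Toy supervised learning: every training input comes with a human-provided label.
# The feature vectors here are made-up stand-ins for real image features.
from sklearn.linear_model import LogisticRegression

X_train = [
    [0.9, 0.1, 0.2],   # features from a picture of a shovel
    [0.2, 0.8, 0.1],   # features from a picture of a car
    [0.1, 0.2, 0.9],   # features from a picture of a tree
    [0.8, 0.2, 0.3],   # another shovel
]
y_train = ["shovel", "car", "tree", "shovel"]  # labels someone had to write down

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# At prediction time we only supply the input; the "Y" comes from the model.
print(clf.predict([[0.85, 0.15, 0.25]]))  # hopefully: ['shovel']
```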

In ‘unsupervised’ learning you create the algorithm by just feeding it inputs without the corresponding outputs. One way you might do this is to make your algorithm consist of two sub-algorithms. The first sub-algorithm solves the problem you want and the second sub-algorithm ‘unsolves’ the problem back to the original inputs. Then you feed it tons of data so that it gets really good at both sub-algorithms. Finally you throw away the second sub-algorithm and just use the first.

For a speech recognition problem, you might feed audio clips of speech into your algorithm and have the first sub-algorithm convert the speech into text while the second sub-algorithm converts the text back into speech. You compare the original input with the output of the second sub-algorithm and stop training when the two are similar enough.
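
That ‘solve, then unsolve back to the input’ setup is basically what’s called an autoencoder. A minimal PyTorch sketch of the idea follows; the random data standing in for audio features and all the layer sizes are made up, and a real speech model would be far more elaborate.

```python
# Minimal autoencoder: sub-algorithm 1 (encoder) compresses the input,
# sub-algorithm 2 (decoder) tries to reconstruct it. No labels needed --
# the input itself is the training target.
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(64, 8), nn.ReLU())  # "solve": 64 -> 8
decoder = nn.Sequential(nn.Linear(8, 64))             # "unsolve": 8 -> 64

data = torch.randn(256, 64)  # stand-in for, say, frames of audio features
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

for _ in range(100):
    reconstruction = decoder(encoder(data))
    loss = loss_fn(reconstruction, data)  # compare the output to the original input
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Afterwards the decoder can be thrown away; encoder(data) is the useful
# learned representation.
```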

I don’t know a lot about transformer-based architectures like the one LaMDA uses, but they seem more complicated than the speech recognition example. There seems to be more guidance given to the middle representation (the output of the first sub-algorithm and the input of the second sub-algorithm).

There are lots of other ‘unsupervised’ techniques. For example, if you are trying to create a face recognition algorithm you could feed it lots of faces without labeling who they are. The faces then get grouped (or clustered) into similar faces. So the algorithm says ‘I don’t know who this face is, but I think all of these faces are the same person’. Then you guide the algorithm to learn those groupings with the characteristics that you want.
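
Here’s a quick sketch of that grouping idea using k-means clustering from scikit-learn. The random vectors standing in for face embeddings are made up; a real system would get them from a face-embedding network.

```python
# Unsupervised grouping of faces: no names are supplied, the algorithm just
# puts similar face embeddings into the same cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
embeddings = rng.normal(size=(300, 128))  # stand-ins for 300 face embeddings

kmeans = KMeans(n_clusters=5, n_init=10, random_state=0)
cluster_ids = kmeans.fit_predict(embeddings)

# cluster_ids[i] says "face i looks like the other faces in this group"
# without ever saying whose face it is. A human (or a later supervised step)
# can then attach names to whole clusters at once.
print(cluster_ids[:10])
```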

The high-level concepts are pretty straightforward, but the devil is in the details when it comes to creating a successful algorithm.

Thanks for the detailed explanation. That makes a lot of sense.


Not sure I follow. On panpsychism, the matter you feed the human would be conscious, but its consciousness would only be of a very basic sort—it wouldn’t think, or reason, or have emotions, or even perhaps have distinct conscious states, there would just be a very rudimentary way ‘it is like’ to be that matter. Only once that matter is shackled to performing some organized, complex tasks—like presumably those leading to thought and emotion and perception and so on—does a more complex, unified conscious experience arise. Thus, as the matter becomes part of the human, so does the matter’s conscious ‘pole’, at least for those parts of the matter which become relevant to, say, brain processes or the like. What, precisely, unifies the consciousness of all those bits of matter a brain consists of into that of a human is incidentally where most of the debate happens—the so-called ‘combination problem’. Despite this, panpsychism is fast becoming a major approach in contemporary philosophy of mind.

Ah, so it’s just a relabeling, where you use the word “consciousness” for what used to be called “existence”, and use the word “complex, unified consciousness” for what used to be called “consciousness”. I’m not sure I see the advantage: It doesn’t make it any easier to figure out where complex, unified consciousness comes from, it’s more verbose, and there’s more room for confusion because you’re arbitrarily changing the meaning of words.

…no, not at all, why would you think so? Existence and consciousness are quite separate on panpsychism. Everything that exists is conscious, true, but that doesn’t make them the same thing.

Perhaps the problem is semantic. When philosophers talk about consciousness, they have something very specific in mind—not just ‘whatever stuff goes on in your head’. Rather, it’s the basic experience of something: if you look at, for instance, just a red wall, and have no other concurrent experience, then you have a basic red experience—there’s nothing in there regarding thought, or emotion, or whatever; you just experience ‘what it’s like’ to see red—a basic red quale. It’s that sort of experience panpsychism attributes to matter—not of seeing red, since matter lacks the perceptual apparatus, but whatever experience goes along with, say, being a carbon atom that’s part of an aromatic ring—whatever that’s like.

And how is that anything other than a relabeling of “existence”?

I’m really not getting the point you’re trying to make. Anything can exist without there being any conscious experience attached to it; it’s just that on panpsychism, everything that in fact does exist has conscious experience. But that doesn’t make the two the same, in any sort of way, any more than if anything that existed had mass, ‘mass’ would just be a relabeling of ‘existence’.

You need some very rigorous definition of “conscious experience” to make that a meaningful statement.