Have there been any studies to find out whether a sentient, artificially intelligent machine would dream or have subconscious desires?
Since nobody has yet succeeded in making one, we don’t know.
Since there’s no way to objectively determine if a machine (or another human) has any kind of real inner thought life like your own, we can’t know.
Not being a slave to chemical metabolism, our hypothetical AI may not need sleep at all.
But I tend to think that the first successful AI will be one that arises, child-like, in a machine designed to produce a mind - in that case, the intelligence wouldn’t necessarily have direct conscious access to its own low-level hardware functions, in exactly the same way that we are completely unaware of the firing of our individual neurons. There’s no reason why a ‘grown’ AI couldn’t have a mind every bit as complex as a human’s (and more), complete with odd bits it didn’t know about, plus forgetfulness, opinion, irrationality, stubbornness, etc…
But the real question is, do androids dream of electric sheep?
Well, there aren’t any yet, so there is no way of knowing; but I would suggest that
yes, an AI would dream and have subconscious desires if it was designed to have them;
and alternatively, it may be possible to design a machine which is intelligent and self-aware, yet has no part of its own consciousness which is inaccessible to its own self-examination; auto-science or auto-sophance, some of us in the SF world call it;
However it would be impossible for an entity to constantly examine every aspect of its own internal dialogue in minute detail;
you would end up with a withdrawn, self-absorbed, dysfunctional AI;
but that is the sort of challenge that lies ahead, I believe.
And that would be a bad thing how… ?
- August “An AI in mine own image” Derleth
thanks for the input, but now i’ve got another question: do we really know enough about psychology to know how to program a subconscious into an AI?
I think you might believe we have a bit more knowledge of AI in this world than we really do… at this point it’s mostly just searching and fancy searching. We don’t know anything about anything well enough to program anything into an AI. We can’t program a subconscious, conscious or superconscious into an AI… we can’t program an AI into an AI. The best we can do is make a big long list of “if X then Y” rules and use it to diagnose blood diseases.
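The “big long list of ‘if X then Y’” style of system described above is the classic rule-based expert system (the best-known example, MYCIN, did in fact diagnose blood infections). A minimal sketch of the idea, with all rule names and symptoms invented purely for illustration:

```python
# Toy rule-based "expert system": each rule is a set of required facts
# plus the conclusion it licenses. All facts/conclusions are made up.
RULES = [
    ({"fever", "low_white_cell_count"}, "possible_infection_A"),
    ({"fever", "rash"}, "possible_infection_B"),
]

def diagnose(observed_facts):
    """Fire every rule whose conditions are all present in the input."""
    observed = set(observed_facts)
    return [conclusion for conditions, conclusion in RULES
            if conditions <= observed]  # <= is subset test

print(diagnose(["fever", "rash"]))  # ['possible_infection_B']
```

The point the poster is making is visible in the sketch: the “intelligence” is nothing but an exhaustive lookup over hand-written conditions, with no understanding anywhere in the loop.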
We know very little about psychology, consciousness, information, meaning, intelligence, complexity.
It’s going to make the next few decades quite interesting!
I’ll be sure to tell all the AI and cognitive sciences people who work for me that that’s all there is to their field. I’m sure they’ll be impressed.
The truth is actually a bit more sophisticated than that, although your fundamental point that we’re a long way from any form of consciousness is still quite accurate. Research in the field is divided into two categories: weak AI and strong AI. The former refers to systems which are designed to appear intelligent (such as the software my company develops), while the latter refers to systems that actually embody a form of intelligence (and what that precisely means is the subject of its own wide-ranging controversy).
My company’s products are ‘digital human agents,’ which allow human-like interaction between users and the system. We use AI technology and techniques in our natural language processing and reasoning engine sub-systems. The algorithms in these components would be very difficult for someone without a firm grounding in AI to comprehend, let alone maintain.