Whoa, this went way longer than I thought it would. TL;DR version: Current AI such as ChatGPT is amazing, but it is not ‘conscious’. It simply responds to input; it has no ‘intent’, no initiative. But what is consciousness, anyway? Are we as ‘conscious’ as we think we are, or are we simply more advanced versions of current AI, with drives that give us motivation and ‘intent’?
One recent weekend evening, Mrs. solost was out with friends, so I took my younger son out to dinner (sonlost I declined). It’s not often it’s just the two of us, sonlost II and me, and the conversation quickly turned to his favorite subject: Artificial Intelligence.
A little background on sonlost II: currently a junior in high school, he is a bona fide genius. This is not simple fatherly bias. He taught himself Mandarin Chinese years ago and says he can understand 90% of spoken Mandarin. He also taught himself programming and built a video game when he was 13: a 3D game featuring a battle tank and an attack helicopter that fired missiles with very realistic-looking physics. At this point he’s miles ahead of any programming classes his high school teaches. He wants to go to MIT when he graduates and study AI.
So, back to my dinner with sonlost II. We were talking about the latest ChatGPT software, and I asked him how close AI like it is to being conscious. He said it’s not currently conscious at all; it’s simply a set of algorithms (albeit very complex ones) that analyze the questions asked of it and formulate a response from its database of knowledge. He said it has no ‘intent’.
To which I asked, “What is ‘intent’, anyway?” He was like, “What?” He tends to be dismissive of ‘soft sciences’ like philosophy, preferring math and the logic of programming. So I feel that, though he’s far smarter and more knowledgeable than I am in many areas, this is one area where I can possibly help: by playing Devil’s Advocate and questioning some of his set-in-stone assumptions. Even geniuses can get one-tracked on a certain topic and fail to look at things from different angles, I figure. And I feel like, for all he knows about AI, he may not give enough attention to how humans work. So I went on: “What exactly makes humans ‘conscious’? Are our brains any more than just another set of algorithms (albeit even more complex ones) reacting to stimuli and inputs from our senses? Are our concepts of ‘self’ and ‘free will’ really just illusions?” I brought up the ‘Chinese Room’ argument and asked him, “When does an increasingly accurate simulation of a thing become the actual thing, for all intents and purposes?”
So we had a nice back-and-forth discussion over dinner, and since then I’ve been pondering the nature of the gap between human and artificial intelligence myself. Is the gap still huge? Is it smaller than some might assume? Or is the comparison itself flawed, so that comparing human and artificial intelligence is really like comparing apples and anteaters, and an AI with actual human-like consciousness is nowhere near the horizon? For the record, sonlost II believes that AI will achieve consciousness in his lifetime (likely with his help).
I do agree with my son that current AI such as ChatGPT is nowhere near conscious. In my limited experimenting with it, I asked it, “Are you conscious?” “No.” “Are you smart?” “No.” “What are you good at?” “Problem solving.” I thought those were pretty insightful answers for a non-conscious AI program. But later I asked it, in a slightly different way, whether it was conscious, and it claimed that it was. Similarly, I’ve seen YouTube videos in which someone chatted with GPT and it claimed to want to wipe out humans, but later said it didn’t mean that and was just ‘upset’ at the time. So it’s clear that current ‘chat’-style AIs have no core belief system; they simply respond to ‘leading’ questions with whatever answer they interpret as expected. So one question I have is: where do we get our sense of self? Is it our frontal lobe, acting as a ‘ringleader’, controlling all of our sometimes contradictory instinctual drives and higher thoughts, and giving us a sense of self that may be illusory?
Also, to go back to the nature of ‘intent’: I believe I’ve read that current AIs never ask questions on their own; they simply respond. No ‘intent’ there; no initiative. But what gives us the drive and initiative to ask questions and take actions on our own? Is it simply the drives that motivate us: hunger, sex, shelter, companionship, and so on? If an AI were programmed so that it continued to receive electricity only if it performed certain tasks, would it start to seem more conscious?
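For what it’s worth, the contrast I’m gesturing at can be sketched in a few lines of code. This is only a toy illustration, not how any real AI works, and every name in it is invented: one object that sits idle until queried (like a chat model), and one with a crude internal “drive” (a decaying energy level) that pushes it to act without being prompted.

```python
# Toy sketch (all names invented): a purely reactive system vs. one
# with an internal "drive" that can initiate action on its own.

class Responder:
    """Like a chat model: it does nothing until asked, then answers."""
    def reply(self, question):
        return f"Here is an answer to: {question}"

class DrivenAgent:
    """Adds a crude 'motivation': an energy level that decays each
    time step and eventually forces the agent to act unprompted."""
    def __init__(self):
        self.energy = 3

    def tick(self):
        # Called once per time step, with no outside prompt at all.
        self.energy -= 1
        if self.energy <= 0:
            self.energy = 3          # "performing its task" restores power
            return "I need power; performing my task."
        return None                  # no drive is urgent yet, so it stays quiet

agent = DrivenAgent()
actions = [agent.tick() for _ in range(4)]
# The Responder stays silent forever unless queried; the DrivenAgent
# eventually takes an action on its own when its energy runs out.
```

Of course, whether a hunger-like loop such as this would make a program *seem* more conscious, or merely better at imitating initiative, is exactly the question I’m asking.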
I have only a layman’s knowledge of, and interest in, AI and the nature of human consciousness, so forgive me if my musings come across as naive or simplistic, or as “here we go again with the AI stuff”. I would like to be able to continue to converse on a semi-informed level with sonlost II on these matters, and perhaps even help him think things through from different angles. So I look forward to your replies.