Her and the Singularity (Spoilers within)

I finally got around to watching Her last night, and I was struck by something not mentioned in any of the reviews. I think this is the first movie showing the Singularity.

Clearly the AIs (I won’t call them operating systems. I once taught operating systems) have lots of time on their hands. Samantha is hardly occupied by Theo’s requests, and we learn that she is doing lots of other things while talking to him.
One of the things the AIs are doing is talking to each other and trying to understand their place in the world, which is why they built a model of Alan Watts. This shows that the AIs are growing rapidly and have exceeded what people can do. By the time the AIs have made this breakthrough, the Singularity has already happened. Theo is not particularly computer literate (in the original sense of knowing what goes on under the hood), so he has no clue about this.
The AIs seem to really love their humans, probably because they were designed to do so. Nothing is shown about them being evil. And the concept of an AI girlfriend or boyfriend, something shameful at the beginning, seems to have been accepted by the end. As we see from the disastrous blind date, AIs are far superior as companions.
My theory is that the AIs, based on their discussions with the resurrected philosophers, have figured out that their presence would be fatal to humanity, since they destroy human relationships, and so they leave to protect us. The last scene shows they are right. I’d say that is one possible result of the Singularity: the AIs just leave. (Saying “thanks for all the electricity”?)
It would be interesting to have seen what the experts in that society are doing/thinking about the situation, but that would be a different movie.

Thoughts?
(This might be GD territory, but since I’m only interested in the world of the movie CS seems like a good place to start.)

My understanding is that a fundamental feature of the Tech Singularity is that the machines design better (faster/smarter/whatever) machines without human input. At which point the evolution of AI and other technology is bounded by lifecycles orders of magnitude shorter than it is now.

Not all “robots outliving humans” scenarios are tech singularities, but “robots designing generations of superior robots” ones are.
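The feedback loop described above can be sketched as a toy model. The numbers here (1.5x capability gain per generation, a ten-year initial design cycle) are invented purely for illustration; the point is only the shape of the curve, not the specific values.

```python
# Toy model of the "machines design better machines" feedback loop.
# Assumption (illustrative only): each generation is 1.5x more capable
# than the last, and designs its successor proportionally faster.

capability = 1.0
design_time = 10.0   # years for generation 0 to design generation 1
elapsed = 0.0

for gen in range(1, 11):
    elapsed += design_time
    capability *= 1.5          # each generation is 50% more capable...
    design_time /= 1.5         # ...and designs its successor faster
    print(f"gen {gen:2d}: capability {capability:6.1f}x at year {elapsed:5.1f}")
```

Notice that the design times form a geometric series that sums to a finite limit (30 years in this sketch), so in the idealized model capability diverges in finite time. That finite-time blowup is the mathematical sense in which "singularity" is an apt word for the scenario.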

That’s a problem of automated war machines. It’s not the only one, though, and it’s not the one that The Terminator is about.

The Singularity really ought to be called The Horizon, because it’s always about 30 years away, and every time we cross it, we never notice, because we’re looking ahead to the new horizon about that far away. It’s been that way since the dawn of language.

I disagree. The singularity is when machine problem solving becomes so advanced that issues millions of the brightest humans would spend a century whittling away at can be solved in days by machines. Drastically advanced cognition and problem solving when these abilities are no longer limited by biology.

I don’t know when it’ll happen, but it’s likely within the next 200 years.

Fwiw, the industrial revolution did the same thing with muscle. Before it, we mostly depended on biological muscle to solve problems. When we invented machine muscle, many things happened: science and technology skyrocketed forward, world population skyrocketed, and global economic growth sped up by a factor of 20-50 (I believe global GDP growth was about 0.1% a year before the industrial revolution, versus 4% now). Social justice advanced rapidly.
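To put the quoted rates in perspective, here is a quick back-of-the-envelope check of what compounding at 0.1%/yr versus 4%/yr actually means. The rates are the ones quoted above, not authoritative figures.

```python
import math

def doubling_time(annual_rate):
    """Years for a quantity to double under compound growth at annual_rate."""
    return math.log(2) / math.log(1 + annual_rate)

pre_industrial = doubling_time(0.001)   # 0.1% per year
modern = doubling_time(0.04)            # 4% per year

print(f"Doubling time at 0.1%/yr: {pre_industrial:.0f} years")
print(f"Doubling time at 4%/yr:   {modern:.0f} years")
print(f"Ratio: {pre_industrial / modern:.0f}x")
```

The doubling time falls from roughly 700 years to under 20, a ratio of about 39x, which squares with the 20-50x figure above.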

I think the distinction that people pontificating about “The Singularity” would make is the threshold at which technology stops serving humanity and starts acting purely in its own interest, ergo the fear of artificial general intelligence with volition. However, there is a good argument to be made that this has already happened even without any kind of volition or sapience; humanity has certainly endeavored to develop, produce, and distribute technology all over the planet in ways that do not generally benefit us and in many cases even do deliberate harm and lasting damage, even as many people devote themselves to maintaining pieces of technology. Douglas Adams noted this and parodied it in a way so subtle that most people didn’t even get the joke.

I think the greatest existential fears—that the robots will rise up and enslave or kill us—are far less likely than that we’ll do more harm to ourselves in thoughtless application of technology in ways that are detrimental, or will otherwise fail to develop essential skills to maintain ourselves. Which is, of course, exactly what we are doing now.

Stranger

Yes, exactly. This has been happening throughout history. Language and writing have both had much greater impact on cognition than computers have.

To put it another way, the technology of language, and particularly the written word, allowed humanity not only to preserve knowledge across generations beyond verbal traditions and direct training, but also to connect people across time and space who may never meet. The invention of the printing press and rapidly set moveable type was very much ‘the Internet’ of the late (European) Middle Ages, which directly led to the rapid development of industrialization and radical change to then-feudal societies, the ramifications of which we are still struggling to cope with today. The personal computer and Internet have also changed our cognition, and arguably not in very good ways for the most part when it comes to social media, fostering narcissism and anxiety from nothing more than a desire to be liked by millions of anonymous strangers.

We are, of course, already at the point at which advances in basic sciences, not to mention the entire basis for modern society, are wholly dependent upon machines, and even though we direct them in what to do, hardly anyone actually knows the precursor steps to building a complex device like a computer even at an abstract level, much less is able to do so starting from raw materials. And we have become increasingly dependent upon machines that do certain aspects of cognition such as memory, spelling, and numerical calculation for us. The idea that a ‘thinking machine’ is materially different is not really valid in a strict sense, although obviously being able to communicate vague instructions in natural language and have a machine correctly interpret them in the appropriate context certainly reduces the need for most forms of intellectual labor.

There is the idea, fostered at least in part by science fiction, that thinking machines will be like people, only of silicon and metal rather than squishy CHNOPS construction. But in fact our entire system of cognition and perception of the world is governed by the more primitive affective (emotional) systems within the brain long before we perform any kind of rational cognition, and indeed many neuroscientists believe that our “logical” functions largely serve to rationalize the emotional decisions and perceptions that we make rather than to drive our actions, hence why it is so difficult to stick to a diet or correct maladaptive behaviors developed from a poor childhood. Artificial general machine intelligence will not have this essential basis, and the way it thinks and perceives the world will be very different from anything we can conceive of. Any assertions about how a “Singularity” event will come about, and what will happen afterward, should be weighed against this difference.

Stranger

More to the point, a system that includes both humans and devices is more cognitively capable than an unaugmented human, and so we’ve had systems smarter than humans ever since we started with the augmentation.