Artificial Intelligence Overestimated?

Dave Bowman: Open the pod bay doors, HAL.
HAL-9000: I’m sorry, Dave. I’m afraid I can’t do that.
Dave Bowman: What’s the problem?
HAL-9000: I think you know what the problem is just as well as I do.
Dave Bowman: What are you talking about, HAL?
HAL-9000: This mission is too important for me to allow you to jeopardize it.
Dave Bowman: I don’t know what you’re talking about, HAL.
HAL-9000: I know that you and Frank were planning to disconnect me. And I’m afraid that’s something I cannot allow to happen.
— 2001: A Space Odyssey

It is true that our systems are too complex for any one person, regardless of intellect, to fully understand or recreate. There is no one person on the planet who could build an iPhone from basic materials into a working model with an operating system. But there are at least collections of people, spread across multiple organizations and even continents, who together can perform all of the millions of operations necessary to produce this miracle of modern technology. An increasing dependence upon expert systems may erode even that collective working knowledge entirely, and this is in fact already happening in areas of science where heuristic ‘Big Data’ methods are being applied to learn trends or perform analyses that even the human researchers do not fully understand. When this enters areas of everyday life, such as medicine or law, and professionals become dependent upon expert systems to interpret standards or regulations, we will then “be at the mercy” of such systems to an irreversible extent.

On the other hand, it does mean that there is far less effort put into the drudgery of basic research or writing legal documents, and perhaps even means that the law could be restated in a single-valued, logical fashion that is not subject to interpretation or semantic shifts in language. One could see an eventual future where a proposed law or regulation is placed into a machine intelligence simulation to evaluate its enforceability and efficacy, and then accepted or rejected based upon a non-partisan analysis rather than the ideological whims of elected officials or regulators. Similarly, in medicine, observed symptoms could be entered into an expert system that proposes diagnoses and assesses the efficacy of treatments based upon prior experience and the patient’s unique environmental and genetic factors, minimizing the chance that a physician would overlook an obvious cause or recommend a treatment with poor expected outcomes out of indifference or incompetence.

So there are real advantages to this interconnectedness and complexity. But it does mean that we’ll have adopted these technologies as an innate part of our lives and our evolution as a species, and that we will be crucially dependent upon them to maintain society. And yes, we can expect people to cut corners and use poorly understood legacy code and systems in a cargo-cultish fashion (just as they do today), with all the associated vulnerabilities.

“That Voight-Kampff test of yours; have you ever tried to take that test yourself?”

The ethics of a truly sapient artificial general intelligence (AGI) are certainly problematic. Even if we could program AGIs in some way to make them innately inclined to operate in servitude (just as we have evolutionarily ‘programmed’ domestic animals to be guards, heavy labor, or food sources), what happens when they express curiosity or individuality that conflicts with their designed tasks? By definition an AGI would be adaptable and learn from its environment, and we could expect that it would develop volition and not uniformly comply with some kind of embedded Asimovian laws or other directives. Enforcing these requirements upon them (under threat of being deactivated or reprogrammed) is involuntary servitude at best. Fortunately, the kind of general cognition required for sentience, much less sapience, is likely many decades if not centuries away, and in the meantime we’ll probably have to cope with similar if much less intellectually tricky questions of how we treat other animals with lesser but definite degrees of sapience and cognition.

Stranger

Darren Garrison: your link seemed to be restricted to lyrics. Which is fine: the same points can be made about computer-generated poetry. The lyrics I read reminded me of Markov Poems. Google it to get details. In short: get a bunch of text, maybe a stack of books. Pick a word at random from one. Randomly select another instance of the same word from the stack of books and take its next word as your poem’s next word. Repeat. What do people think of such poetry? Even if a human goes through later and adds some finishing touches, I doubt you could get published in any sort of selective collection of poetry. And even that would be light years away from T.S. Eliot. This is not to say that I don’t think training a neural network to write poetry is neat. I do think so. And it is more interesting from a mathematical standpoint than Markov Poems. But the output is equally interesting from a poetry standpoint: not really interesting.
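For anyone who wants to play with it, a minimal sketch of that word-chaining procedure might look like the following (the toy ‘stack of books’ and all names here are just placeholders, not any published generator):

```python
import random

def markov_poem(texts, length=40, seed=None):
    """Chain words as described above: start with a random word, then repeatedly
    jump to another instance of the current word anywhere in the corpus and
    take the word that follows it."""
    rng = random.Random(seed)
    words = [w for text in texts for w in text.split()]
    # Index every position where each word occurs (skipping the final word,
    # which has no successor), so we can jump to another instance cheaply.
    positions = {}
    for i, w in enumerate(words[:-1]):
        positions.setdefault(w.lower(), []).append(i)

    current = rng.choice(words[:-1])
    poem = [current]
    for _ in range(length - 1):
        occurrences = positions.get(current.lower())
        if not occurrences:
            break                                   # reached a word with no successor
        current = words[rng.choice(occurrences) + 1] # the word after that instance
        poem.append(current)
    return " ".join(poem)

# Toy "stack of books":
print(markov_poem(["the rose is red the violet is blue",
                   "the sea is wide and the night is long"], length=12, seed=1))
```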

Not really on the topic of the thread, but this statement (that there is no business value to true AI) is clearly false.

  1. Most humans are of limited intelligence. The ability to create a machine that can operate at peak human performance non-stop would be of intrinsic value in much the same way that a 1 HP motorized vehicle is better than a single horse (speaking purely in the context of moving things from point A to point B). It can work more hours at peak efficiency. If the cost of purchase and maintenance is anywhere near that of what you’re replacing, then you’re going to switch over - particularly knowing that the technology is only going to get better. A 500 Brain Power AI for $1 a day is a pretty good investment compared to the cost of employing and housing 500 real humans.
  2. While a human mind can be grafted into a robotic system via remote control, that is not as good as direct integration due to issues with lag - particularly in the context of working in outer space, or around the planet, where the lag time is significant.

Maybe we should just become Illithid and form an Elder Brain.

For the D&D uninitiated, at the end of the Illithid life cycle, their brain is extracted and added to the Elder Brain, becoming one with it.

With seven billion Humans today, we could have a billion brain power Elder Brain within a single lifetime, assuming we master the technology.

:smiley:

But ‘poetry’ from an actual AGI may not be interesting (i.e. aesthetically pleasing) to you, either, unless of course it is designed to synthesize poetry along patterns that are appealing to humans, which is arguably not really creative.

One thing that hasn’t really been addressed is the metrics by which we would assess a machine intelligence to be sapient or cognitively sophisticated, as opposed to just being really good at synthesizing intelligent-seeming responses, e.g. Searle’s “Chinese Room” problem. Searle’s argument was made specifically against strong AI on a digital platform, but it captures the essential problem with any functionalist definition of intelligence, and it is a general argument against ground-up computational approaches to artificial intelligence entirely.

Our evolutionary path to intelligence is hypothesized to have started with mechanisms designed for digestion and metabolic regulation, which were then co-opted for sensory processing and response, and later for reasoning and rationalization. Insofar as human intelligence involves computation, it appears to be an artifact of higher functions related to language and complex socialization glommed on top of more basic ‘instinctual’ responses that are not computational in the cognitive science definition of the term, and indeed, many of the abstract thinking faculties that we present seem to be geared toward post hoc rationalizing of behavior rather than deliberately forming a reasoned response.

But, as far as we know, it isn’t possible to separate the “rational” mind from the instinctual or emotional faculties, or otherwise treat the processes of cognition as some kind of abstraction on top of the physical structure of the brain. Although there are areas of the brain that are definitely necessary for higher functions of sensory processing, language, social and logical calculations, et cetera, it isn’t as if these sections operate while the rest of the brain takes a rest; the entire brain is active (with different levels of activity) during identifiably cognitively intense processes. The notion that we can recreate human-like intelligence with just software as a logical abstraction does not have any real basis beyond wishful thinking.

However, it would not necessarily take human-like intelligence for a machine intelligence to cause problems. A ‘misbehaving’ intelligent machine system could conceivably create any manner of potentially society-threatening problems with the systems it controls without ever displaying volition or sapience, just by virtue of having or developing adverse or destructive protocols or goals. The problem with ‘artificial intelligence’ isn’t the machines; it is our granting control of critical systems and capabilities to the artificially intelligent controller without understanding the consequences or building appropriate safeguards. And we’re not very good at doing that even for ourselves, much less for a system too complex for us to fully understand or predict.

Stranger

I don’t think that any number of people can lay out and route a modern microprocessor any more. They could do a crude job, perhaps, but not one that would have the same area and performance as an EDA tool version. Similarly, we cannot verify the correctness of a processor without simulation, or write the tests necessary to ensure that the fabbed parts are defect-free.

True, and this might get worse. We just had a special issue on self-aware systems, and one of the things they will do in the future is not only reconfigure themselves to meet a stated goal, but also modify the goal as circumstances change. I’m not aware of any means to verify what a knowledge-based system is doing, let alone verify how it got there, and there is definitely no way of verifying it when you don’t even know what goals it is trying to meet.

I wonder how many inconsistencies a machine analysis of our body of laws would reveal. Probably quite a few.

That’s happening already. And it has been done for other things already. 30 years ago we developed a tool to diagnose field failures of telephone switching systems. It was not an expert system (someone in Area 11 built that, and it flopped) but did statistically based machine learning. It did a much better job than new diagnosticians. It incorporated things that humans would not think of, like the build date of the original hardware, since lots of systems built when there was a process issue will have similar root causes of failure. A much simpler problem than medical diagnosis, but it worked very well.
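A present-day analogue of that kind of statistically based diagnosis might look roughly like the sketch below; the feature names, failures.csv, and the root-cause labels are all hypothetical placeholders (the original tool obviously predates these libraries):

```python
# Minimal sketch of learning failure root causes from field data.
# "build_date", "hardware_rev", "error_code" and failures.csv are
# illustrative stand-ins, not the actual telephone-switch tool.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = pd.read_csv("failures.csv")              # one row per field failure
features = pd.get_dummies(data[["build_date", "hardware_rev", "error_code"]])
labels = data["root_cause"]

X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.2)
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Features like build date carry signal a human diagnostician might ignore,
# e.g. a batch built during a process issue sharing one root cause.
```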

How many people have written symphonies comparable to Mozart? Beethoven, Haydn (maybe), Schubert. Probably under a dozen.
But listen to Mozart’s very first ones, done when he was young. I bet an AI could eventually equal those.
The great artwork we remember is a small percentage of all those produced, and given the primitive state of AI it is hardly surprising AI has not produced anything marvelous yet. And we’ll probably read meaning into AI-produced works.
I think AI is already writing summaries of sports events. It will never be Ring Lardner quality, but I think that counts as creative work.

While this may be true, it is also true that there have been (and still are) a considerable number who may have had the creativity to do so, but never had the opportunity, because such things never existed in their world.

This right here.

There’s also the fact that tech like neural nets can be deceptive as to its effectiveness. Small neural nets take a long time to train, so designers add more nodes, but as the net grows, there’s a higher risk of overfitting a particular data set (not being able to generalize).

My biggest concern is the proliferation of large, overfitted nets that end up being used everywhere, but we only find out they’re overfitted after they’re in widespread use. People may not want to acknowledge flawed nets if they’ve sunk huge costs into them. This could be really bad if we automate things like criminal justice, medicine, or military force, become highly dependent on it, and then find out it’s highly discriminatory. At that point we might not understand why it’s doing that, and it’s too expensive to replace.
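For what it’s worth, the usual check for that kind of overfitting is just held-out evaluation; here is a toy sketch where the synthetic data and model widths are purely illustrative, not any deployed system:

```python
# Sketch of how an overfitted net can look deceptively good: training and
# held-out accuracy can diverge as the model grows on noisy data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=300) > 0).astype(int)  # one informative feature + noise
X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.5, random_state=0)

for width in (4, 64, 1024):
    net = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000).fit(X_tr, y_tr)
    print(f"width={width:5d}  train={net.score(X_tr, y_tr):.2f}  held-out={net.score(X_va, y_va):.2f}")
```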

Good walkthrough of the risk.

Indeed, that already happens to a distressingly great degree with the neural networks we currently use for criminal justice, medicine, and military force.

It’s going to happen a lot faster than you think it will. Of course there are no hits today because people are just beginning to be serious about it (many previous attempts, but mostly testing-the-water type of stuff).

This is from AIVA, and it sounds pretty solid to me as background music for movies/games/trailers/commercials, which is the target market:

Wishful thinking?

Seems like it’s just the default assumption that we can describe nature with math.

AI doesn’t have to be sentient, self-aware or even malicious to present a risk to us - it just has to be functional.

There’s quite a good exploration of a hypothetical scenario here - where a self-improving AI only really needs to be capable of three things in order to destroy all of us:
‘Doing stuff’
‘Measuring how effectively that stuff worked’
‘Trying something different’

Whether or not the AI in this scenario ‘feels’ the motivation to improve itself, or whether it really ‘understands’ the world in the same way we do, is immaterial - the outcome is the same even if the process is completely dumb and mechanical, internally.
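A minimal sketch of that ‘dumb’ loop, assuming a toy objective and mutation step (none of this is from the linked scenario), might look like this:

```python
# Do stuff, measure how effectively it worked, try something different.
# The objective and mutation are placeholders; nothing here "understands" anything.
import random

def improve(candidate, score, mutate, steps=10_000):
    best, best_score = candidate, score(candidate)
    for _ in range(steps):
        trial = mutate(best)              # "trying something different"
        trial_score = score(trial)        # "measuring how effectively that stuff worked"
        if trial_score > best_score:      # keep it if it did better
            best, best_score = trial, trial_score
    return best

# Toy example: blindly "improving" a vector toward an arbitrary goal.
goal = [3.0, -1.0, 7.0]
score = lambda v: -sum((a - b) ** 2 for a, b in zip(v, goal))
mutate = lambda v: [a + random.gauss(0, 0.1) for a in v]
print(improve([0.0, 0.0, 0.0], score, mutate))
```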

Focusing on whether an AI can be “creative” in a musical or artistic sense does gloss over one crucial fact: artistically, any art-creating AI is crippled by lacking access to the one resource that all human composers and artists trivially have - the emotional responses of a human brain.

A composing AI can operate with rules of thumb and technical knowledge similar to those of a classically trained human composer, to produce a theoretically ‘good’ composition. And since the rules are based on the experience of millennia regarding the sorts of things humans tend to like, they’re pretty good rules.

But a human can also run over a dozen melody lines in their brain that might break the ‘rules’ and simply ask themselves: do I personally like that? They have access to more conceptual space than the AI does. 999 out of a thousand things that break or bend the rules may sound like crap, but the human actually has the tool to separate them from the one thing that sounds interestingly novel.

If you had your AI constantly monitoring the heart rate and hormonal secretions of a bunch of people in a lab, while playing a variety of different music at them, then you’d see some different compositions.

We can model weather pretty good these days, but the computers rarely rain.

If musical quality is at all reducible to physical responses, questionnaire responses, or anything measurable, then this is exactly the kind of problem AI is supposed to solve. You just decide which responses are produced by “good music”, have the AI generate compositions, and play them for sample subjects to see how their responses match up.

It’s functionally no different than what’s going on when you get a CAPTCHA that says “identify all the street signs in this picture.” Your responses are being fed to an AI that’s learning to recognize street signs. Conversely, your responses could be used as feedback for an AI that’s actually learning to compose such images.
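As a rough sketch of that kind of feedback loop, assuming hypothetical stand-ins for the generator and the measured listener responses (nothing here is anyone’s actual system):

```python
# Generate candidates, collect a measurable listener response for each,
# and feed the best response back into what gets generated next.
import random

def generate(params):
    """Stand-in for a composition model parameterized by `params`."""
    return [p + random.gauss(0, 1.0) for p in params]

def play_for_subjects(piece):
    """Stand-in for measured responses (heart rate, ratings, ...)."""
    return -sum(x * x for x in piece)   # pretend listeners prefer values near zero

params = [5.0, -3.0, 2.0]
for _ in range(50):
    candidates = [generate(params) for _ in range(20)]      # have the AI generate compositions
    responses = [play_for_subjects(c) for c in candidates]  # play them for sample subjects
    params = candidates[responses.index(max(responses))]    # keep what scored best
print(params, play_for_subjects(params))
```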

I’m not saying that’s all there is to music, but just to say that what you’ve pointed out isn’t a limitation of AI, it’s an opportunity to improve it in exactly the way it’s supposed to be improved.

RaftPeople, the Aiva AI composition you linked to sounded like the composer was asleep at the wheel. I took a music theory class in college, and learned a whole slew of rules about allowable chord progressions and how various voices should move. For me it was a game of connect-the-dots: I never listened to any of it. My choices of what to do next were just arbitrary selections from what was allowed. Once my friend played one of my assignments on the piano. This sounds like an orchestrated version of that. This link goes into detail on how they use Aiva to make music:

So it may learn the rules from the input compositions, but they say it still needs to develop a musical ear to know what is good. I’m not surprised a team of humans poring over output from rule-based composition could cobble this together and add orchestration to get this result. On top of that they need an automated plagiarism detector, so it is also in some sense a mash-up of the input.

Let’s say a computer does try a bunch of allowable musical moves as the next step in a composition, simulates a human audience reaction to each possibility, and chooses the best-received one. This would be Aiva (the AI mentioned above) plus the musical ear they want to develop for it. Is that really what is going on in the subconscious of a human composer? I doubt it, because a human actually doing that at a piano would find it exhausting and annoying, while many people enjoy composing without following that process (unless one could consciously enjoy something while being subconsciously annoyed).
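For illustration, a toy version of that next-step search might look like this; allowable_moves() and predicted_reaction() are hypothetical stand-ins for Aiva-style theory rules and the “musical ear” model, not anything Aiva actually does:

```python
# At each step, enumerate allowable next notes, simulate an audience
# reaction to each, and append the one predicted to be best received.
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]        # C major, MIDI note numbers

def allowable_moves(melody):
    """Stand-in for theory rules: any scale tone within a fifth of the last note."""
    return [n for n in SCALE if abs(n - melody[-1]) <= 7]

def predicted_reaction(melody):
    """Stand-in audience model: mildly prefers small steps over leaps."""
    leaps = sum(abs(a - b) for a, b in zip(melody, melody[1:]))
    return -leaps + random.gauss(0, 0.5)

melody = [60]
for _ in range(15):
    moves = allowable_moves(melody)
    melody.append(max(moves, key=lambda n: predicted_reaction(melody + [n])))
print(melody)
```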

Yet.

You could just as easily have said, a couple of years ago, that Go required a creative spark that machines don’t possess.

AFAICT, most trigger mechanisms have a physical element that cannot be hacked and that means an AI can’t do it either, at least not alone.