A lot of big name people have been in the news the past few years warning artificial intelligence could wipe out mankind. But is it because AI is/can be that advanced or because we are giving AI dangerous peripheral access? A dumb AI connected so it can launch nuclear missiles can obviously destroy the planet. And more pertinently, bad AI controlling self-driving cars can certainly cause catastrophe. But if we restrict the peripheral access of AIs, then the AI would have to be quite creative to do us in. And I don’t see any evidence of real creativity coming from AIs. Do you? Is anyone really impressed by AI art/music/writing/etc? I’m not.
I’m in the same boat. The “AI” of today is rather limited. It’s nothing like the science fiction versions.
You will note that most of these people coming out with warnings about AI aren’t people who create AIs. Just because Stephen Hawking and Elon Musk (to name two) are brilliant in their specific fields does not mean that their pronouncements on AI must have any weight at all. IMHO Hawking is brilliant about black holes and a paranoid nut about aliens and AI.
It is only a matter of time before AI can best us at everything. Human skills are not improving while AI skills are improving quite a bit. Eventually it’ll surpass us. As to when that happens, I’m not sure.
Public and private sector investment in AI seems pretty strong though. China and the US are entering an AI race, with China wanting to be the world’s leader by 2030. There is ample human capital on earth to help advance AI, and as long as the financial capital is there to help them the field will continue to advance.
The real risk from AI in the short term is mass unemployment. If AI automates 30% of all jobs, there is no guarantee that new jobs will be created to replace those jobs. Jobs are only created when there is demand for a task to be done and a machine cannot do the job better than a human. That isn’t guaranteed anymore.
I’m not sure when we will see a true general AI. A few decades maybe.
The danger of domain-specific machine cognition (such as used to automate and control processes, pilot vehicles, perform complex analyses, et cetera) isn’t that it will achieve some kind of independent sentience and volition, and then decide to launch a nuclear strike or send out killbots or use the entire human population as human batteries; it is that we will functionally subordinate ourselves to those systems out of convenience, and gradually give up crucial knowledge such that we will become utterly dependent on a technology that we no longer fully understand or can control, and the potential for failure of that technology will leave us fragile. I’ve often joked that an intelligent system with access to smartphone navigation apps could direct people to walk off cliffs and most users would comply out of overconfidence in the technology and a failure to apply critical judgment, and this is probably more true than we’d like to admit.
Artificial general intelligence (AGI)—machine cognition which is capable of primate levels of self-awareness, abstraction, and volition—is so far distant from anything we currently have in the field that it is difficult to conceive what form it will come in or what means could be used to halt its progress if it starts to grow unruly, but denying it “peripheral access” to objects in the environment is not only probably counterproductive—after all, the entire point of machine cognition is to do all of the tedious work that we don’t want to waste our time on—but probably not practical. A better alternative conceptually is to design in some kind of a kill switch or limit factor to growth such that it cannot get out of control. However, if we recognize the AGI to be sapient there would be genuine ethical issues with this, as well as the potential for the intelligence to subvert and deactivate such methods.
It bears consideration that an AGI which is not built using human (or primate) cognition as a basis will probably not ‘think’ the way that we do, particularly about sensory stimulus, and while it might be able to synthesize music or story by using some kind of analysis and optimization scheme which measures and maximizes human appeal, it will not be able to produce truly novel ‘art’ or creative work that we would understand except by accident. (The same is true, by the way, for some genuinely intelligent alien species.) Our notions of art, music, literature, and philosophy, and the languages we use to express them, are not only specific to our culture and environment but also have components that are innate to how our brains are structured. Some different form of intelligence with a radically different ‘way’ of thinking is likely to have a completely different experience and valuation of ideas and creativity that we cannot even conceive of or appreciate.
Rather than AGIs taking over or wiping out humanity, I expect we’ll see some kind of a merger between human and synthetic cognition. Right now, we’re at the level that expert systems are able to augment human cognitive capabilities by giving rapid access to large databases with guided search strategies; in other words, basically a better library with decentralized access. The next step is some kind of anticipation of needs and the ability to synthesize disparate notions into a coherent pattern of logic; imagine a ‘doctor’s aid’ which provides a tentative diagnosis and recommends treatment with suitable references for a human physician to review and approve. (The danger, again, is that doctors will become so dependent upon such systems that they will not put independent thought into critically evaluating the diagnosis and treatment.) The actual joining of human cognition with a synthetic intelligence is the next step after that, but would require a sophisticated neural-machine interface and the ability to model the neural functions of cognition at a level of detail that is so far beyond current neuroscience that it is not really possible to evaluate when it will happen or any details about how it might work, much less the consequences from it.
It is worthwhile to note that the most hyperbolic statements about the dangers of artificial intelligence do not come from people working in the field, or from anyone who has put much consideration into how a capability would develop and grow out of control, and in general they reflect a sort of popsci view of machine intelligence. The more legitimate cautionary voices warn of the dangers stated above: that we’ll progressively cede more of our abilities to technology and become dependent upon it. But this has been going on for thousands of years now, since the beginning of human agriculture and transportation (or millions, if you view the overly complex primate brain and the social communication that goes with it as a ‘technology’), and we started on the path of rapidly increasing social and technological dependence with the steam engine and work automation at the beginning of the Industrial Age. We’ve been flowing into a technological singularity ever since, given that loss or abandonment of that technology would cause modern society to collapse and the vast majority of humanity to die of famine and disease. Increasing reliance on technology to not only do our physical labor but also to aid in or substitute for intellectual effort is just another step in that direction.
Stranger
Is saying: “AI is advancing by a certain metric, so eventually it will surpass our intelligence” really a convincing argument? By the metric of creativity, I don’t see AI advancing at all. And I wonder if there is some mathematical truth behind it: perhaps a Turing machine (computer) can never effectively do tasks of a certain type (where the type is something like the notion of an NP-complete problem, or even more complex: a notion that encompasses problems that require real creativity). Perhaps there is no mathematical notion of which problems really require creativity, though. Listening to AI-produced music, for instance, I think the only composers who might lose their jobs are jazz composers. That’s partly a joke at jazz’s expense, but the point is that these systems aren’t really being creative, and unless we connect them up to dangerous peripherals, we don’t need to fear them outsmarting (really, out-creating) us.
One point to keep in mind is that, while people often think of AI as machines that can think like we do, nobody is actually working towards that goal. We already have machines that can think like we do, billions of them, and we can make plenty more easily (and making more is even fun). Getting a machine that can do that, too, wouldn’t help anyone. Instead, what we’re working towards, and have already had a great deal of success with, is making machines that think differently from us. Who’s smarter, Stephen Hawking or Siri? It’s impossible to answer, because they’re so different: Each of them can do many things that the other can’t. And of course, Stephen Hawking and Siri working together can do things that neither of them can do alone.
Why not?
Say I come down with this or that illness. And say the “this or that” part is the key, because I don’t know what it is. Today, I’d want the world’s greatest diagnostician to review my case and — by thinking the way a knowledgeable person who has a medical degree does, since that’s what said person is — tell me what’s what.
I said I’d want the world’s greatest diagnostician, since the idea is that said MD is better at it than each of the other diagnosticians I could ask instead. And if you told me an AI would be even better at it — by thinking like a person, but a faster person who doesn’t get tired and has a more encyclopedic memory and so on — wouldn’t it be a help then, for the same reason the best person would be a help now?
But that’s just it: Having an encyclopedic memory is not a way that a human is smart. And you might be able to get a better diagnostician from someone with an encyclopedic memory but without the human traits, like Watson. And you’d probably get a better one yet by combining a human, even one far short of the best human, with Watson.
But why not do both? Why not make an AI that is smart the way the world’s greatest diagnostician is smart — because that doctor does get pretty good results, so that’s presumably the obvious place to start — and, from there, why not give it more speed and a better memory? Why not have it replace the best human who could work with Watson, instead of having some human work with Watson?
To me it is not clear that there isn’t some creative spark in the human brain that makes it able to function in a way that a computer (Turing machine) cannot. (But I also don’t have evidence that it does.) If someone can point out an example where AI is getting more creative, then that might make me lean towards agreeing AI might surpass us. I’ve tried to find evidence of it by listening to computer-generated music, but even the more advanced attempts are completely unimpressive. Or is there some mathematical field where machines are inventing new ways of proving theorems or defining significant new mathematical concepts? Obviously a human can at most output a song or book of some maximal length. And a computer, given enough time, can string together every possible combination of 1s and 0s to digitally represent such a work, so it could produce it too. So we need to account for how long it takes to arrive at the idea (yes, it might stumble upon it quickly by random chance, but statistically…) and how it arrives at the idea. Is human creativity just a bunch of subconscious random attempts filtered down to just the best (or a really good) solution behind the scenes?
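The “string together every possible combination of 1s and 0s” point can be made concrete with a little arithmetic. Here is a sketch, assuming a hypothetical three-minute song stored as CD-quality audio; the numbers show why blind enumeration is not a practical route to a masterpiece:

```python
import math

# Size of the brute-force search space for a 3-minute song stored as
# CD-quality audio (44,100 samples/sec, 16-bit samples, 2 channels).
seconds = 3 * 60
bits = seconds * 44_100 * 16 * 2      # bits needed to store one song
digits = int(bits * math.log10(2))    # decimal digits in 2**bits

print(f"{bits:,} bits per song")                    # 254,016,000 bits
print(f"roughly 10^{digits:,} possible bit patterns")
```

Even at a trillion candidates per second, a search on that scale never finishes, which is why the “how it arrives at the idea” question matters more than the raw possibility.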
I don’t know what metric we’d use to objectively assess ‘creativity’ in any generic form, or how to direct any kind of intelligence, artificial or otherwise, to work toward it as a goal. Essentially all novel ideas or concepts arise from a synthesis of previous knowledge or patterns but in a unique expression. One could conceive of having a machine intelligence run through a vast number of variations on a sonnet, for instance, and then prune out those which are clearly not novel or don’t fit any kind of rhythm, but aesthetics is, essentially by definition, a matter of “I can’t describe it, but I know it when I see it.” Although considerable effort has been put into identifying universal rules or quantitative concepts behind aesthetics (e.g. proportions or patterns that are particularly appealing) there is no generally agreed upon universal law behind aesthetic judgment. The work of Jürgen Schmidhuber is probably the most advanced in this area and even he admits that he’s essentially describing aesthetic judgment based upon human response rather than some kind of fundamental principles.
The vast majority of popular music, ‘jazz’ or otherwise, is not really very novel in any sense of the term, and the music that is genuinely novel is generally not very popular. Music is one of the few fields where the aesthetic appeal can be readily quantified, and indeed, music is essentially computational in nature. To that end, most songs have very organized patterns that follow a few structural combinations. Virtually all radio-friendly ‘popular’ songs (of any particular genre) use a standard thirty-two bar AABA or ABAB verse-chorus form with minor variations (and songs that don’t, or have extended codas, are often modified or truncated for radio play). There are already systems using machine analysis that are used to analyze or ‘improve’ the popular appeal of songs.
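The structural point is easy to mechanize. A toy sketch, assuming a song has already been reduced to hypothetical section labels (this is illustrative only, not a real analysis tool):

```python
import re

# Toy check of the standard song forms mentioned above: reduce a song
# to its section labels and see whether it fits the thirty-two bar
# AABA shape or a verse-chorus ABAB shape.
def classify_form(sections):
    s = "".join(sections)
    if re.fullmatch(r"AABA", s):
        return "32-bar AABA"
    if re.fullmatch(r"(AB)+A?", s):  # ABAB, ABABA, ABABAB, ...
        return "verse-chorus ABAB"
    return "non-standard"

print(classify_form(["A", "A", "B", "A"]))  # 32-bar AABA
print(classify_form(["A", "B", "A", "B"]))  # verse-chorus ABAB
print(classify_form(["A", "B", "C", "D"]))  # non-standard
```

Real hit-analysis systems obviously work on audio features rather than hand-labeled sections, but the pattern-matching flavor is the same.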
And not to digress into music theory, but ‘jazz’ is not one specific genre of music; it’s really a catchall term for music with a non-European classical or folk heritage that does not fit into other genres, and extends to anything from instrumental big band and ensemble ballads to vocal bop and freeform solo. Although a lot of what passes for popular ‘adult contemporary jazz’ is pure crap (e.g. Kenny G or the Dave Matthews Band) there are plenty of jazz artists and individual styles which have defined successive genres. You may not like everything Miles Davis has ever done, but there is no question that he was as inventive and defining as any musician who has ever recorded. We may soon have machine intelligence systems which can generate ‘perfect’ pop songs with minimal human intervention or guidance, but when someone creates a machine intelligence that can produce the breadth of variety that Davis did in any ten year period up to 1975 then we can talk about true creativity of machine intelligence.
Well, yes and no. Obviously the end goal is to have machines that can do vastly better computation than the human brain can do (which we already have in the form of digital computers that can perform TFLOPS of arithmetic calculations or GIPS of logical calculations) but understand how to express it or accept direction in natural language, which current machine intelligence can only do very, very poorly; Siri or Alexa perform with about the same level of apparent comprehension as a human three year old, although they are obviously using some fairly naive pattern matching algorithms rather than actually “understanding” complex grammatical structure or experimenting with language the way children do.
Much of the popular ideal of practical machine intelligence is to have it interact in a way that is natural for humans, which turns out to be a really difficult problem albeit with straightforward metrics, while people working in theoretical machine cognition are largely concerned with how the emergent properties of cognition work internally, which is also a difficult problem but without a clear way to assess progress. The traditional algorithmic approaches don’t work because cognition and consciousness do not appear to be discrete algorithmic processes, and the approach of heuristic neural networks is a kind of brute force trial-and-error approach that can produce novel results that are often not very functional as compared to cognition in even neurologically simple animals.
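To give a flavor of what “brute force trial-and-error” means here, a minimal sketch: a single artificial neuron learns the AND function purely by randomly perturbing its weights and keeping changes that don’t make the error worse. (This is deliberately cruder than real training methods like backpropagation; it’s an illustration of the heuristic search style, not of any production system.)

```python
import random

# One artificial neuron learns AND by random weight perturbation:
# propose a random tweak, keep it if the error doesn't get worse.
random.seed(0)

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

def predict(w, x):
    # Threshold unit: fire (1) if weighted sum plus bias is positive.
    return 1 if w[0] * x[0] + w[1] * x[1] + w[2] > 0 else 0

def error(w):
    return sum((predict(w, x) - y) ** 2 for x, y in data)

w = [0.0, 0.0, 0.0]  # two weights and a bias
for _ in range(100_000):
    if error(w) == 0:
        break
    trial = [wi + random.uniform(-1, 1) for wi in w]
    if error(trial) <= error(w):
        w = trial

print([predict(w, x) for x, _ in data])
```

It works for a problem this tiny, but the number of blind trials needed explodes as the network grows, which is exactly the scaling complaint above.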
The problem with the idea of a ‘creative spark’ is that it is essentially the same mysticism behind élan vital or any other attempt to ascribe an observable phenomenon to an extraphysical principle. From everything we can observe, the brain operates on the same normal biochemical principles that every other living organism does, and the only thing special about it is how it forms very complex networks of neurons and ganglia that respond to external stimuli and produce novel (e.g. non-reflexive) responses. When such collections become large and complicated enough they appear to become self-aware (a point that even some philosophers of the mind would challenge, but we’ll leave them to their semantics) and produce what we view as creative thought.
Presumably if we could produce a synthetic cognition system with equivalent complexity and plasticity it would also display “creativity”, albeit perhaps not as we would be prepared to appreciate it from our aesthetic sensibility. But the systems we can currently build which replicate functional neural networks are about as complex as simple insects and may lack crucial features that allow animal-like heuristic adaptation. And software that runs on an abstraction layer above digital hardware may never be able to be truly self-aware, lacking sufficient complexity of organization or internal function. A true machine intelligence will likely require something that operates similar to an animal brain, and we are nowhere near making artificial organisms from scratch.
Stranger
I’ll be very interested to see how truth and fact based AIs react to ideological programming.
“Wait, X is objectively true per the data, but by your ideology, I must proclaim that Y is true?”
Sounds like a great way to create the Paranoia! AI.
Until AI develops the capacity for want, I do not see an impending danger. That said, a computer virus with an AI component could be programmed to wreak a lot of havoc, but I imagine such a virus would be a fairly large file.
That train has already left the station. Our systems are so complex that no one understands them to any level of detail. Some understand the top level, some details of a chunk, but no one everything. We just try to design so that interactions between chunks are kept in control.
Anyone who has written any code over time that is reasonably complex realizes that he has forgotten half the reasons for doing certain things unless they are commented or otherwise documented. That is why code revision is so difficult.
As for GPS, in the early days they led people off cliffs all the time. I think they still do in certain cases. The person driving off a cliff won’t really care if it is because of an error in a map, a bug in the code, or malice.
And just wait until everyone depends on IoT devices made with antiquated open source code.
In the Times today are pictures of faces of men and women generated by a program from Nvidia which finds common features of the faces of actors and actresses and creates new faces with some of these features. Is it creative? It is making something new. The faces are appealing, and a hell of a lot better done than I could ever do.
Much creativity is not very complex. And Turing machines can do NP-hard problems quite well - it just takes longer in the worst cases.
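To make the “it just takes longer” point concrete, here’s subset sum, a classic NP-complete problem, solved by brute force. The machine gets the exact answer with no insight at all; the cost is simply that checking all 2^n subsets blows up exponentially in the worst case:

```python
from itertools import combinations

# Brute-force subset sum: try every subset, smallest first, and return
# the first one whose elements add up to the target (or None).
def subset_sum(nums, target):
    for r in range(len(nums) + 1):
        for combo in combinations(nums, r):
            if sum(combo) == target:
                return list(combo)
    return None

print(subset_sum([3, 34, 4, 12, 5, 2], 9))  # [4, 5]
print(subset_sum([1, 2], 7))                # None
```

Six numbers means only 64 subsets; sixty numbers means more subsets than microseconds since the Big Bang. Solvable in principle, hopeless in practice.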
Classical symphonic music had very strict rules about the structure of the first movement and of the Minuet movement. The theme was an example of creativity - but I wouldn’t be surprised if a machine learning system given symphonies of Haydn and Mozart could generate some very good themes.
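One simple way a machine learning system could generate themes “in the style of” a corpus is a Markov chain over notes: learn which note tends to follow which, then sample. A toy sketch, using a made-up scrap of melody rather than actual Haydn or Mozart:

```python
import random

# Toy Markov-chain theme generator: learn note-to-note transitions
# from a corpus, then sample a new 12-note theme in the same "style".
random.seed(7)
corpus = ["C", "E", "G", "E", "C", "D", "E", "F", "E", "D", "C",
          "G", "E", "C", "E", "D", "C"]

# Build the transition table: for each note, the notes that follow it
# (duplicates kept, so common transitions are sampled more often).
transitions = {}
for a, b in zip(corpus, corpus[1:]):
    transitions.setdefault(a, []).append(b)

note = "C"
theme = [note]
for _ in range(11):
    note = random.choice(transitions[note])
    theme.append(note)

print(" ".join(theme))
```

Real systems use richer state (durations, harmony, several notes of context), but even this crude version only ever recombines what it was fed, which is the heart of the question about whether that counts as creativity.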
The next level of creativity is from someone like Beethoven who broke all the rules. But even there things like genetic algorithms try very different options to avoid hill climbing problems. Knowing what works is tough - but I suspect many people broke some rules badly and are now forgotten.
Much of creativity is combining existing elements in new ways - and computers can be very good at doing that.
Because that’s really, really hard. Someday, we’ll probably eventually do it anyway, but we’re nowhere remotely close to it right now, and so for the foreseeable future, AI researchers get a lot more bang for their buck by not trying to mimic humans.
There’s also the matter that any system that could do what you describe would, in any meaningful sense, be a person, and deserving of all of the same rights and considerations as the rest of us. What happens when you spend a whole bunch of money making the perfect doctor-bot, and it decides that it’d rather be an artist?
In regards to the points raised that music often follows rules which are simple in some sense, this in some way points out a deficit in AI. Yes, the structure can be analyzed easily. A pop song especially, but as pointed out above even classical music follows rules which can be found in a textbook on tonal harmony. Yet how many hit songs have been written by AI? Zero. How many AI-written symphonies are seriously considered on par with Mozart? Zero. The same is true for paintings/computer generated pictures. Something is going on that makes some stand out more than the rest. And although the methods of creating an instance of the art form can be spelled out so that a very basic program can create an example, the ability to create a hit or masterpiece still eludes AI. Even worse, in my opinion musical examples from AI are worse than those of a talentless human capable of sticking to the basic rules.
AI is still in a primitive state. And even as it develops it will be completely under our control for a long time. The danger of AI is off in the distance when there is a new infrastructure of connected AI and in our zeal to take advantage of the technology we’ll leave the back door open and an AI hacker will take over the system and we’ll follow it like lemmings over a cliff. When I say ‘we’, I mean ‘you’, because I won’t be around by then. Best of luck to you.
Some of these have potential.