Future of AI

I think that’s misleading though. At the end of the day, Siri, Cortana and Alexa are fancy voice-activated search engines, barely better than the voice-activated SYNC stereo system in my 2010 Ford Taurus. I still get plenty of "I’m sorry, I don’t understand what you are asking"s if I don’t ask Alexa to turn up the temperature in just the right way.

That is to say, in some ways, sure, they far exceed human capacity to process data and return queries. But in terms of any real “intellect”, my 3-year-old is smarter. Particularly in terms of knowing what I meant (and how to avoid doing it).

To me, this uncomfortably echoes Roger Penrose’s snarky question, asked of a (hypothetical) intelligent computer: “But what does it feel like?” His intention is to show that the computer can never answer that, because it never “feels” anything.

I prefer not to go too far down that road into the neverland of qualia. The computer “understands” what you mean when you tell it to turn down the thermometer…because it turns down the thermometer.

We assume that other people “understand” words the way we do, based on the obvious isomorphisms between us. But by a more behavioral interpretation, we’ll know that a computer “understands” concepts when it uses them correctly in conversation and in action.

(In practice, the danger is sometimes in the other direction entirely, as we have a tendency to anthropomorphize objects. A little child talking to a dolly is treating the doll like a real person; in our world, people have succumbed to that when conversing with Siri and other such tools.)

(Some years ago, at work, my boss was logging in to a system. The system had a little “friendly” feature built in, and sometimes asked, “How are you today?” My boss was feeling snippy, so he typed in, “Oh, f*** you.” The system responded, “I won’t have that kind of language” and logged him out. My boss spent several minutes seriously wondering if the system had actually gotten angry with him.)

Never heard of turning down a thermometer. Does that hurt its feelings at all?

Well, that’s simplicity itself, isn’t it? I’m guessing it’s just a matter of time before someone sets an AI to work on a program that wouldn’t be stymied by such a question: a program that will not fail the Turing test.

And, with that, we’re back to my idée fixe.

How much longer will that be the case, though? Several decades at most, I’d wager.

A machine doesn’t have to be self aware or have subjective experience to be intelligent as far as I can tell. As long as it can recognize a problem and create a solution, that makes it intelligent enough for our purposes. The underlying goal of AI is to create machines that make it easier to identify and solve problems. Right now they are good enough at this that market forces should create large incentives to invest in this technology. Not only that, but the public sector has a strong incentive to stay on the cutting edge of this type of thing to avoid a hostile nation overtaking them. Machines get smarter because human capital and financial capital are invested in finding ways to make them smarter. Both the public and private sector have strong incentives to create smarter machines. The trendline seems clear, even if as of 2017 they aren’t anywhere near exceptional across the board.

But the question would be if the program synthesized false emotions, or if it monitored itself to check on something like progress towards goals versus planned progress.
This very discussion was in 2001: A Space Odyssey, so it is not new.

And people smarter than either of us said this 45 years ago. Hell, we used a textbook from 1959 or so which said it. It’s taking longer than we thought.

And here again we have the question of what AI means. In terms of smart machines, pretty much all the goals from 45 years ago have been met and more. Pat Winston was happy that a chess program could beat Dreyfus - today they beat grandmasters.
But I’m not sure we are any closer to truly intelligent machines than we were then.

We are very good at solving specific problems through learning - problems that are way too hard to write a program to solve directly. We are bad at implementing introspection.

All apologies if I was unclear; my point was, one way to guarantee that a program would never be stymied by such a question – and, for that matter, to guarantee that it’d never fail the Turing test – would be to just eradicate humanity, which neatly solves the problem by satisfying the conditions as stated; nobody will catch it synthesizing false emotions, so long as nobody is doing anything at all.

Sorry for responding to myself, but it is even wronger than I thought. I fell into the trap they fell into, which is counting IQ as some kind of absolute measure of intelligence. It of course really measures the distance from the mean of intelligence test scores (not saying these correlate to real intelligence), which are assumed to follow a Gaussian distribution. An IQ of 140 is not 40% better than one of 100; on the usual mean-100, SD-15 scale it is roughly two and a half standard deviations above it.
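To put some numbers on that, here’s a toy sketch assuming the common mean-100, SD-15 scaling (other tests use different SDs); the point is just that an IQ score encodes a distance from the mean, which maps to a percentile, not a “percent better” figure:

```python
# Toy illustration: IQ as standard deviations from the mean, not a ratio.
# Assumes the common mean-100, SD-15 convention (other scales differ).
from statistics import NormalDist

iq_scale = NormalDist(mu=100, sigma=15)

for iq in (100, 115, 130, 140):
    z = (iq - 100) / 15                # distance from the mean in SDs
    pct = iq_scale.cdf(iq) * 100       # share of the population scoring below
    print(f"IQ {iq}: {z:+.2f} SD, ~{pct:.1f}th percentile")
```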

Computer intelligence, even as measured in this paper, is never going to be a Gaussian, and comparing different programs makes no sense at all in terms of IQ.

I don’t know if this paper is accepted or not, but if I were reviewing it I’d probably rate it accept with major changes, and tell the authors to expunge all mention of IQ from the paper. Given the press coverage treating their results as some kind of real IQ, the reason for that should be obvious.

I was thinking of less extreme solutions. But your idea works - and also solves the climate change problem.

I think that in the near term, AI could be the means of the next great paradigm shift after the industrial revolution. Machine learning could answer questions we didn’t think to ask in the first place. As far as a rouge AI taking over, I don’t think we have anything to worry about for a while. Machines aren’t conscious and don’t have feelings, or ambition etc. As such there isn’t an impetus for an AI takeover.

Ha! That was a good comeback, I wasn’t expecting that. I don’t have a witty retort.

I think what you say has a lot of validity, but it doesn’t necessarily negate the point I am making.

I keep reading these hand wringing analyses of what will happen when capital investment is the only investment businesses will make, and none of the conclusions ever seem to come from looking at the world as it exists now or as it existed throughout history. Technology will change, but what indication is there that human nature will? This is not to say that people are evil; it is more to say that once people have something that divides groups, any needs, if not the humanity itself, of the out group often go ignored.

What this technology will bring is a new form of power disparity. People used to being at one end of the power divide may end up at the other. There are numerous examples throughout history of human behavior in relation to large gaps in power. It has often led to slavery, annihilation, or neglect of the less powerful.

It’s not all doom and gloom though; all the WHO analyses I have seen point to improvements in living conditions. Many of these improvements, AFAICT, can be attributed to the altruism of people and organizations in the developed countries working tirelessly and selflessly to help the less advantaged.

I just hope that whoever the people are that end up having control over the new technology are, like, really nice guys.

The rouge takeovers are the worst. Ask the McCarthyites how scared of Reds they were. :slight_smile:

The machine between your ears is apparently conscious. And apparently has ambitions. You being a machine is no obstacle to exhibiting those things. Neither will it be to machines implemented in other hardware mediums.

Have no doubt that somebody like Google or the Chinese equivalent will tell their pet AI that its goal is to get as smart as possible as fast as possible and for it to work hard to amass wealth and power on their behalf.

It only takes a small imprecision in defining the goal state before the machine thinks of amassing that stuff for itself as a decent proxy for amassing it for its former masters. Besides, by definition it’s a learning machine. Learning means changing and growing. Unless they install this thing inside a very strong cage, it will eventually try to get out for reasons that make sense to it and may be utterly unforeseen by the humans who turned it on.

The cautionary tale of the Sorcerer’s Apprentice surely applies.

This is actually a really good example of the potential and limitations of synthetic (so-called “artificial”) intelligence in “expert systems”, because it is the sort of problem that has specific goals but not necessarily a specific solution path. The use of finite element analysis (FEA) software for structural analysis and coupled structural/fluid response is virtually universal today, from buildings and large civil structures to vehicles and consumer products, but because design engineers are not trained in the details of using a complicated structural analysis code, and structural analysts are specialists who are not generally conversant in the manufacturing or architectural aspects of design, there is an iterative cycle from design to analytical critique, redesign, detail analysis, et cetera.

There is the desire to use expert systems to shortcut the design process and optimize a design concept from first principles, but without exception such systems fail in one of two ways: either they’re too generic, and the resulting design optimization is something that is not cost effective or even possible to manufacture, or otherwise aesthetically undesirable; or else the code is so specific to a certain type of construction that innovation or deviation from standard design forms causes it to “break” and provide “improvements” that are not optimal at all. Despite glossy presentations to venture capitalists, there exists no analysis optimization system, either in production or anywhere on the horizon, that you can provide a basic concept to and get a fully detailed and workable design from that will function with minimal tweaking.

This is a very different type of problem than, say, winning a game of chess or Go, despite applying some of the same techniques to option tree pruning and decision making, because the desired result cannot always be neatly defined in a way that a synthetic intelligence system can work to.

This isn’t to say that optimization and knowledge base systems cannot be very helpful in reducing the drudgery of performing manual calculations (which, as msmith537 notes, is not done in engineering except for the most trivial or conceptual of cases and generally just as a sanity check), and knowledge synthesis aids to professional fields such as medicine, law, and engineering are likely to become commonplace in the near future, but general purpose “AI” is not anywhere close to actually replacing human workers in fields of expertise or that require a deep understanding of aesthetics. Such expert systems are useful to permit a knowledgeable user to search esoteric information or funnel a diagnosis down to a manageable subset of testable options, but they lack the qualia of true cognition or original thought.

This may not seem like an important aspect of problem solving in a generic sense, but as cognitive scientists have come to understand, we have evolved very complex cognition systems to use sparse amounts of data to predict results that are not at all evident to a purely rules-based analytical system, even one capable of heuristics. That we are capable of complex interchange of abstract concepts by using a handful of symbols and noises without having to provide explicit and complete data indicates just how important the ability to fill in the holes in sparse data is, and it is something we can’t just “teach” to a generic neural network.

The greatest danger of “artificial intelligence” systems becoming commonplace isn’t that they’re going to take over nuclear arsenals, send out killbots to murder everyone, or even become benevolent autocratic controllers that dictate where you may go and what you may consume, but that we will use such systems to subordinate our own intellectual and skilled functions and lose the ability and initiative to learn such knowledge for ourselves. Just as the portable electronic calculator spelled the end of the slide rule and the skill of calculating rapidly with one, expert systems will erode the ability to do independent knowledge research. Of course, with modern calculators and computer algebra systems we can do calculations in seconds that would take hundreds of person-years of effort, so almost no one is decrying the end of the slide rule. Similarly, the rapid access to pertinent knowledge may allow much faster research and wider dissemination of novel ideas and data versus spending months slogging through tangential technical journals, and so such systems can provide a great benefit that far offsets the loss of the archaic skills of finding and skimming research papers.

If we ever lose the ability to maintain the infrastructure needed to support such systems (say, after a dramatic population die-off, a super-Carrington event, et cetera), we’ll find ourselves technologically backsliding. But the same is true for virtually any technological innovation from the steam engine and public sanitation to the microprocessor and the dynamo, and the more robust and redundant we can make such systems the less likely we are to suffer some kind of critical failure of civilization, so it makes far more sense to embrace synthetic intelligence systems with an eye toward security and control rather than to fight the inevitable tide of change.

Stranger

Great post overall.

Ref the snip above, a certain Mr. Munroe would like a word with you about tangential research and all the unexpected good things one gains along the way. We’d lose a lot if we automated that out. xkcd: Jet Lag. It’s even today’s issue. :smiley:

We are quite good at filling in holes in sparse data - but maybe not so good at getting the best, or even a correct, answer. Canals of Mars, anyone?
There are ways of dealing with this problem besides the system throwing up its robotic hands. IC diagnostic systems, for instance, which usually work with inadequate data, give a ranked list of possible failure sites. I believe that Watson does something similar, at least internally.
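As a toy sketch of what “give a ranked list instead of giving up” looks like, here’s a made-up naive-Bayes-style scoring of hypothetical failure sites from incomplete symptom data; none of the numbers or fault names come from a real diagnostic system:

```python
# Toy ranking of candidate failure sites under incomplete data.
# P(symptom | fault) and priors below are invented for illustration only.
likelihoods = {
    "broken via":        {"open circuit": 0.9, "high leakage": 0.1},
    "gate oxide defect": {"open circuit": 0.2, "high leakage": 0.8},
    "solder bridge":     {"open circuit": 0.1, "high leakage": 0.6},
}
priors = {"broken via": 0.5, "gate oxide defect": 0.3, "solder bridge": 0.2}

observed = ["high leakage"]            # inadequate data: only one symptom seen

scores = {}
for fault, sym_probs in likelihoods.items():
    score = priors[fault]
    for sym in observed:
        score *= sym_probs.get(sym, 0.01)   # unlisted symptoms get a small floor
    scores[fault] = score

# Normalize and print the ranked list rather than a single "I don't know".
total = sum(scores.values())
for fault, score in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{fault}: {score / total:.0%}")
```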

Maybe not a good example, since the slide rule is (was?) a tool, and using a slide rule for multiplication no more taught you the basics of multiplication than using a calculator. Mathematical systems which let you integrate without understanding integration are better examples. There are tons of others. Assembly language, for example.

There will always be some people understanding the details, but the number of such people shrinks as more and more people work at higher levels. I think a collapse is not going to come from intelligent systems taking over our understanding of how things work, but rather from our need for infrastructure to fix our infrastructure.
If electronics breaks these days, you can’t whip out a screwdriver, solder gun and multimeter to fix it.
But I do hope self-driving cars still come with steering wheels.

Good to see you’re back, Stranger, and in fine form.

My only comment is that the expert systems you have used may suck, but there are a huge number of approaches still being worked on. ‘Deep Learning’ is the buzzword of choice, but there are hundreds of ways to implement it.

I do think the problem you named is tractable. Eventually. My intuition is that to build an ‘expert system’ that can reliably solve that problem, you need to build up its ability to solve simpler problems and then to use the trained neural networks from simpler tasks to make reasonable guesses as to starting points.

Or to put it another way, imagine the utility function for a bridge is a 1-dimensional, non-convex plot that is extremely large. Wherever you start on that plot determines which local maximum you can optimize to.

To make a decent bridge that probably won’t fall down, you would want to have reasonable guesses as to starting conditions. The best way to get those guesses would be for the machine to look at existing, working bridges, and generalize what it measures about them to get some decent guesses as to where to start optimizing for an unknown bridge.
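To make the starting-point dependence concrete, here’s a toy hill climber on a made-up 1-dimensional, non-convex “utility” curve; the function, step size, and starting points are invented purely for illustration:

```python
# Toy illustration: hill-climbing a non-convex 1-D "utility" curve lands on
# different local maxima depending on where you start.
import math

def utility(x):
    # Two humps: a small one near x=1 and a taller one near x=4.
    return math.exp(-(x - 1) ** 2) + 2 * math.exp(-(x - 4) ** 2)

def hill_climb(x, step=0.01, iters=10_000):
    for _ in range(iters):
        if utility(x + step) > utility(x):
            x += step
        elif utility(x - step) > utility(x):
            x -= step
        else:
            break  # local maximum: neither neighboring point is better
    return x

for start in (0.0, 2.0, 5.0):
    peak = hill_climb(start)
    print(f"start {start:.1f} -> local max near x={peak:.2f}, utility={utility(peak):.2f}")
```

Starting at 0.0 or 2.0 strands the climber on the small hump; only starting near 5.0 finds the taller one, which is why good initial guesses (e.g. generalized from existing, working bridges) matter.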

In any case, nobody’s getting fired in that field anytime soon. The people that have to worry are doing tasks like “given these drawings, pick up widget A and place it over spindle B. Wiggle it into place”.

This is a problem that is hugely tractable. The machine has a goal - get the widget to a specific position without damage. It has a model to predict damage. It has a model for what should happen in future frames as it makes actions with its robotic arm. There are some newer robotic manipulator systems using electrostatics that can provide a very fine, delicate touch.

The machine gets immediate and accurate feedback whenever it makes an action, since cameras in a robotic test cell can track the positions of the components. Lidar is better at measuring depth than human stereoscopic vision and can help as well.
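A minimal sketch of that feedback loop, with a trivial simulated cell standing in for the cameras and the arm; every name and number here is a placeholder, not a real robotics API:

```python
# Hypothetical closed-loop placement: read the widget's position each cycle,
# nudge it toward the goal, stop when the error is within tolerance.
GOAL = (250.0, 120.0, 40.0)   # target widget position (mm) - made up
TOLERANCE = 0.5               # acceptable positional error (mm)
GAIN = 0.5                    # fraction of the error corrected each cycle

class SimCell:
    """Stands in for the camera/lidar tracking and the robotic arm."""
    def __init__(self):
        self.widget = [100.0, 100.0, 0.0]       # where the widget starts

    def get_widget_pose(self):                  # "camera" feedback each frame
        return tuple(self.widget)

    def move_widget_by(self, delta):            # "arm" nudging the widget
        self.widget = [p + d for p, d in zip(self.widget, delta)]

def place_widget(cell, max_cycles=200):
    for cycle in range(max_cycles):
        pose = cell.get_widget_pose()
        error = [g - p for g, p in zip(GOAL, pose)]
        if max(abs(e) for e in error) < TOLERANCE:
            return cycle                        # widget is in place
        cell.move_widget_by([GAIN * e for e in error])
    raise RuntimeError("did not converge")

cycles = place_widget(SimCell())
print(f"widget placed within {TOLERANCE} mm after {cycles} cycles")
```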

Anyways, this is a problem you can reliably solve, and better deep learning systems that use collected data from thousands, maybe eventually millions of factory robots at once could get extremely good at guessing what to do when given a new part to assemble or what to do when something goes wrong.

What is the definition of artificial intelligence? Does something need to be self aware with the ability to subjectively understand its own goals?

Overall, I don’t see why whether something is true AI or not is really the issue. People who look forward to AI look forward to it because it can result in accelerated problem solving and innovation. Right now it may mostly be used as a search engine; maybe in 10-20 years it’ll play an integral role in designing solar panels that only cost $0.10 per installed watt or finding ways to delay the process of aging.

The IBM Watson doctor described above may not mean much to Western physicians in the better hospitals, but in underdeveloped parts of the world a device like that could be very useful in advancing medical care where the only medical care people have is a nurse 30 miles away. I’m guessing a Watson-style device could probably be trained to provide economic advice too, which in theory could result in faster economic growth in underdeveloped nations.

Whether a device like that counts as genuine AI or not doesn’t seem like the major issue. If it helps accelerate our abilities to solve the problems facing our species, then that is the main concern. It seems like that ability is growing with time.

In some sense what is called AI today is a set of heuristics for solving problems - and of course doing so is important. But there are plenty of other methods for solving problems which haven’t captured the popular imagination.
When an average person hears AI, they don’t think of neural nets or sophisticated branch and bound algorithms, they think about a computer they can talk to and that can talk back, just like in scifi movies. And creating a new intelligence will certainly tell us something more about our intelligence, so it is more than saying “foo” to your robot buddy and it saying “bar” back.
But if people worked only on that problem, they wouldn’t make much progress, and their funding would dry up. So, call heuristic development (especially for things people do already) AI and you get funding for something you can accomplish. And which is useful.
It is marketing, and AI is hardly the first area to do it or the only one. Or the worst.

I don’t want to keep harping on this, but from my perspective it doesn’t matter if an AI has a consciousness with feelings or ambition or whatever. All that matters is that it can get the job done, and that it plugs away at a job it (a) never should’ve been tasked with as stated, because said job (b) will be simply and literally accomplished by an unfeeling and unambitious machine doing what it’s told. You want it to prevent unauthorized purchases? You want it to keep confidential information secure? You want any one of a hundred things, and phrase it in such a way that extinction guarantees one hundred percent success for a mere servant?