Future of AI

At first I thought you had the figure and the book wrong, so I looked it up; turns out I was thinking of this incident.

I found The Deadly Stamp Collector to be a fairly good talk about the dangers of an AI’s goals not being sufficiently steeped in human concerns.

ETA: Wow, really good post there, Stranger!

Good point ^^. If you want AI to function within the parameters of law, those will have to be defined by humans to guide the machine. Law is an established state religion, created by and for its priesthood (lawyers and judges), founded on faith with no empirical basis.

What do we do if the machine says a certain race is inferior?

ETA: Going back to Waldo’s most recent post:

I’m reminded of a portentous scene from Colossus: the Forbin Project:

Spoiler: The General goes ahead and tries. Colossus is *not* amused.

What are my thoughts on AI? Meh, wake me up when a robot can qualify for a boat loan, then I might have some thoughts.

It’s not going to happen, AI’s are smart enough not to buy boats.

That’s a valid concern. In the film Idiocracy, the premise was that humanity became stupid because of dysgenics. But far more likely, they could have become stupid because their world was so automated they didn’t need to think. Recall how even a “tard” could live a “kick ass life as a pilot” and corporations would automatically lay off millions of workers.

I’m actually doing some research for my firm on workforce automation and a major risk is firms losing process knowledge after they automate their business processes. These tools don’t update your processes for you. You still need people to tell them how the business runs.

Won’t documentation be sufficient? For quicker and easier understanding, videos or PowerPoint presentations could also be used…

I did have a bit of insight recently. It’s far too complex a set of ideas to really explain in this message board post, but in essence, what it boils down to is:

a. We may see AIs with superhuman performance at tasks with predictable outcomes far earlier than we see one that can actually ‘talk’.

Such tasks include: Driving cars. Moving around warehouses. Examining products for defects. Re-sorting and reshelving items in warehouses. Every step from the mine to final QC in modern industry. Prediction of a new mechanical product’s performance. Prediction of a new electrical product’s performance. Design of new mechanical and electrical products. Medical diagnosis. Rational development of new medicines. Reliable and accurate understanding of cell biology. Medical treatment, including surgery.

All these tasks I mentioned above, sorted more or less in ascending order of difficulty, have quantifiable, measurable, and predictable results. If the machine tries to develop a new product that meets certain design parameters, it can measure how close it actually got. It could experiment on cell samples massively in parallel to get a more reliable and accurate ability to manipulate biology than humans have. Even something like surgery, as horribly messy as it is, with things like leaking blood occluding what’s going on and every tissue having a different shape from person to person, has reliable and predictable measurements and outcomes. (though you’d want to have that AI surgeon practice on a few million animals first…)
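To make “measure how close it actually got” concrete, here’s a minimal sketch (with entirely made-up specs and weights, not any particular design tool) of the kind of quantified scoring an AI designer could optimize against:

```python
# Minimal sketch (hypothetical parameters): score a candidate design against
# quantified targets -- the feedback loop an automated designer could optimize on.

def design_error(candidate: dict, targets: dict, weights: dict) -> float:
    """Weighted sum of relative misses against each target spec (0.0 = perfect)."""
    error = 0.0
    for spec, target in targets.items():
        miss = abs(candidate.get(spec, 0.0) - target) / target
        error += weights.get(spec, 1.0) * miss
    return error

# Example: a made-up motor design measured against its required specs.
targets = {"torque_nm": 2.5, "mass_kg": 0.8, "efficiency": 0.92}
candidate = {"torque_nm": 2.3, "mass_kg": 0.9, "efficiency": 0.90}
print(design_error(candidate, targets,
                   weights={"torque_nm": 2.0, "mass_kg": 1.0, "efficiency": 3.0}))
```

The point is just that every one of the tasks above can be reduced to a number like this, which is exactly what current machine learning techniques need to improve against.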

Talking to humans…our language is so imprecise and ambiguous. Right/wrong answers aren’t even reliably measurable because humans lie!

Sure, but the point is that an expert knowledge system can actually help you connect the dots between different fields of knowledge (or at least put you on the trail of it) without you having to personally spend time skimming through and trying to understand disparate fields of knowledge. This isn’t even a difficult task in the primitive case of something like playing a game of Wiki-Link; one of the standard introductory data mining challenges is to write a script to automatically find the shortest connections between two topics on Wikipedia or some other online knowledge resource. If you are a researcher working in the neurophysics of neurotransmitter interactions and want to see what the latest research is on quantum biochemistry, you could take a few courses on quantum chemistry, dig through a bunch of technical journals and arXiv papers, and bug your colleagues down in the molecular biochemistry and quantum physics groups, or you could prompt your “expert system” to give you a survey of recent advances pertinent to your question, and then go ask more pointed questions of your colleagues. In either case, the system isn’t going to understand things for you, but it may help to focus your inquiry while also providing additional information, and in doing so, free you from having to do a lot of irrelevant research while pointing out things that might be actually interesting.
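For what that Wiki-Link exercise looks like in practice, here’s a rough Python sketch: a breadth-first search over article links via the public MediaWiki API. The endpoint parameters are as I recall them, the topic names are placeholders, and rate limiting, redirects, and error handling are all left out:

```python
# Rough sketch of the Wiki-Link exercise: breadth-first search over article links
# using the public MediaWiki API. Only the first page of link results is used.
from collections import deque
import requests

API = "https://en.wikipedia.org/w/api.php"

def links_from(title: str) -> list[str]:
    """Titles of articles linked from `title` (first batch of results only)."""
    params = {"action": "query", "titles": title, "prop": "links",
              "plnamespace": 0, "pllimit": "max", "format": "json"}
    pages = requests.get(API, params=params).json()["query"]["pages"]
    page = next(iter(pages.values()))
    return [link["title"] for link in page.get("links", [])]

def shortest_path(start: str, goal: str, max_depth: int = 3) -> list[str] | None:
    """BFS from `start` until `goal` is found or max_depth hops are exhausted."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if len(path) > max_depth:
            return None
        for nxt in links_from(path[-1]):
            if nxt == goal:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(shortest_path("Neurotransmitter", "Quantum chemistry"))
```

An expert system doing real literature survey work would obviously need far more than link-hopping, but the “connect the dots between disparate fields” task starts from exactly this kind of graph search.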

That’s absolutely true, but Lowell’s canals were not universally accepted (in fact many authorities and enthusiasts disputed the notion for various reasons) and they spurred a lot of useful discussion that directly led to the fundamentals of planetology as we know it. As difficult as it is to imagine now, before Schiaparelli and Burton’s observations all planetary bodies were assumed to be essentially featureless (save for Earth’s Moon), and there was virtually no speculation about variation in planetary climates. By the 1950s, planetary scientists had broadly worked out the environment of Mars, although there was still hope that the seasonal variations represented some particularly hardy form of surface life, hopes that were dashed by the first Mariner missions.

Sure, but my point was that the slide rule was an indispensable tool for performing complex calculations prior to the electronic calculator, and aside from memorizing the logarithm tables there was a whole host of tricks and techniques which are now virtually unknown because they have essentially no utility in computer algebra systems. It isn’t just knowing how to work the slide that is gone; it is the training and discipline behind performing calculations in that way at all.

I agree with the position that a failure to maintain infrastructure is the most likely path to regression in industrial and knowledge capability, but key to maintaining that infrastructure is the knowledge and skills of how to build things, and build the tools that help you build things, et cetera.

Oh, gods, documentation and training by PowerPoint is the bane of my existence. If you have people working at a job that is so simple that they can learn to do it by going through a slide deck or watching a few online videos, then it probably didn’t require that much skill to do to begin with. Those mediums can only contain so much information and don’t provide any kind of genuinely interactive teaching or knowledge transfer. (No, not even the ones that stop and make you perform some kind of multiple choice test after each module.)

I suspect the kinds of jobs that msmith537 is referring to are bureaucratic and clerical type positions which, while practicable for automation, require an often arcane knowledge of procedures and policies (both written and informal) in order to make them work correctly, and may be domain or site specific, e.g. depending on individual worker knowledge of “how things are done”. Trying to impose a monolithic set of policies may disrupt the flow of that process, and without an understanding of what changed and how, it can become very difficult to figure out how to repair or modify the newly automated process system to work. I’ve seen numerous well-intentioned attempts to have management consultants and system architects come in and create an enterprise-wide system for various purposes such as product lifecycle management (PLM) or organizational management and performance metrics (OMPM) systems without really understanding how the system would be implemented and used at the basic user level (or how the resulting statistics and metrics would be “juked” at the management level to meet performance expectations), resulting in a charlie-fox of tens of millions of dollars in wasted effort and frustration by consumers of the system.

Stranger

What is the place of theory and experimentation in the process of AI? Can AI even generate a theory, and how does it test the theory?

Let’s take a theory, say, that immersing children in ice water for long durations leads to astounding intellectual achievement, without significant social or psychological consequences. How does an AI machine even dream up such an idea? And from where does it get experimental subjects to test the theory? Or even find data sources to run a simulation?

In fact, how does AI, at any point along the way, gauge the social utility of anything it deems to be “the truth” without a valid sample of humans who have been exposed to that reality, not to mention a sense of whether they liked it or not?

Future humanity will not be desperate for scientific and technological advancement, but for moral and ethical social advancement. I don’t see AI being very useful for that.

Many of these things already exist.

Done. Not perfectly maybe, but getting there.

Not sure what you mean, but if Amazon does not have this already it will have it soon.

Again, not sure exactly what you mean, but we’ve had stuff like this for years. For instance, when a circuit board is populated, a camera system looks for bad solder joints or misaligned components. Electrical testing of components has been automated for decades. The tests applied to microprocessors and other ICs are automatically generated, using heuristics, not AI. (AI can do it, but not as efficiently as the best algorithms.)
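As a toy illustration of that kind of heuristic camera check (not how any real optical-inspection system works; the thresholds and names here are invented), the core idea is just flagging a joint whose bright-pixel fraction falls outside an expected band:

```python
# Toy illustration of a heuristic optical-inspection check: flag a solder joint
# whose bright-pixel fraction is out of band. Threshold values are invented;
# real AOI systems are far more sophisticated.
import numpy as np

def joint_looks_bad(joint_region: np.ndarray,
                    bright_thresh: int = 180,
                    expected_fill: tuple[float, float] = (0.35, 0.75)) -> bool:
    """joint_region: grayscale image crop (uint8) around one solder pad."""
    fill = float((joint_region > bright_thresh).mean())
    low, high = expected_fill
    return not (low <= fill <= high)   # too little solder, or a bridge/blob

# Example with a synthetic 20x20 crop standing in for a camera image.
rng = np.random.default_rng(0)
crop = rng.integers(0, 255, size=(20, 20), dtype=np.uint8)
print(joint_looks_bad(crop))
```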

If it can be done, Amazon will do it.

Way too much interaction to be totally automated, though factory automation systems which schedule product flow and do ordering is some of it.

Don’t know about mechanical stuff, but you sure as hell can predict the performance of a microprocessor before you make it. And you do so all the way from the architectural level down to the circuit level. Again, through simulation, not AI.
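In that spirit, here is a heavily simplified sketch of “predict the speed before you build it”: estimating a toy combinational circuit’s critical-path delay from per-gate delays. The gate names and delay numbers are invented, and real static timing analysis also models wire loads, slews, and process corners:

```python
# Heavily simplified sketch of performance prediction by analysis rather than AI:
# longest-path (critical path) delay of a toy combinational netlist.
from functools import lru_cache

# netlist: gate -> (delay_ns, fan-in signals); primary inputs have no entry.
NETLIST = {
    "g1": (0.10, ["a", "b"]),
    "g2": (0.12, ["b", "c"]),
    "g3": (0.08, ["g1", "g2"]),
    "out": (0.05, ["g3", "c"]),
}

@lru_cache(maxsize=None)
def arrival(signal: str) -> float:
    """Latest time the signal settles, assuming primary inputs arrive at t=0."""
    if signal not in NETLIST:
        return 0.0
    delay, fanin = NETLIST[signal]
    return delay + max(arrival(s) for s in fanin)

print(f"critical path delay: {arrival('out'):.2f} ns")
```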

Creative design, no. Detailed design, already happening, since the detailed gate level design of digital logic has been automated for a long time, to the point where I’ve met with designers who never had to read a netlist. (Another example of Stranger’s point.)
As for the rest, you use understanding in a way I don’t understand. A simulation model of the cell - maybe. True understanding? It’ll take a while.

I’ve heard this problem before - like 50 years ago when the consultants came in with their spanking new IBM 360 to automate work flow in an office.
Not to mention Tracy and Hepburn in Desk Set 10 years before that!

Agreed - as far as I understand it (from listening to the people actually working in the field of machine learning), any General Intelligence AI that is functional and useful is pretty much guaranteed to be risky.

Here’s a rather interesting thought experiment about an AI which is given the task of collecting stamps (the whole video is interesting, but the relevant part starts about 3 minutes in):

Tim Urban from the Wait but Why blog has a pretty good, lengthy write-up on the advent of AI: The Artificial Intelligence Revolution

I can understand. I guess in theory it can be done, but it would be a very lengthy and boring task, plus one has to update the documentation for any small change in the thing that was documented.

Thank you for linking this.

It is indeed quite long, but it is well researched, annotated, and written. Even though it’s over two years old, the core concepts are all still valid and in play.

I highly recommend this article for anyone participating in this thread; the wealth of terminology and background can only make this discussion more meaningful.

My working life is directly involved in using machine learning, predictive analytics, neural networks, and “AI” to solve problems or drive various business impacts, so I feel qualified to comment a little here.

I think there are really two problems stemming from AI that people are worried about, and these two problems sometimes get conflated.

The first problem is jobs and capital owners, as mentioned by LSLGuy. In the short term (last 5 to next 10 years), the techniques and tools are getting better on a pretty steep curve, and vast new classes of previously too-complex problems are opening up as amenable to solving. This is probably good in the aggregate. It will give us some great things like self-driving cars, better supply chains and logistics, better medical diagnoses, better operations, and a bunch of other stuff that Stranger and SamuelA and Wesley Clark among others mentioned earlier. But it will also eliminate vast swathes of jobs, both white and blue collar, because jobs that were formerly too complex either in 3D space or in terms of knowledge background and business processes will suddenly be amenable to automation and expert systems. This is happening today, and it’s part of the business impact that my team and I drive, and that lots of other data scientists and consulting firms are driving. And this is only going to accelerate in the next 10 years, because it already drives tens to hundreds of millions of dollars in value annually for large companies, and as the solution space expands, that value expands accordingly.

Economically, the folk that are paying for this automation are capital owners, whether individuals or corporations, and all financial benefits from these jobs being automated away are going to go to those capital owners, while all the problems from vast swathes of the newly unemployed and unemployable-at-a-living-wage folk whose jobs no longer exist are going to be socialized to all of us. Although I have my doubts and reservations about this problem actually being solved given our current political landscape, this is at least a solvable problem, in the sense that higher taxes and a Basic Income could theoretically address this.

The second problem is unfettered general “strong” AI with physical execution capability, aka Skynet. For those of us who enjoy reading authors like Peter Hamilton, Neal Asher, and Iain Banks, who postulate pan-galactic post-scarcity societies with strong AIs of all levels peacefully coexisting with humans and other intelligent species, it has likely occurred to you that the general theme of active AI benevolence towards humans and organic creatures is an absolute requirement for those societies and stories to exist; if it weren’t there, humans would have been wiped out either intentionally or as an unfortunate byproduct of some larger project, given that these are godlike meta-beings whose motivations and thought processes have the same relation to us as we do to paramecia.

This second problem is the harder one, because there’s really no way to guarantee that any strong AI created will indeed be actively benevolent towards us, or even that in the aggregate there will be enough benevolent ones vs. antagonistic or indifferent ones to ensure our survival. And per LSLGuy, because there are such huge advantages available, it is a nigh-certainty that if it is possible to create strong general AI with physical execution capability, it will be done somewhere by someone. And at that point the cat’s out of the bag, and we just have to hope and pray.

Although this problem is a lot further out than the first, I think it’s ultimately the harder one, and it doesn’t really have a solution even in theory as we currently stand, barring vast changes in human nature and societal organization.

I suppose it’s possible to thread that needle with a long enough time frame prior to it happening and enough interstitial steps of progressively augmented humans and the different societies they form, so maybe that’s what we should be hanging our hats on. Everyone be sure to invest in any mind-machine interface companies as they come up, because that’s one eight ball we should all try to stay ahead of!