Artificial Intelligence

I would love a factual answer to my question, preferably in small words (non-sequential serial numbers, please ;)), regarding artificial intelligence, but informed opinions are welcome as long as they are stated as such.

Ok, so I was reading this article today and the last full paragraph makes some mention of AI.

My questions are:
Do we really expect a “human-like” artificial intelligence? By that I mean something that we could exchange meaningful communications with.
Given the purported size and apparently growing complexity of the “Total World Wide Internet” (my term), is there any reason to expect that a feral AI, at a human level of cognizance and self-awareness, wouldn’t emerge in the wild?
Is there any reason to expect that an AI, even one cultured in a lab, would even be interested in communicating with us?

I’m going to bed; this cold is kicking my ass.

The intermediate stage of “modeled intelligence,” i.e., intelligence that resembles our minds because it deliberately imitates our minds, will probably come along as the breakthrough case.

This kind of AI will want to talk to us, because we’ll have designed it that way.

Before this, we’ll develop better and better “expert systems,” such as the Google search function or Siri.

Feral AI might come about if we assign a lot of machine design to our machines. The result might be alien to our way of thought, but who can guess? It might be surprisingly familiar.

That’s possible, but I’ve always been skeptical of that approach, and it’s certainly not necessary in order to develop human-like traits and even consciousness. It may be more useful as an exercise in understanding human cognition than as a way of creating the artificial variety of it. Indeed, if one accepts that human cognition is fundamentally computational (some don’t), it’s rather astonishing that one would think this a promising method of developing AI. It seems a lot like believing that in order to build an airplane, you must imitate a bird. It turns out that the more sophisticated airplanes became, the less they resembled anything like a bird. Birds are the way they are only because they’re evolved biological organisms made of meat and feathers, completely lacking jet turbines.

You’d have to be a lot more specific about what you mean by “human-like”. We’ve long had AI-like systems that we can communicate with to get useful information via natural language, most recently ordinary speech. But if you mean a human-like conversation with something that appears to possess consciousness and can be interesting and witty, it’s inevitable but probably still decades away.

Yes and no. Yes, there’s good reason to believe no such thing could happen, because a sufficient level of processing and storage capacity is merely a prerequisite for intelligence, not the enabler. You can no more get intelligence to arise out of a lot of processor and storage capacity than you can get a car or a jet plane to arise by throwing a lot of parts and bolts into a big pile. What’s lacking is the necessary logical organization.

The other side of that coin is that we have a lot of sophisticated commercial and government systems that are more and more interconnected. Not only are we totally dependent on these things for survival, quite literally, but data mining and data aggregation are becoming so pervasive that they can lead to all kinds of strange consequences that no one could anticipate in a world that’s totally dominated and driven by information. This is nowhere near a “feral AI”, or AI at all, but it’s an unplanned synergy that is greater than the sum of its parts. To the extent that some of these component systems are AI, and may interact with other AIs, it leads to an escalating scale of unpredictability and potential loss of control. So in that sense there is some risk there, though it’s not quite in the sense you may have meant.

The concept of “interested” presupposes consciousness and free will. I think those are simply emergent properties of complexity and AI systems will eventually possess those traits, but presumably we’ll have created them in our image, so at least for a while we’ll have their interest and attention. But perhaps by then we’ll be evolving ourselves, too. Rather than continuing to exist as slow-witted meat-based organisms, we may choose to join our more elevated creations by becoming cybernetic hybrids ourselves.

Get an iPhone, it comes with Siri. You can talk to it, and it talks back. It’ll even have a snarky comment or two once in a while.

Yes, this is artificial intelligence. So are the computer-generated opponents in your favorite video game.

There are some chatbots that can engage in conversations that seem very human for good stretches of time, but if you probe them you’ll find out that they don’t really understand the world.

As for consciousness, how can we tell? If a chatbot claims to be conscious, that’s probably programmed in and doesn’t mean anything. But maybe it really has come alive. How do we know?

I don’t think we’ll see human-like AI if only for the simple reason that we have billions of human-like natural intelligences already and those are cheap, no need to pay good money for a knockoff.

See here for a Slate article from yesterday discussing AI in some depth. A truly self-aware AI that is “human-like” is not expected anytime soon, if ever. It’s a complex problem that is well beyond anything we have envisioned.

So far, I’m not impressed with AI in online gaming. In online Monopoly, I can beat three opponents set on Tycoon at least 90% of the time. In online Yahtzee, I win more than half my games, even though I refuse to ever score a Yahtzee for myself. My average score is 4 points below the computer’s, yet I win more than 50% of games without ever scoring a single Yahtzee.

Having said that, I don’t know how hard the AI is trying to win. In Monopoly, even though I have developed an assured win, I believe the game is rigged to prolong the outcome, to allow more ad breaks before I wrap it up. In backgammon, when the live opponent quits, the computer takes over and plays like a five-year-old.

I think that, in order to create a human-like AI, you would need to add some special structure to the program that lets it loop ideas back on themselves, and then develop a whole education system that lets it experience the world the way we do, interact with that world, and learn the way we learn.
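To make the “looping ideas back” part concrete, here’s a toy sketch (entirely hypothetical; the update rule is a made-up stand-in) of a system whose output at each step is fed back in as part of its next input:

```python
# Purely illustrative: the state depends on the system's own prior
# "thought" as well as on the new input. The 0.9 decay and the
# averaging are arbitrary placeholders for a real update rule.
def think(state, observation):
    return (0.9 * state + observation) / 2

state = 0.0
for observation in [1.0, 0.5, 0.0, 0.0]:
    state = think(state, observation)  # the previous idea loops back in
    print(round(state, 3))
```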

Maybe we could do all that, but there’s no strong reason to and some practical (e.g., it’s unknown what structures need to be incorporated) and ethical reasons not to.

AI as a fancy mechanical device that simply learns how to correlate things, with limited guidance, is easier to create, carries no hard ethical implications, and is much better as a product.

No. The internet may be big, but so is the network of all roadways in the world. Unless you’re worried about the blacktop you drive on gaining intelligence on its own, you shouldn’t worry about the internet doing so either. It’s basically the same thing: just a bunch of pathways for data to travel along.

A computer intelligence could certainly access a lot of information via the internet, and possibly (by developing computer viruses) infect other machines over the internet, allowing it to incorporate those machines as extra processing power for its mind. But that’s a different prospect. Someone would still need to develop the AI in the first place.

For a genuine intelligence to grow, it would need to be curious. Without curiosity, it couldn’t become intelligent. Ergo, it would be a fundamentally curious entity, and thereby curious about us.

There’s some chance that we wouldn’t really appeal to its curiosity, but given that we’re the most advanced beings around, that’s unlikely.

In practical terms, AI is defined as “that which we don’t have yet”. Time after time, AI researchers and pundits have proclaimed that once we had a machine that could do X, we would have true AI. Then, not too long after, we got a machine that could do X, and the researchers and pundits said yes, but that doesn’t count, what we really need is Y, and so on.

I would maintain, though, that the current state of the art is what we’ve actually always wanted, not what we thought we wanted. In fact, it’s far beyond what we actually wanted. Nobody ever actually wanted a thing that could think like a human -- as iljitsch points out, we already have plenty of those. What we really want is something that can think unlike a human. The measure of an AI is not in how many ways it’s worse than us: it doesn’t matter if it’s worse than us at writing sonnets or painting pictures or whatever, because we can just do those things for it. What matters is in how many ways it’s better than us, and how much better than us it is at those things.

Every 1950s science fiction author envisioned robots that can do what a human can do. But almost none of them envisioned robots that can do what Siri can do.

First, anyone who understands how the web works knows that no feral intelligence would suddenly pop up, though I know that this is a common sf trope. The interaction of computers on the web is fundamentally quite simple and not something that is going to grow into AI.
I agree with Trinopus that we will design AIs to talk to us. But we should be clear about the two meanings of “AI”, which are too often treated as the same thing.

I took AI about 45 years ago at MIT. While the ultimate goal was pure intelligence, the subgoals were to develop heuristics which could do a subset of the things that humans could do. I think there was a perception that if you stitched together these subsystems you’d have true intelligence. Much of the stuff we studied is now available - chess playing, equation solving, giving directions, speech understanding, vision. AI has succeeded in that arena. But in developing a computer with real intelligence we don’t seem to have made any progress at all. Yeah, Siri is better than Eliza. And a lot more sophisticated. But is our attribution of intelligence to Siri any different from how some people thought Eliza was intelligent? I’m not sure.
I’m not one of those who believe that there is a fundamental barrier to strong AI, just that no one has come up with any sort of method which can lead to it yet. It is still too big a problem, and solving the little ones is a lot easier and more likely to get you funding.

The real, fundamental difference between Siri and Eliza is that Siri is useful, while Eliza was never more than a toy. Does whatever Siri has that makes her useful qualify as “intelligence”? Maybe not, but does it matter?

To me, AI is the ability of a computer to comprehend its environment (reading documents, watching videos, listening, etc.) and then use that information to engage in problem solving; the broader this ability, the better.

On that front, it feels like we are on the cusp of an AI revolution: an AI that can read millions of documents, actually understand them, and then act like an oracle of sorts, where you ask it a question and it gives you the answer. Google already does this to a large degree. It is fairly easy to find an answer, or at least the right direction toward an answer, within a few minutes of web browsing. In time that capacity will continue to get better.
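As a crude illustration of that “oracle” idea (a hypothetical toy, nothing like Google’s actual pipeline), the simplest version just ranks documents by word overlap with the question and returns the best match:

```python
# Toy document store; a real system would hold millions of pages.
DOCS = [
    "Eliza was an early chatbot built on keyword pattern matching.",
    "Watson answered Jeopardy questions using many NLP subsystems.",
    "Siri is a voice assistant that parses spoken natural language.",
]

def tokenize(text):
    # Lowercase, drop trailing punctuation, split on whitespace.
    return set(text.lower().strip("?.!").split())

def answer(question):
    q = tokenize(question)
    # Score each document by how many question words it shares.
    return max(DOCS, key=lambda doc: len(q & tokenize(doc)))

print(answer("Which chatbot used pattern matching?"))
```

Of course, this finds text rather than understanding it; the interesting question is how much of the distance from this to “actually understand them” is engineering and how much is something we don’t know how to do at all.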

Even now, Google has featured snippets in the search function, which it didn’t have a few years ago. You ask it a question and Google usually comes up with the right answer and posts it as a snippet.

There are robots that can learn by watching YouTube videos, and I know various groups are working on improving the comprehension of Watson-type systems so they better understand the millions of pages they are reading.

If you piled up a mountain of 2x4s really, really high, would it suddenly become a really nice house with a beautiful floor plan and great finish work? Or an awesome city?

Human (and other animal) intelligence is a complex set of very specific skills, capabilities, and attributes that don’t just appear without some sort of “push” or guidance.

In our case (animals) the push was survival and competition. Every new valuable skill was added to the repertoire and combined with others over millions of years.

What is the push for the “internet” to gain these same types of skills?

Is having access to lots of text enough to cause it to reconfigure itself into a thinking entity? (I don’t think so)

Now if someone unleashed a group of entities on the internet that were programmed to replicate, compete, and evolve, then at least you’d have some groundwork that could move in that direction. But even then it’s possible that the specific conditions that would reward intelligence don’t exist in that environment, so you may get nothing too interesting. (This is actually one of my favorite sub-topics in AI: what environmental conditions are necessary to allow the possibility of evolving human-like intelligence?)
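For anyone who hasn’t seen that machinery, here’s a minimal sketch of the “replicate, compete and evolve” groundwork (a made-up toy, not any real system set loose on the internet): a population of bit-string entities driven by an explicit fitness function.

```python
import random

GENOME_LEN = 32   # bits per entity
POP_SIZE = 50     # entities competing each generation

def fitness(genome):
    # Stand-in goal: count the 1-bits. As noted above, nobody knows
    # what fitness landscape would actually reward intelligence.
    return sum(genome)

def mutate(genome):
    # Each bit flips with 1% probability when an entity replicates.
    return [bit ^ (random.random() < 0.01) for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for generation in range(100):
    # Compete: the fitter half survives. Replicate: survivors copy
    # themselves, with mutation, to refill the population.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(POP_SIZE // 2)]

print(max(fitness(g) for g in population))  # approaches GENOME_LEN
```

Everything interesting hides in `fitness`: swap in “count the 1-bits” and you get bit-counters; nobody knows what to swap in to get thinkers.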

For those who think my questions were too vague, that is a function of my profound lack of understanding of the actual science and state of the art in artificial intelligence.
Regarding the appearance of a feral AI in the wild, my thinking was that the size (and yes, storage capacity) of the internet, and mostly the (growing) complexity of the internet, all the connections big and small, would be what eventually trips something over into self-awareness and intelligence.
Yanno, I think what I’m really asking about is just that: artificial self-awareness.
Good God, how would you even know?
**RaftPeople**: I seem to recall, back in the days of the print magazine, reading in Discover about some MIT student or other who was doing exactly that: writing small programs whose entire purpose was to “mutate, grow and learn,” or something like that, and letting a few of them loose on the internet. Part of his doctoral research, I think, on evolution actually, but with strong implications for the development of AI in (at that time) novel ways.

Nicely said, especially the first paragraph. Another way of interpreting this phenomenon is to say that we tend to dismiss as “real intelligence” anything for which there is a mechanistic or computational explanation, regardless of how sophisticated, useful, or actually intelligent the behavior is according to prior criteria. As Marvin Minsky was fond of saying, “when you explain, you explain away”. But take someone out of the 1930s and plunk him in front of a machine that plays chess at a grandmaster level, and you won’t have any convincing to do at all that this thing is intelligent. It’s a perennial moving target. Human intelligence enjoys an unwarranted special exemption just because we don’t fully understand its underlying mechanisms, yet machines built on entirely different mechanisms continue to exceed its capabilities in specific domains – as in my bird vs. jet airplane analogy.

I agree with the comparison but that’s not the fundamental difference. The fundamental difference is that Eliza had no semantic understanding and was really just doing pattern matching and randomized table lookups, whereas Siri, just like IBM’s Watson, must have significant elements of semantic and context-based understanding in order to be useful. That alone elevates it one major step higher in the echelon of AI.
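To see how little machinery “pattern matching and randomized table lookups” requires, here’s a toy Eliza-style responder (my own minimal sketch, not Weizenbaum’s actual script):

```python
import random

# Keyword -> canned replies; {rest} echoes back part of the input.
RULES = {
    "i am":   ["Why do you say you are {rest}?",
               "How long have you been {rest}?"],
    "mother": ["Tell me more about your family.",
               "Does your mother come up often?"],
}
FALLBACK = ["Please go on.", "I see.", "Very interesting."]

def respond(text):
    lower = text.lower()
    for keyword, replies in RULES.items():
        if keyword in lower:
            # Echo back whatever followed the keyword, Eliza-style.
            rest = lower.split(keyword, 1)[1].strip() or "that"
            return random.choice(replies).format(rest=rest)
    return random.choice(FALLBACK)  # the randomized table lookup

print(respond("I am sad about my exams"))
```

There is no model of the world anywhere in there, which is exactly the difference being pointed at.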

Yeah, but Eliza was never meant to be useful, and Siri has tens of thousands of times more code in it than Eliza did. (I think I’ve seen Eliza’s code - it’s surprisingly small.) The point is not utility, but our ability to fool ourselves into finding intelligence inside clearly non-intelligent things.

You are confusing data mining with understanding. Google is great at aggregating the intelligence of web surfers and linkers and giving it back to us. I’ve done data mining which has found connections that our experts hadn’t yet found, but none of this even approaches understanding.
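That kind of connection-finding can be completely mechanical. A bare-bones sketch (the numbers are invented, not real mining output): flag any pair of columns that happens to correlate, with zero grasp of what the columns mean:

```python
from itertools import combinations
from statistics import correlation  # stdlib, Python 3.10+

# Invented columns; the program knows nothing about their meaning.
data = {
    "ice_cream_sales": [10, 14, 20, 26, 30],
    "sunburn_cases":   [2, 3, 5, 7, 8],
    "library_visits":  [40, 38, 41, 39, 40],
}

for a, b in combinations(data, 2):
    r = correlation(data[a], data[b])
    if abs(r) > 0.9:
        # A "connection" is found, with no idea of cause or meaning.
        print(f"{a} ~ {b}: r = {r:.2f}")
```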

Someone from IBM published a paper about a self-modifying program - at the code level, not the table level - in 1959. Self-awareness is the key, I agree, and the internet is not going to get there accidentally. GAs have some sort of fitness function which says which mutations are closer to the goal. Figure out one for self-awareness and you might have something.
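For anyone curious what “self-modifying at the code level” even looks like, here’s a trivial toy in that spirit (my own sketch, not the method from the 1959 paper): the program rewrites the source of one of its own functions and then runs the rewritten version.

```python
import random

SOURCE = "def step(x):\n    return x + {k}\n"

def mutate(src):
    # The "mutation" is just swapping a constant in the source text.
    return src.format(k=random.randint(-3, 3))

namespace = {}
exec(mutate(SOURCE), namespace)  # compile and load the mutated code
step = namespace["step"]         # now running code the program rewrote
print(step(10))                  # 10 plus whatever constant was chosen
```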

Now there’s an interesting question! This topic is at the heart of many of my own burning questions. I would really love to see what a non-human intelligence would make of our world as a data set.

That being said, I believe that we won’t get what you’d call a human-like AI until we add in some analogue of an endocrine system. The human perspective is indeed in large part composed of curiosity; but there are many other more or less subtle constituents.