Is Making Intelligent Computers Such a Good Idea?

Can Watson do anything besides answer Jeopardy questions?

“Computers are useless. They can only give you answers.” –Pablo Picasso

Wolfpup,

Given:

At some level of complexity all intelligent systems are yielded from unintelligent components

The converse is not valid:

At some level of complexity all unintelligent components will yield intelligent systems

So, we can identify 3 categories of unintelligent components:

A - unintelligent components that do not have the potential for yielding systems that are intelligent

B - unintelligent components that have the potential for yielding systems that emulate intelligence

C - unintelligent components that have the potential for yielding systems that are intelligent

Then the following arguments are valid:

  1. Intelligence is not an emergent quality of complexity for systems constructed of category ‘A’ components

  2. The ability to emulate intelligence is an emergent quality of complexity for systems constructed of category ‘B’ components

  3. Intelligence is an emergent quality of complexity for systems constructed of category ‘C’ components

Current AI, and the LEGO roundworm, are covered by #2.

Quite a few things

https://www.fastcompany.com/3065339/can-ibms-watson-do-it-all

I’m a total novice at AI, and computer science isn’t my background. But when people talk about AI, they sometimes distinguish narrow AI from general AI. Narrow AI is something like cruise control or managing a subway system: the AI can do a good job at that, better than a human, but it can’t do anything else.

General intelligence is when an AI can do the endless thousands of things that a human being can do.

I think we are in some kind of in-between age now. Devices like Watson and DeepMind aren’t exactly narrow, because they can both be taught to do a wide range of things: play all sorts of video and board games, give legal or medical advice, teach classes, manage the cooling systems in server farms, drive a car, invent recipes, etc.

I’m wondering if we are in the age of something like broad-narrow AI: AI that can be taught to do hundreds, maybe thousands, of things, but not the endless tens of thousands of things a human being can do.

Wesley Clark,

Good observation. As some have pointed out above, we have entered this discussion without any definitions.

If we eliminate self-awareness, the definition of intelligence gets easier:

The ability to do is Automation. The ability to adapt is Intelligence.

I agree with your combination concept, but I believe the current ratio is: AI = 0.9A + 0.1I
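To make that split concrete, here’s a toy Python sketch, entirely my own invention and not anyone’s actual controller: the first rule only “does”, while the second adjusts its own rule from feedback.

```python
# Toy sketch (my own, not any real product): "doing" vs. "adapting".

def fixed_cruise(speed, target=100.0):
    """Automation: the same hard-wired rule forever."""
    return 1.0 if speed < target else 0.0   # throttle on/off

class AdaptiveCruise:
    """A crude sliver of 'intelligence': it retunes its own rule
    based on the error it keeps observing."""
    def __init__(self, target=100.0, rate=0.05):
        self.threshold = target
        self.rate = rate

    def control(self, speed, observed_overshoot):
        # Adapt: if we keep overshooting the target, back off earlier.
        self.threshold -= self.rate * observed_overshoot
        return 1.0 if speed < self.threshold else 0.0
```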

I always assumed (as an amateur with no background in the field) that a definition of intelligence would possibly be something akin to

‘the ability to engage in intentional, goal oriented behavior in a hostile environment’

Which means that you have goals that are not achieved naturally in your environment, so you have to understand your goals and understand your environment, then innovate solutions to make the environment one where your goals can be achieved.

Bacteria engage in goal oriented behavior, but it isn’t intentional.

A definition like that wouldn’t require self-awareness. Google isn’t self-aware, but it helps me achieve my goals. Then again, Google isn’t intentional either, but I am.

I don’t know the definition of intelligence. Honestly, I don’t think it’s that important. What is important is that the problem-solving capacities of the human race improve due to AI. If that happens, then AI is a success no matter the definition.

An AI that was basically an oracle, one you could ask a question and have it innovate a brilliant solution, would go a very long way toward promoting human problem-solving abilities, even if the oracle had no self-awareness or goals of its own. If anything, that would be preferable, because an AI with its own goals may not mesh those goals with human goals. You could ask it ‘how can I improve crop yields by 20%?’ or ‘how can I improve survival rates for pancreatic cancer?’ and it would come up with very innovative ideas.

Wesley Clark,

How about an altruistic computer that provides instant political oversight: instant fact-checking of all utterances by all politicians, instant summation of all statistical references, and a constant display of all contacts to and by each politician, streaming on screen while they are speaking.

Have the computer design the American history curriculum for elementary schools, based on the information in its database.

Have the computer evaluate our economic and legal systems using the preamble of the Constitution as a guide.

Have the computer decide what to teach elementary school students about religious institutions.

Oh yeah, have the computer compare the economics of single payer against any other health system.

I agree. It sounds like you’re now on board with the idea of intelligence as an emergent property.

There is no difference between (B) and (C). Absolutely none.

Since the previous argument was false, this is false, too. Specifically, again, there is no difference between (2) and (3). You can see that in the fact that the LEGO robot behaves just like a roundworm because its computer contains an exact copy of the roundworm neural connectome. It emulates the neurons, but it doesn’t “emulate” the roundworm’s brain – it IS the roundworm’s brain. Futurists like Ray Kurzweil believe we’ll be able to do the same thing with a human brain well before the end of the century.

I more or less concur except that I don’t think it’s useful or appropriate to define intelligence in a way that requires broad generality. It should be defined in terms of its capabilities in any given domain, and if that sounds vague, it’s meant to be, because intelligence is a continuum, not a threshold. Thus I would not consider cruise control to be intelligence by any reasonable definition, but a self-driving car which may often have to make complex decisions with imperfect and possibly contradictory data would be. In between we have things like Tesla’s autopilot feature, which is not clearly on one side or the other – a good example of this continuum.

Right. They have other applications of the technology, but from what I’ve read they aren’t making any money with it.

A real achievement, but not quite what has been described.

I went to http://openworm.org/ and was offered a worm simulation program for sale. You can get details if you cough up some bucks for their software. Not sure if this is real or just a sales gimmick.

A neural net consists of nodes, weights, and an interconnection pattern. The interconnection pattern and the positions of the nodes for the worm were known, so the configuration was modeled in software and published on the web. Each interconnection was given an IP address so that anybody who pays their dues can create their own set of weights and run a simulation to see how the worm responds. Somebody put something in a robot and it has some responses to some inputs. The significant thing is that the process depends on the analog weights, not the computer program. This may be less than 1% of what happens in the worm, but it’s a start.
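For anyone curious what such a model amounts to in code, here is a toy Python sketch, with invented random weights rather than OpenWorm’s real ones, showing where the analog-valued weights live inside an ordinary digital program:

```python
# Toy sketch (weights invented, not OpenWorm's): a connectome as a
# weighted directed graph stepped forward in discrete time.
import numpy as np

n_neurons = 302                      # C. elegans neuron count
rng = np.random.default_rng(0)

# The wiring (who connects to whom) is fixed by the mapped connectome;
# here it is faked with a sparse random matrix of analog-valued weights.
mask = rng.random((n_neurons, n_neurons)) < 0.05
W = rng.normal(0.0, 0.5, (n_neurons, n_neurons)) * mask

state = np.zeros(n_neurons)
stimulus = np.zeros(n_neurons)
stimulus[:8] = 1.0                   # poke a few "sensory" neurons

for _ in range(100):                 # 100 discrete time steps
    state = np.tanh(W @ state + stimulus)

print(state[-10:])                   # activity of ten "motor" neurons
```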

I assume this is based on serious research. That would consist of placing the configuration in existing software and training it with assumed inputs and outputs to see if it would converge on a useful set of weights. I’ll keep looking.

You read it wrong. Openworm is an open source project, completely free AFAICT. The only references to money I could see is the opportunity to make a donation, and an unrelated page listing monthly fees for collaborative services on Github.

I have no idea how good the science is, but it does seem fascinating, as the intent of the project is to simulate the entire cellular structure of this roundworm, not just the brain. The brain is just the first part that’s been done; having only 302 neurons, its full synaptic connectome was fully mapped out, AIUI.

That’s a nonsensical statement. The computer program and its digital data is all there is. There are no analog components. To the extent that there are some analog-like neural activities in the brain, these are being simulated, bearing out exactly what I said earlier, to wit: the generality of a Turing-equivalent digital computer allows it to simulate any analog process to any arbitrary degree of precision. This is precisely why asserting any kind of inherent limitation on the theoretical information processing capability of a digital computer, including the achievement of intelligence and sentience, is a baseless and nonsensical assertion.

I’ll add as a side note that the brain is neither fully digital nor fully analog, but has characteristics of both. The digital aspect is that much of the brain’s activity is based on the count of action potential spikes in a given time interval – the spatiotemporal pulses of the neural code, and the fact that a neuron firing or not is a binary decision. The analog aspect is that the timing between individual spikes, akin to the frequency of a radio wave, can also be a critical determinant. The point, at any rate, is that any brain can be fully emulated in a digital computer once we have all the connections mapped out, and given sufficient computer capacity. We’ll have some interesting ethical challenges once it becomes possible to create a complete working human brain in a computer.
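A leaky integrate-and-fire neuron, a standard textbook simplification (my sketch, not any specific brain model), shows both aspects in a few lines of Python: the binary fire-or-don’t decision, plus the analog timing between spikes.

```python
# Sketch of the digital/analog mix: leaky integrate-and-fire neuron.
import numpy as np

dt, tau, v_th, v_reset = 0.1, 10.0, 1.0, 0.0    # ms step, ms, thresholds
v, spike_times = 0.0, []
current = 0.15                                  # constant input drive

for step in range(2000):                        # 200 ms of simulated time
    v += dt * (-v / tau + current)              # analog membrane dynamics
    if v >= v_th:                               # digital: fires or it doesn't
        spike_times.append(step * dt)
        v = v_reset

rate = len(spike_times) / (2000 * dt / 1000)    # spikes/second (rate code)
isis = np.diff(spike_times)                     # inter-spike intervals (timing code)
print(f"rate = {rate:.1f} Hz, mean ISI = {isis.mean():.2f} ms")
```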

Wolfpup,

The weights in the worm’s neural net are analog conductivity values. Probably more than just resistances. They were entered into the software as numbers.

OpenWorm is serious science at the high school level.

You are correct. A digital computer can emulate the operation of the brain.

Specifically, the neural net was modeled using the Hodgkin-Huxley equations. The point being that all aspects of the brain’s neurophysiology, including those that have analog-like behavior, are perfectly amenable to implementation on a digital computer.
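For the curious, here is roughly what that means in practice: a bare-bones Python integration of the standard textbook Hodgkin-Huxley equations (my sketch, not OpenWorm’s code). The point is that the “analog” membrane dynamics reduce to ordinary arithmetic performed step by step.

```python
# Minimal Hodgkin-Huxley integration with the standard textbook constants.
import numpy as np

C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3       # uF/cm^2, mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4             # reversal potentials, mV

# Standard gating-variable rate functions (voltages in mV).
def a_m(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def b_m(V): return 4.0 * np.exp(-(V + 65) / 18)
def a_h(V): return 0.07 * np.exp(-(V + 65) / 20)
def b_h(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def a_n(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def b_n(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, V, m, h, n = 0.01, -65.0, 0.05, 0.6, 0.32
for step in range(int(50 / dt)):             # 50 ms with 10 uA/cm^2 injected
    I_ion = gNa * m**3 * h * (V - ENa) + gK * n**4 * (V - EK) + gL * (V - EL)
    V += dt * (10.0 - I_ion) / C             # forward-Euler membrane update
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)

print(f"membrane potential after 50 ms: {V:.1f} mV")
```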

I take it, then, that we have put to rest this previous claim of yours that it wasn’t possible.

Furthermore, the other point I was making is that a neural net is not the only way to achieve intelligent behavior. To draw on my favorite analogy again, we don’t build aircraft to imitate birds, we build them in radically different ways to suit specific purposes, and none of them look or function like birds. Similarly we can construct algorithms and heuristics that run on digital computers and create desired behaviors and from which intelligence emerges but whose underlying processes are very different from the human brain.

Why did they need Hodgkin-Huxley to model a net that already exists? The name-dropping in post #152 doesn’t suggest any comprehension of the process.

The statement in #57 “There is no equivalent in current computer architectures.” is true. Numerical computers do not approach what was described in #57.

I have never questioned that a digital system can emulate the entire human brain if:

The human brain is fully understood in its smallest detail of configuration and function; the funding and skills exist to construct such a system; and there is sufficient time left, before the sun swells to consume the earth, for the system to complete its first pass through the program.

Of course, that’s emulation, not thinking.

BTW: digital computers are constructed of analog components.

Eh, what? The incomprehension here is not mine. :rolleyes: The thing that already existed was the mapping of the connectome on a piece of paper! The Hodgkin-Huxley equations provide the algorithms for creating the working simulator by describing how to properly model the neuron interactions:
We have then defined the connections between the NeuroML neurons using the c. elegans connectome. Because NeuroML has a well-defined mapping into a system of Hodgkin-Huxley equations, it is currently possible to import the “spatial connectome” into the NEURON simulator (Hines & Carnevale 1997) to perform in silico experiments.
http://docs.openworm.org/en/latest/Projects/datarep/
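For anyone who wants to see the shape of such an in silico experiment, here is a minimal single-neuron sketch using NEURON’s Python interface, assuming it is installed; the real project imports hundreds of NeuroML-defined neurons rather than building one by hand.

```python
# Minimal NEURON example: one soma with built-in Hodgkin-Huxley channels.
from neuron import h

h.load_file("stdrun.hoc")                     # standard run library

soma = h.Section(name="soma")
soma.L = soma.diam = 20                       # microns
soma.insert("hh")                             # Hodgkin-Huxley mechanism

stim = h.IClamp(soma(0.5))                    # current injection mid-section
stim.delay, stim.dur, stim.amp = 5, 40, 0.3  # ms, ms, nA

v = h.Vector().record(soma(0.5)._ref_v)       # record membrane voltage
t = h.Vector().record(h._ref_t)

h.finitialize(-65)                            # start at resting potential
h.continuerun(50)                             # simulate 50 ms
print(f"peak voltage: {max(v):.1f} mV")
```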

Just two points on this nonsense:

  1. The difference between creating a working model of a roundworm brain and a working model of a human brain is only a difference of degree. That difference may be substantial but it’s effectively the same process. At the current rate of technological evolution, many believe we will achieve it well before the end of this century.

  2. I’m astounded by that last sentence. If a simulated brain behaves exactly like a “real” brain, then functionally it IS a real brain. If a real brain thinks, then the simulated brain thinks. Any other interpretation of reality is the kind of Alice-in-Wonderland illogic that makes my head spin.

So what? Completely irrelevant, just as it’s theoretically irrelevant whether a digital computer is constructed of relays, transistors, ICs, or some fanciful organic matter. The digital operations arise from binary thresholds. I have a fully functional virtual digital computer that is emulated on my desktop computer. There are no analog components in the emulator. The result is exactly the same. The recurring theme I notice in these discussions is that you fail to grasp functional equivalencies and instead wallow in implementation differences.
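To illustrate: here’s a toy Python emulation, my own and purely for argument’s sake, in which a complete “digital” operation (binary addition) is built from nothing but a software NAND threshold. No analog components anywhere, and the result is exactly the same.

```python
# Toy emulation (mine, for argument's sake): digital behavior from
# pure binary thresholds, with no analog components in the emulator.

def nand(a, b):
    """The single primitive; every digital circuit can be built from it."""
    return 0 if (a and b) else 1

def xor(a, b):
    m = nand(a, b)                  # classic 4-NAND exclusive-or
    return nand(nand(a, m), nand(b, m))

def full_adder(a, b, cin):
    """One-bit adder built only from NANDs (via xor above)."""
    s = xor(xor(a, b), cin)
    carry = nand(nand(a, b), nand(xor(a, b), cin))
    return s, carry

def add(x, y, bits=8):
    """Ripple-carry addition of two integers, one emulated bit at a time."""
    carry, out = 0, 0
    for i in range(bits):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        out |= s << i
    return out

print(add(23, 19))                  # 42, computed entirely by emulated gates
```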

Then my printer thinks and a bee swarm thinks.

Problem solved.

Well, that sucks. Without a market for these devices, the incentive to research them goes down.

From what I can tell, DeepMind and Watson both have the ability to do fairly broad things, but I don’t know if there is much of an economic incentive for these devices yet. A lot of the stories about these devices are just PR stories.

I thought DeepMind was saving millions of dollars on the cost of cooling server farms though.

Yes, it sucks that pioneering efforts like that take a while to get going. “Get a horse” was the common refrain when the first cars came out and got stalled on the side of the road, but that doesn’t happen so much any more, and there aren’t a lot of horses on the roads, either. I think IBM is doing poorly right now in financial terms, but Watson and DeepQA were fantastic innovations, and so was DeepMind, which at least is backed by the resources of Google (Alphabet).

This article implies Watson isn’t doing well economically because the market for AI is so saturated and Watson is too expensive.

If so, that isn’t too worrying. What is important is that there is a multi-billion-dollar market for AI, because without that the incentive to invest in R&D goes away. Which company makes the AI isn’t important, just so long as it gets made. And without financial incentives, it won’t get made.

Sometimes fantastic innovations don’t make any money, definitely not enough to pay for themselves. That’s the way industrial research goes. I used to work at Bell Labs; I know all about this.
And sometimes smaller, spryer companies take the innovations and run with them. Innovation and money don’t go hand in hand.

Yes, and emotions entirely different from any we as humans have ever considered, the kind of emotion we don’t even know exists. You could also build an AI/computer that had these new emotions but was devoid of other emotions: no anger, depression, pride, or whatever the programmer decides.

Couldn’t you programme an AI never to get angry?