Could a computer fitted with AI software take an IQ test?

Humans aren’t really intelligent, either; we just give the illusion of it. To prove it, ask any random human for the integral of sin^2(x) e^(-x^2)/ln(x+1), from 1 to 2, to four decimal places. Our capability is extremely “brittle”, and it breaks down as soon as it encounters even a simple problem like that one.
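For what it’s worth, a machine dispatches that “brittle” question in a few lines. A minimal sketch in Python using composite Simpson’s rule (standard library only; the function names and step count here are my own choices, not from any particular product):

```python
import math

def f(x):
    # The integrand from the challenge: sin^2(x) * e^(-x^2) / ln(x + 1)
    return math.sin(x) ** 2 * math.exp(-x ** 2) / math.log(x + 1)

def simpson(g, a, b, n=1000):
    """Composite Simpson's rule on [a, b]; n must be even."""
    h = (b - a) / n
    total = g(a) + g(b)
    for i in range(1, n):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

# Integrate from 1 to 2 and report four decimal places,
# as the challenge demands (comes out to roughly 0.14).
print(f"{simpson(f, 1.0, 2.0):.4f}")
```

Any adaptive quadrature routine (e.g. SciPy’s `integrate.quad`) would do the same job with an error estimate thrown in; the point is only that the question is trivial for the machine and hopeless for an unaided human.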

Or, in a sense, all of our processing is spent on being able to pick out the parts of that question, through any possible medium that you could ever invent - so long as it’s readable by our varied senses. Answering the question is a tertiary concern of our machinery.

Yet humans have designed machines that can quickly give the answers to questions like the one above, while I don’t think machines are yet capable of designing anything that can answer the questions they struggle with. We still win, if only because anything a computer ever does will be because we designed it to do it.

By that argument, one could argue that single-celled archaea are even more intelligent than humans, because they were able to develop things that could think better than them, too.

That’s a puzzler. I’d vote ii.

I agree with both posters. Watson, as impressive as it was on Jeopardy, was constructed to handle a specific, limited style of questions & answers. Although I think AI will get there before long, it isn’t sophisticated enough to handle natural language inquiries outside of the designed framework just yet.

I find it interesting and encouraging that Google comes pretty close.

FWIW, I’d go with “unruffled,” because the others can all be permanent states. “Unruffled” implies a reaction to some stimulus, and begins to lose its meaning shortly after the stimulus is removed.

I’m sure you are bright enough to recognise the flaw in your argument, but I’ll point it out anyway: your single-celled archaea didn’t consciously design anything.

Not a good rebuttal, considering that the debate is about what counts as “consciousness” to begin with. I can just as well rebut that humans didn’t consciously solve that integral, either. If the capability to develop a system to solve a problem counts as the ability to solve that problem, then archaea count as solving all of the various human tasks. If such capability does not count, then humans don’t count as solving ugly integrals.

Watson is actually a very general AI. The Jeopardy! thing was just an extended commercial for an IBM product. IBM touts Watson widely as a general AI problem solver, selling access to it to a variety of customers.

“Solving Natural Language” was hyped by AI researchers as something that was solvable in “5-10” years going back to the 1950s. And it was always “5-10” years away.

I served as the non-AI prof on some NL grad students’ thesis committees. Their thesis proposals were full of promises. The actual end product was virtually no advance at all, but suitably hyped, of course.

But in the last few years real progress has been made at an astonishing rate. I’ve gone from skeptical to amazed.

The latest AI beating humans story, courtesy of Slashdot, is that an AI can do really well at Super Smash Bros. Melee. OMG, they’re taking our jobs! :wink:

Watson is a narrow, or “weak”, AI. It cannot perform general intelligent action, which is how strong AI is often described. The head of the Watson project was Dr. David Ferrucci, whose background was in expert systems or “narrow AI”, and he brought this orientation and focus to the Watson project. This is all described in the popular-level book Final Jeopardy: The Story of Watson, the Computer That Will Transform Our World, by Stephen Baker.

Watson was not a generalized Question Answering (QA) product which was tweaked to play Jeopardy. It was a tightly focused effort to win that one game scenario, with hopes it could be later broadened for other purposes. It was, but it’s still essentially a customizable QA product: IBM Watson

The reason Watson cannot answer generalized questions like the ones I posted previously is that it’s essentially a narrow QA system which can be tailored to other narrow fields. But this does not make Watson a generalized AI problem solver.

Within Watson’s narrow zone of expertise, it can seemingly produce eerie, almost human-like responses. However, this is a narrow-AI “sleight of hand”. Go outside that pre-programmed zone, and Watson’s illusion of intelligence falls apart.

As described in the book Final Jeopardy, Watson is less a breakthrough in cognitive science than an engineering triumph. It is an improvement over traditional QA systems but it is closer to Google than HAL-9000. Watson is a very long way from passing either a true Turing test or IQ test (the OP question).

A full Wechsler Adult IQ test has several hundred questions and takes 60-90 minutes for a human. These are not “easy” questions like Jeopardy, which were amenable to purely linguistic processing and lookups. Anyone can try a small subset of the Wechsler Adult IQ questions on this site. Ask yourself how likely it is Watson or any other computer in the near future could do well on such a test:

http://wechslertest.com/

Except I didn’t say they did solve it, I merely pointed out that they’ve designed something that can.

Are you really trying to tell me that single-celled archaea evolving into humans is equivalent to a human designing a computer? Really? Have a good hard think about this before answering.

It is equivalent in some ways. If you’re going to claim that the ways in which it’s equivalent aren’t significant, then the onus is on you to clarify the differences, and why those differences are important.

Those questions are actually not found on the WAIS-IV but are more the kind you would see on verbal portions of the SAT or GRE. They do measure verbal and logical reasoning abilities.

The Wechsler scales do have verbal components, but they do not comprise the majority of subtests. There are several other, visual/hands-on type tasks that an AI system cannot access (currently).

Remember that the answers to these things must have a number of properties, one of which is clarity of outcome. One’s (possibly) idiosyncratic sense of the penumbra of meaning surrounding similar words is not going to get there.

Another property is that the solution must be “strong” - mere bagatelles like “only ‘serene’ ends in ‘e’” or “only ‘calm’ has four letters” are arbitrary, not robust in the sense I am talking about. They might literally be true, but nothing of consequence flows from those types of answer, and the test-taker is supposed to recognise this. This is not about discovering “gotcha” solutions.

For mine, it is unruffled. All the words have sufficiently similar meaning that meaning is unlikely to be the issue. The difference is structural - unruffled is created by a negative reference to something else - it is defined by opposition, so to speak. The others are structurally affirmative words for passive states of being.

My 2c.

Why? You’re the one claiming the equivalence. I was comparing a human’s ability to design a tool that can answer questions they can’t easily answer themselves, to a computer’s inability to do the same. If you want to show that some computers can do that, then I’m all ears; that would be interesting info. The ability of a single-celled organism to evolve into a human is not very interesting in the context of this discussion, though, thanks all the same.

An excellent link. Really supports my views. Thanks. (Did you actually read that page, btw?)

Yes, I read the page, and also every word of the book “Final Jeopardy”, which describes the entire Watson program. That page in no way supports the statement that “Watson is a very general AI”. Watson is a narrow QA expert system. As other AI researchers have explained, Watson is not capable of answering general questions over wide knowledge fields or passing an adult IQ test, much less a true Turing test. Watson does not even have (and cannot simulate) the cognitive ability of a 4-yr-old child. This is obvious from examining the previously-posted questions or a Wechsler Adult IQ test.

So while technically the answer to the OP question is YES – an AI computer could TAKE an IQ test – it would not come close to “passing” the IQ test. Those IQ questions involve a complex mix of visual/spatial reasoning and verbal-linguistic deduction.

Watson was able to make a brute force “end run” around those hurdles by virtue of the narrow, highly stylized Jeopardy format. It was very impressive, and I recommend anyone who didn’t see it to watch it on YouTube. However, Watson is not remotely capable of “passing” an adult IQ test or a true Turing test.