Does intelligence stop mattering for problem solving beyond a certain point

How would that work? I have frequently used neural nets for object detection (as opposed to image recognition), but the network doesn’t “just know” how many objects there are, it counts. For much larger numbers of objects, such as massive crowds, I can think of various algorithms to help with estimation, but that only gives an estimate, not an exact answer. I’m trying to figure out what it would take to “just know” for a large number of objects and can’t think of an approach that would be accurate without counting.
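To be concrete about what I mean by “it counts”, here is a rough sketch (the `detections` list, labels and threshold are all made up for illustration; they stand in for whatever a real detection model would return):

```python
# Illustrative only: `detections` stands in for whatever an object-detection
# model actually returns -- (label, confidence, bounding_box) tuples here.
def count_objects(detections, label="orange", min_conf=0.5):
    # The network never "just knows" the total; something has to tally
    # the individual detections it emitted.
    return sum(1 for lbl, conf, _box in detections
               if lbl == label and conf >= min_conf)

detections = [
    ("orange", 0.92, (10, 10, 40, 40)),
    ("orange", 0.87, (60, 12, 95, 45)),
    ("apple",  0.66, (120, 8, 150, 42)),
]
print(count_objects(detections))  # 2
```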

I think it’s a good example. If you look at one orange on the table, you don’t have to count it to know there’s one orange; it just has an instantly-knowable unity to it. Likewise with two oranges side by side: you don’t count them one-two, you just see a pair. Same with a trio (regardless of whether they are in a row, a stack or a triangle), and maybe also a quartet or quintet. There is of course some function equivalent to counting probably happening inside the brain, but it’s not a conscious function; it’s just a feature of our own level of intelligence, and our intelligence is limited to instantly recognising fairly small numbers without explicitly counting. Perhaps a higher intelligence would just have more of those instantly recognised groupings in its capacity, or where we have a sense for ‘ten identical items’, it would maybe have inherent senses for concepts like ‘ten items that are all different from one another in ten specific but related ways’. It would be wasteful for our brains to do that, but perhaps not for an entity with greater capacity.

I don’t think sight-recognising the count of a group of objects, regardless of arrangement, is conceptually much different from, say, recognising the letter A regardless of typeface or handwriting.

If you ask an AI to reduce recidivism in particular and crime in general, why wouldn’t it conclude that exterminating mankind would be the right solution?

There’s an example of something like that which I gave earlier in another thread. It involved this puzzle that I gave to ChatGPT:

A fisherman has 5 fish (namely A, B, C, D, E) each having a different weight. A weighs twice as much as B. B weighs four and a half times as much as C. C weighs half as much as D. D weighs half as much as E. E weighs less than A but more than C. Which of them is the lightest?

GPT was intelligent enough to solve the problem, but not intelligent enough to see a very easy way to the answer. Instead, it established a set of equations to solve the problem. But the thing is that the problem is easily solved by recognizing that each of the series of five comparative statements immediately rules out one of the fish. The first one immediately rules out option A. The second one rules out B. The third one rules out D. The fourth one rules out E. Bingo! One can immediately see that the lightest fish must be C. The fifth statement is redundant and is probably just there for obfuscation.
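To put that shortcut in concrete terms, here is a minimal sketch of the eliminate-all-but-one reading (Python is just my choice; nothing in the puzzle requires it). Each comparative statement names one fish as heavier than another, so that fish is struck off the candidate list:

```python
# Each comparative statement names one fish as heavier than another,
# which immediately rules the heavier fish out as the lightest.
candidates = {"A", "B", "C", "D", "E"}
heavier_pairs = [
    ("A", "B"),  # A weighs twice as much as B      -> A is out
    ("B", "C"),  # B weighs 4.5 times as much as C  -> B is out
    ("D", "C"),  # C weighs half as much as D       -> D is out
    ("E", "D"),  # D weighs half as much as E       -> E is out
]
for heavier, _lighter in heavier_pairs:
    candidates.discard(heavier)

print(candidates)  # {'C'} -- the fifth statement was never needed
```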

If this were a question on an IQ test, a human who took the same approach as GPT would be hampered by the time it took to set up and solve the equations, whereas a smarter person would perceive the logical shortcut, leaving them more time to complete the test and/or spend on the more difficult problems. It’s essentially the difference between a rote approach to solving problems and a kind of intellectual creativity.

Ironically, a person would have to be fairly smart (or at least, well-educated) to even be able to attempt the method based on a system of equations, whereas not all that much intelligence is actually needed for the eliminate-all-but-one method. The problem isn’t that the AI is less intelligent than humans; it’s that it’s intelligent in a different way.

Exactly! So are we confusing the issue by using the same term (intelligent) for both cases?

Very true. And as I keep saying when AI is criticized for making trivial mistakes that even a child wouldn’t make, it’s also stupid in a different way. It has to be judged on overall performance, and not in the false belief that trivial mistakes are some kind of revelation about lack of “real” intelligence.

No, I think the term is very apt inasmuch as it implies certain cognitive skills, such as the ability to solve problems. What other term would you use for a machine that can rapidly solve arbitrary logical problems and answer questions that would be challenging even for most humans?

Yeah, part of the problem with any of this is how do you even measure intelligence - any test is going to be highly contextual and whilst you can measure things like the amount of time it takes a person to perform a mental task, is that really measuring the magnitude of intelligence, or just the clock speed? Is ‘quick intelligence’ actually more intelligence, in all or even some cases?

Certainly, in some cases. There are a lot of problems where there’s a quick method to solve it and a slow method, but the quick method is more difficult to find or implement. A more intelligent test-taker can thus finish a test more quickly by being able to use the quick methods more often.

But once you have a computer that’s capable of solving a problem, it’s fairly easy to solve it faster using the same method: Just run the software on a faster computer.

To be clear, what I was emphasizing in this simple puzzle example is that GPT is prone to falling back on setting up equations to solve logical problems even when it’s not necessary. In this case the point is that it was unable to discern the logical structure of the problem and had to rely on math to sort it out.
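For contrast, the rote route looks roughly like this (a sketch only; I’m guessing at the details of what GPT actually did, and fixing C’s weight to 1 since the statements only give ratios):

```python
# The "set up equations" route: assign C an arbitrary weight of 1
# (the puzzle only constrains ratios) and derive the rest.
c = 1.0
b = 4.5 * c   # B weighs four and a half times as much as C
a = 2.0 * b   # A weighs twice as much as B
d = 2.0 * c   # C weighs half as much as D
e = 2.0 * d   # D weighs half as much as E

weights = {"A": a, "B": b, "C": c, "D": d, "E": e}
assert c < e < a                      # the redundant fifth statement checks out
print(min(weights, key=weights.get))  # C
```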

I believe ‘machine intelligence’ (MI) would suffice. That eliminates the human thought implication.

I don’t have an answer - it’s a question I saw presented somewhere else, and I have no personal skills with neural nets except maybe the one between my ears… The notion was that if various forms of perception could be developed to be more, shall we say, holistic than procedural, then there might be some emergent behavior. Might be standard AI handwaving, but a lot of our own intelligence is handwaving…

Does it really matter what we call it? Labels are not definitions, and most people seeing the term ‘machine intelligence’ are just going to mentally synonymise it with ‘AI’.

Yes it does. Current machine intelligence far surpasses human intelligence in its domain. Discussing human and machine intelligences as though they are equivalent is futile, especially without definitions of intelligence.

It’s the reason no AI system can pass the Turing test.

No, the reason no AI system can pass the Turing test is that, whenever one does, we redefine the terms of the test until it can’t any more.

Incidentally, the example of “solving crime” is a great example of what I was saying about how other intelligent beings always present a challenge for intelligences. If criminals were all stupid, then crime would be an easy problem. But there’s an incentive for people to try to get more than what they deserve, and so we’re constantly trying to outwit the people who do so.

Not so. The Turing test is well defined and has not changed.

I do agree with some that the Turing test is no longer appropriate. We should accept that machine intelligence is not human thinking and spend our efforts defining MI abilities and limits.

Subitizing - Wikipedia.

Great post.

The quoted snippet is why I am skeptical we’ll ever see true AGI, in the way it’s being foretold by some observers/commentators/“experts”, anytime soon. AI is only as good as its inputs, and even as impressive as ‘simple’ AI (e.g., OpenAI) is now, it’s still easily confused and spits out crap, even with a handful of fairly straightforward inputs.

This is a thread that is literally about comparing the intelligence of humans vs machines (machines, moreover, that are specifically designed to try to imitate human-like intelligence). I’m not saying they are the same thing under the surface, and the discussion of how they differ is a perfectly valid one, but I am not sure it’s the one we’re having here.

Which is accurate up to groups of about 4. Beyond that, we’re back to counting.