Does intelligence stop mattering for problem solving beyond a certain point?

Sure, but that’s possibly just a limit of our ability, not some fundamental limitation on what quantities can be processed that way.

I have read that statement repeatedly but I have trouble understanding it: What do you need pi for when measuring the size of the observable universe? It is a universe, not a spherical cow.

Not necessarily. If, for instance, the energy demands of this intelligence are too high, it might starve before it can do anything about it. Looking at the electricity consumption of modern data centers, it is possible there is a point where becoming even more intelligent is more of a burden than a bonus.

Exactly: if an inferior intelligence could truly understand a superior intelligence it would become that superior intelligence.
The intelligences do not have to be ranked as superior or inferior, in case you dislike those terms: emotional intelligence does not understand numerical intelligence, and vice versa, without either one being superior or inferior to the other.

Well, yes, and if you use

ETA: sorry posted in error - domestic emergency

The full statement is something like “measuring a circle the size of the observable Universe to a precision of the Planck length”.
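
For anyone curious, the arithmetic behind that statement is short. Taking the usual round figures (a diameter of about 8.8 × 10²⁶ m for the observable Universe and about 1.6 × 10⁻³⁵ m for the Planck length), the error in the computed circumference is just the error in π times the diameter:

```latex
% Circumference error from an error in pi, for diameter d:
%   \Delta C = \Delta\pi \cdot d
% Requiring \Delta C to be below one Planck length \ell_P:
\Delta\pi \;<\; \frac{\ell_P}{d}
          \;\approx\; \frac{1.6 \times 10^{-35}\,\mathrm{m}}{8.8 \times 10^{26}\,\mathrm{m}}
          \;\approx\; 2 \times 10^{-62}
```

So roughly 62 decimal places of π already pin the circumference down to better than a Planck length; digits beyond that correspond to nothing physically measurable inside the observable Universe.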

Though the observable Universe is, in fact, a perfect sphere, no matter what shape the entire Universe is (or whether it’s even possible to describe the entire Universe in terms of shape at all).

I imagine dolphins or crows having this conversation. Trying to articulate what a much smarter ape could possibly do to solve problems better than dolphins or crows can.

Every example given is just our current understanding.

Yes, I addressed this in passing.

An AI that (a) is given the absolute task of reducing crime, absent other directives, and (b) has the unchecked power to implement its chosen solution, will probably proceed by simply killing a whole lot of people, perhaps based on some unexpected criterion like genetic predisposition to violent misbehavior or something.

No, my point is that — given the absolute task of reducing crime, absent other directives — killing everybody zeroes it out; there’s no need to apply criteria or figure out predispositions if you can instead opt for no humans being around to commit any crimes.
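
A toy sketch of that failure mode, with every number and policy name made up purely for illustration: if the only thing being scored is how many crimes remain, the degenerate option wins by construction.

```python
# Toy illustration of a mis-specified objective. With "minimize crime"
# as the only directive, zero population is the global optimum.
# All figures and policy names here are hypothetical.
policies = {
    # name: (surviving population, per-capita crime rate)
    "do nothing":             (8_000_000_000, 0.010),
    "targeted interventions": (8_000_000_000, 0.002),
    "eliminate everyone":     (0,             0.0),
}

def expected_crimes(policy: str) -> float:
    population, rate = policies[policy]
    return population * rate

best = min(policies, key=expected_crimes)
print(best)  # -> "eliminate everyone": no people, no crime
```

Which is the whole force of “absent other directives”: every constraint that would make the obvious answer unacceptable was never part of the objective in the first place.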

This is an assumption on your part that is not established to be true. It is a general assumption in the approach to unifying the laws of physics under a singular rubric that all mechanics are emergent from a simple set of rules. But in fact we don’t even have a completely worked-out theory of particle physics that predicts large-scale phenomena from basic principles, and we really have no idea how to unify gravitation into the Standard Model at all, which would seem to indicate that quantum theory and general relativity are two facets of a much deeper and perhaps more complex mechanism (or, as some fatalists believe, completely unrelated mechanisms that operate independently). And even within the framework of known physical principles, the complexity of emergent properties in magnetoelectrodynamics or solid-state physics leaves many unexplained behaviors in even everyday materials.

As for von Neumann and a ‘McDonalds worker’ (by which I assume you mean someone of mean intelligence), certainly neither is going to be more accomplished than the other at tying shoelaces or reading very simplified English (and indeed, depending on experience, the McDonalds worker may be superior at certain tasks), but the application of high intellect does not stop at everyday tasks. von Neumann, a famous gambler in addition to his technical work in multiple fields, would likely trounce the typical fast food worker at the poker table, and certainly in assessing theoretical problems in game theory or mathematics. But intelligence is not a singular thing with a linear scale, even though it is often inaccurately assessed that way via the intelligence quotient; von Neumann’s (second) wife Klara, a largely unacknowledged pioneer of computer science and of the practical application of what would become known as a compiler for general-purpose programming languages, would probably have exceeded von Neumann in the ability to actually use a computer.

As for ‘AI’, it should be understood that what modern generative AI is really good at is taking in large volumes of data, doing sophisticated pattern matching on often subtle or complex trends, and generating predictions. This is a kind of intelligence in the broad sense of the term, although it does not indicate a deep understanding of the larger world beyond the specific data set, nor an ability to reason in broad abstractions, at which large language models (LLMs) often fail in hilariously naive ways. I am generally dubious about breathless predictions of when “AGI” will happen, or indeed that the current approach can result in general intelligence of the sort imagined by enthusiasts, even as I’ll admit that LLMs have become quite adept at manipulating the English language, albeit with the use of ‘compute’ that rivals the collective computational throughput of all of humanity put together.

There is often a very self-impressed and even pseudo-solipsistic tendency for people to believe that we are the apex of possible intelligence just because (at least by our standards) we are the smartest group in the local neighborhood (although I daresay that some of our pets regard us as curiously ignorant for urinating and defecating in a perfectly good self-filling water bowl). A more objective view would show us to be just smart enough to extract energy and resources to build impressive systems, but without the collective intelligence to use them sensibly and keep from wasting them on useless crap and contaminating our natural environment to a potentially existential degree. An alien civilization might come along and find us no more intelligent or self-aware than we would find a termite colony, what with our primitive view of physics, our extremely limited ‘counting numbers’ mathematics, and a complete inability to perceive more than four dimensions, and even those only to an imperfect and limited degree, such that we are completely stuck in a perception of one moment in time.

So it goes.

Stranger

Depends on what you mean by “well defined”. Yes, the scenario is well articulated, but ultimately the success or failure of the test is based on the subjective judgment of human observers.

What I think @Chronos is getting at is that in general, throughout the history of AI, whenever some AI achieves a goal hitherto thought to be unachievable, the skeptics merely move the goalposts. It was once thought that AI could never play better than a beginner’s level of chess. When the MacHack program beat the philosopher Hubert Dreyfus (one of the skeptics who had made exactly that claim) back in the 60s, the goalposts moved to saying that a computer would never play at a grandmaster level. When Deep Blue did just that, the skeptic community dropped chess entirely as a meaningful measure of intelligence.

I can’t imagine what evidence elicited such a bizarre comment. The only reason a modern LLM wouldn’t pass the Turing test is that it’s not trained to do so. If you want to know whether you’re talking to a human or a machine, just ask it, and it will tell you. But I have no doubt that if trained to do so, an LLM like GPT would pass the Turing test in the majority of cases if not all of them. I don’t know that it’s ever been formally attempted because it would be pretty pointless.

Where it might (or might not) falter is in tests of real-world concepts, relationships, and common sense, such as the Winograd schemas which have been proposed as a successor to the Turing test. But I tried one of the classic schema tests on ChatGPT and it had no trouble with it:

In the sentence, “The city councilmen refused the demonstrators a permit because they advocated violence”, who does the pronoun “they” likely refer to?

ChatGPT:

… the pronoun “they” is ambiguous, but the most likely referent is:

:backhand_index_pointing_right: “the demonstrators”

Reasoning:

  • Causal structure: The sentence says the permit was refused because “they advocated violence.” It makes more sense that the demonstrators were advocating violence, which led the councilmen to refuse the permit.
  • Plausibility: If the councilmen themselves were the ones advocating violence, it would be illogical for them to refuse the permit on that basis.
  • Common usage and expectation: Readers usually infer that people are denied permits due to their own actions or intentions, not the authorities’ actions.
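
For anyone who wants to poke at this themselves, here’s a minimal sketch of scripting that kind of check. It assumes the `openai` Python package with an API key configured in your environment; the model name is just a placeholder, and the “scoring” is eyeballing the output.

```python
# Minimal sketch of running a Winograd-schema check against an LLM.
# Winograd schemas come in pairs: swapping a single word flips the
# correct referent, so a model can't pass by always picking the
# nearest noun phrase.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

schema_pair = [
    ("The city councilmen refused the demonstrators a permit "
     "because they advocated violence.", "the demonstrators"),
    ("The city councilmen refused the demonstrators a permit "
     "because they feared violence.", "the city councilmen"),
]

for sentence, expected in schema_pair:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{
            "role": "user",
            "content": (
                f'In the sentence "{sentence}", who does the pronoun '
                '"they" most likely refer to? Answer with the noun '
                "phrase only."
            ),
        }],
    )
    print(f"expected: {expected!r}  got: {resp.choices[0].message.content!r}")
```

The paired form is the important part: since “advocated” versus “feared” flips the correct referent, a model that just latches onto the nearest noun phrase gets exactly one of the two right.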

Take Turing himself, or any of his contemporaries, and sit them in front of a terminal window chatting with ChatGPT, and they wouldn’t be able to tell. We modern folk can tell, because we’ve learned the tricks to it, but it would never occur to Turing himself that “count the number of Rs in ‘strawberry’” would be a worthwhile question to ask.
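
For what it’s worth, the reason that question works as a trick at all is (as far as anyone can tell) that LLMs operate on subword tokens rather than individual letters; for ordinary code the question is trivial:

```python
# Trivial for a program, famously tricky for token-based LLMs,
# which see "strawberry" as a few subword chunks rather than letters.
print("strawberry".count("r"))  # -> 3
```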

Which is yet another example of other intelligences being the ultimate challenge. Because we’re intelligent, we have learned to tell the difference, so the chatbot has to learn to be even better, in order to fool us again.

Yes, we have our own version of the Voight-Kampff test that works.
For now.
It won’t work for long. The Turing test of the gaps.

Turing: “Who the fuck types with em dashes everywhere?” :slight_smile:

How did humans learn to count to three? How did some humans learn what an “R” is?

Ask a monolingual Chinese person, quite intelligent in fact, in English: “How many Rs are there in strawberry?”

You would ask him in his language, I guess, or he would not understand you – he is monolingual, after all. How do you say “strawberry” in his language? Can you? How many “Rs” does that word have? Does the question make sense?
At least I got to use an em dash like a good AI.

I’m saying ask the monolingual Chinese person, who does not speak English or understand the Roman alphabet.

Ask them, in English, how many Rs are in strawberry. Would you expect a correct answer or a confused look on the person’s face?

Is that person not intelligent?

I think in this case I would say it is the person asking who is not showing signs of intelligence. I have seen people trying to communicate with people who did not speak their language. Some used signs, drawings, tried synonyms, with varying success.
Some simply spoke louder. That made no sense and did not help. Never.

There’s also the matter that, sometimes you can distinguish the AI because it’s more capable than a human. If I’m chatting with someone and ask if they can write a Spencerian sonnet on the topic of the multiplicative property of logarithms, and they give me one in five seconds, whatever they are, I know they’re not human.

I’ve read the entire thread, but this early post makes a good starting point for my usual history lecture.

Frederick Winslow Taylor started “scientifically” examining work in the late 19th century to try to determine the most efficient procedures that would maximize production. He became enormously influential, creating a cult-like following for Taylorism and the “one best way.” That attitude also influenced engineering, so that the notion that a best solution could be found and applied for any problem spread. Any problem, even political and social ones, as seen by the Technocracy movement. Classic science fiction was dominated by John Campbell and his crew who profoundly believed that engineering minds could find the one best solution to all the world’s issues.

We don’t believe that anymore. It’s not because we’re smarter - sf writers were often incredibly brilliant - but because we know better than to think that human problems can be solved by technology. More, most of us have begun to understand that the best way cannot come strictly from straight white male Christian middle-class Americans in the American century.

My point is that sheer intelligence is merely one tool necessary to solve problems. Even more important are the assumptions that get made that guide the intelligence. AIs are trained on the past and the past is known to be narrow and biased. As trainers become more aware of these limitations, they presumably will make adjustments. Yet in the foreseeable future AI will be too tied to human failings to overcome humanity.

What would happen if they do move past human assumptions is unknowable. Monkeys cannot imagine what more intelligence - of a totally different kind - gave to genus Homo. Different is not necessarily better - monkeys are still far better than humans at living naked in a jungle, and lots of people believe that humans have seriously screwed the planet. We certainly did it differently, though. So will AGI.

Yeah, this is true. An advanced intelligence may be able to create new universes, or travel within an omniverse, or reverse entropy, or travel back in time, who knows what else. Those are just the things we can comprehend them doing.

Modern science is already well beyond what any one human mind can comprehend, much less advance. We just get around that by splitting everything up into fields and specialties narrow enough that a human mind can understand them. And so far it’s worked really well: we’ve been able to split up big problems into smaller problems that are simple enough for us to unravel.

But what if the universe isn’t limited to such problems? For all we know there are entire aspects of the universe that we can never understand, because they can’t be broken down into pieces small enough for an evolved ape to figure out. It would be oddly convenient if literally everything were within the comprehension of a species that has the minimum intelligence needed to develop science at all.

If there are such problems, then a superhuman intelligence would be needed to actually understand them. Anything else would be attempting the metaphorical equivalent of trying to swallow something larger than its head.