Does human intelligence limit mankind's ultimate growth?

I can’t help but ask: can the humble lion-trumping brain that produced spears and skyscrapers and spaceships produce an improved brain? Defining “an improved brain” as one that can then produce an improved improved brain? Which, in turn…

Again, is that intellect at least sufficient to produce a better intellect?

A couple of points.

“We could do X, Y and Z only if” does not invalidate my question. “If” represents a significant engineering, economic or social problem we haven’t figured out how to solve.

We could create a Utopia IF humans weren’t such dumbasses. But the problem is that we are. :wink:
Being able to augment human intelligence with genetic engineering or cybernetics isn’t necessarily a solution either. Odds are, like most technology, it won’t be accessible to everyone. And then you run into the same historic problems where those with access to superior resources, education or talents develop them for their own benefit while millions or billions of people sort of fall by the wayside and resent it.
Also remember that for all the talk of Big Data, predictive analytics, decision support systems, and business intelligence, ultimately all that information is being distilled down to a problem or question simple enough for a (comparatively) dumb human to solve or make decisions from, using algorithms developed by other (comparatively) dumb humans.
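Just to make that concrete, here’s a toy sketch of the pipeline (the data, field name, and threshold are all made up for illustration):

```python
# Toy illustration: a million raw records get boiled down, by rules a
# human wrote, to one number and one simple decision a human can act on.
# The field name, threshold, and data are entirely hypothetical.
import random

raw_events = [{"customer_id": i, "churn_risk": random.random()} for i in range(1_000_000)]

# The "analytics": a human-designed aggregation over the raw data...
avg_risk = sum(e["churn_risk"] for e in raw_events) / len(raw_events)

# ...reduced to a question simple enough for a (comparatively) dumb human to answer.
decision = "launch retention campaign" if avg_risk > 0.5 else "do nothing"
print(f"average churn risk: {avg_risk:.2f} -> {decision}")
```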

To me, this question boils down to, can mankind create an intelligence smarter than us, even slightly? If so, then that intelligence may be able to create an intelligence smarter than it, and so on, leaving mankind’s ultimate growth bound only by laws-of-physics and resource-limits types of constraints. That is, we may not be limited by our intelligence, but there still must be other limits.

My opinion is that we will one day create thinking machines, so I don’t think our inherent intelligence will be the limiting factor, just like our inherent strength or speed hasn’t been a limiting factor. And, we’re already using machines to augment and improve our thinking, and are able to design and produce things we would not be able to without machine help.

Not necessarily.

Besides, we might totally lack abilities that would be necessary to “understand the universe”. For instance, a common benchmark in animal intelligence is self-awareness, which some animals possess and others don’t. I’ve seen it argued that all animals (including chimps) lack a sense of past and future.

We have both these abilities, but we might lack the wurchling ability that would be required for a full understanding of the universe. Not even having a concept of it would prevent us from creating an improved brain or computer possessing that ability. “We aren’t intelligent enough” would be a matter of degree and could possibly be solved. “We completely lack X” would be a matter of nature and probably couldn’t be solved at all.

We’re already straining our limits. Significant parts of modern science can’t be “grasped” by our brain (say, the existence of more than 3 dimensions) or flatly contradict our logic (say, a single photon going through two slits at the same time). There’s no reason to assume that our ability to do maths isn’t limited too. Or that mathematics can explain everything.

Assuming that we happen to have all the abilities required to eventually understand the universe is a leap of faith, IMO. We might lack one, or a dozen of them. And we wouldn’t be aware of it. There’s not even any reason to believe that we could figure out what we lack, either. It would be IMO a strange coincidence if our evolution-given brain happened to be exactly what is needed when there’s no direct relationship between avoiding lions and figuring out that 42 is the answer to everything.

Only if intelligence, our peculiar brand of intelligence, is all that is and could be. It seems incredibly self-centered to me to believe so. Akin to assuming that earth has to be the center of the universe, for instance.

I don’t know how you get that from what I wrote. I specifically said we would create a smarter intelligence, so I’m specifically not claiming that our peculiar brand of intelligence is all that is and could be. I guess I could have written “a different kind of” intelligence, but that seems like a subtle difference.

Anyway, we’re way off in IMHO territory at this point, since I certainly cannot cite that we’ve created better or different intelligences yet. It’s my opinion that we can, and so we won’t be limited by that factor, just like our running speed didn’t limit how we get from NY to France and our lifting strength doesn’t limit us to moving 200 pound rocks.

I will claim, though, that we have created tools that expand our intellectual capabilities beyond what we could achieve without them. I guess for a cite I would start with CAD systems: Computer-aided design - Wikipedia

Fortunately we are able to model many things with mathematics, even if we can’t intuitively understand them (like i^2 = -1). But to your point, there may be a limit to what we can model based on our puny human math.
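As a small concrete example of working with something we can’t picture, here’s what that looks like in Python (the specific calls are just illustration; any language with complex numbers would do):

```python
import cmath

# We have no everyday intuition for a number whose square is -1, but the
# formal rules let us compute with it anyway.
i = 1j                 # Python's imaginary unit
print(i ** 2)          # (-1+0j)

# And the same non-intuitive object does real work, e.g. Euler's identity:
print(cmath.exp(1j * cmath.pi))  # roughly -1+0j, up to floating-point error
```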

Have you seen what a common housecat can do to a population of native birds? “The feline” is no conserver of environmental balance.

This simply isn’t true. Even if we had the propulsive capability to make a crewed transit to Jupiter and back, there are still numerous technological hurdles that we would have to overcome with regard to long duration habitation, radiation abatement, et cetera on top of dealing with the normal social dynamics of a small group in a confined space for years on end. We may be able to do this at some point in the future, but we cannot do it “today” (i.e. at the current level of space habitation and propulsion technology) with any amount of effort.

Well, some of those companies work out fine for some duration, and then they suffer massive problems due to disconnects between the corporate policymakers and the worker bees who understand how actual products do and don’t work. A lot of business decisions are far more random than you may believe, and a lot of “management theory” adds up to “We don’t really know, wag a guess.”

The actual practice of managing complex programs and organizations that cross disciplines is called operations research and systems analysis (or in a technical arena, systems engineering), and quite frankly, nobody does it especially well, at least in terms of retrospective efficiencies. Even the basic theory about how to manage this class of effort well is complex and not globally agreed upon. “Thinking in systems” is crucial to being able to handle large amounts of variable data effectively, control multidisciplinary developments, and prepare for multitudes of seemingly unlikely events, but it is not something that human beings are really evolved to do past what is immediately in front of us. It takes a fundamentally new way of thinking in somewhat the same way that chess requires fundamentally different skills than checkers.

The machines we use to “perform computational tasks that humans could never do on their own,” are really nothing more than very, very complicated abacuses and cash registers. Even the most sophisticated “fuzzy logic” device is essentially doing tasks that have already been figured out in the general sense by a person and are simply capable of running an enormous number of calculations. This is helpful–a CAD system which can determine interferences between solid parts can save person-years of development errors–but no machine can design a new part or component without being given a pretty explicit set of rules, and there is no indication from the field of computational cognition that this will change any time soon.
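To put the “explicit set of rules” point in miniature, here’s a deliberately crude sketch of an interference check (the box representation and part names are made up; real CAD kernels are vastly more sophisticated, but the division of labor is the same: a person derives the rule, the machine evaluates it millions of times):

```python
from dataclasses import dataclass

@dataclass
class Box:
    # Axis-aligned bounding box: a crude stand-in for real solid geometry.
    xmin: float; ymin: float; zmin: float
    xmax: float; ymax: float; zmax: float

def interferes(a: Box, b: Box) -> bool:
    # Two boxes collide only if their extents overlap on all three axes --
    # a rule a human worked out once; the computer merely applies it.
    return (a.xmin < b.xmax and b.xmin < a.xmax and
            a.ymin < b.ymax and b.ymin < a.ymax and
            a.zmin < b.zmax and b.zmin < a.zmax)

bracket = Box(0, 0, 0, 10, 10, 10)
bolt = Box(9, 9, 9, 12, 12, 12)
print(interferes(bracket, bolt))  # True: the hypothetical parts overlap near a corner
```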

There is a difference here between “knowing” and “modeling”; we are already beyond the point at which physics at the most fundamental levels we know (quantum field theory, cosmology, string theory or alternatives) is truly understandable in any kind of intuitive sense. The physics at these levels is so different from our everyday experience that there simply is no intuition about these things; you simply have to do the math and trust that if your answer predicts observed behavior to some acceptable level of fidelity, it must be representing (modeling) the physics accurately, even if the process appears nonsensical. There is no reason to believe that we’ll hit some kind of limit where we cannot improve our representation of fundamental behavior, but it doesn’t mean that we will “understand” the physics better in any sense that could be explained to an eight-year-old.

Well, it isn’t so much that the instructions are too complex or whatnot, but rather that it takes a certain amount of time for technology to mature sufficiently such that the interface is usable by an average person. My corollary to Clarke’s famous law is that “Any sufficiently mature technology is as simple to figure out as a flashlight,” and indeed, the push with a lot of consumer level technology is to make it more intuitive–hence, tablet computers with gesture input or GPS units with built in maps–but with something as complex as a computer or car it is difficult to make it truly simple without offloading a lot of the basic processing and decision making from the user into the device, and this requires capabilities which are still developing. You shouldn’t mistake these for fundamental intelligence, though; these are making operations and data easier to interpret, but do not replace the multifaceted and intuitive judgements which allow people to make decisions and draw conclusions with seemingly incomplete data.

Stranger

In all fairness he said we could send a person there. He didn’t say anything about getting them there alive or bringing them back.

One thing I would like to point out is:

How can humans ever know if there are things that they are innately unable to grasp, sense or understand? It’s like the whole known-unknowns vs unknown-unknowns. There could definitely be forces at work in the universe that we will never understand or even sense. And how can you program a computer or machine to do or compute things that humans cannot sense or understand? And of course, science is all about asking questions, but there could be questions we are unable to ask.

Does my dog know that he’s missing out by not watching Breaking Bad with me? Will he ever? Will dogs ever? And if they do, would they still be “dogs”, or something else?

Because the human brain is definitely finite, the limits of human intelligence and understanding must be finite. Unless you believe in some spiritual intelligence where the human brain is somehow augmented by an outside source.

(highlighting mine)
The OP doesn’t ask whether this will change any time soon, but whether it can change in principle. That is, “will human intelligence ultimately limit our growth?”, not whether it’s slowing us down now. My opinion is that intelligence won’t be the limiting factor, since I don’t see any reason why we can’t make a smarter or different intelligence. Hey, our brains are nothing but a very large number of analog abacuses and cash registers, right?

I agree that it’s possible that there are things out there that humans, even with our tools, cannot sense. So far, of course, that hasn’t been a limitation, right? We can’t sense x-rays or cosmic rays innately, but we can with tools. And, by detecting and measuring cosmic rays, we’ve been able to have some seemingly deep insights into the origins of the universe. But whatever it is that we’re hypothetically missing would also have to have no effect on anything we can measure; otherwise we’d eventually notice it indirectly. And that just seems unlikely to me.

Agreed that the human brain is finite, but many people together can accomplish things that one person cannot, and many people along with whatever AIs we manage to put together will be able to accomplish more, and so on.

Believe me, I don’t think of humans as specially selected or anything like that. It’s just that intelligence seems so far to be a general purpose tool, much like a computer is a general purpose tool. We happen to have filled that biological niche and dogs didn’t.

No. Our brains are a very large number of abacuses and cash registers which are capable of modifying themselves during operation to achieve what we perceive as self-awareness, which is really just an emergent property of the inherent complexity and fungibility of the system. How to produce that artificially is a gulf that nobody working in cognitive science today really has a good handle on how to achieve, much less when it will occur.

Stranger

But, really, if you’re saying that the human intelligence is just a general purpose tool, then you’re pretty much saying it’s limited. And is only being augmented by an ever greater use of tools, ad infinitum. Thus, our technological advancement could outstrip our innate intelligence. I think about half of our science fiction stories are about that very topic. Terminator, etc.

Anyway, I think there’s definitely a limit to human intelligence, and it might be augmented in many ways, but it’s still limited. And if it weren’t limited, or if it were increased by a significant magnitude, then we could no longer be termed “humans”.

[QUOTE=tullsterx]
But, really, if you’re saying that the human intelligence is just a general purpose tool, then you’re pretty much saying it’s limited. And is only being augmented by an ever greater use of tools, ad infinitum.
[/QUOTE]

It’s the same ‘general purpose tool’ that took us from the plains of Africa and stone tools to a global species that went to the moon, from cave paintings to quantum physics. Maybe there will be limitations in the future, but humanity has been critically linked to tool use since before we were Homo sapiens. Without tools we wouldn’t survive as a species, and as we have become more sophisticated so have our tools. I don’t see why this is a problem or why that imposes some sort of limiting factor on us in the future. Certainly we don’t seem to have run up against any limits as yet, and in fact it seems to me that our knowledge and understanding of our world and the universe is accelerating, not slowing down.

Why? Were we humans 10,000 years ago? Were we humans 100,000 years ago? How about 1,000 years ago? What about 100 years ago? We haven’t physically changed much in any of those time frames, yet the difference in our collective knowledge between even 100 years ago and today is hard to quantify, let alone 1,000, 10,000 or 100,000 years ago. Again, this trend seems to be accelerating, if anything (with some notable periods of stagnation)…yet I don’t see how we weren’t human the whole time.

Can you explain how you arrive at this?

For example, there is something we are “missing” that scientists have named “dark matter” in an attempt to explain why there is “too much gravity” (as Neil deGrasse Tyson describes it) in the universe, which we can measure.

Is it possible that it’s just an inherent property of a sufficiently complex system, and that eventually one such system would simply become self aware like in the movies or is that just a bunch of Hollywood woo?

It seems to be an important difference to me. I gave the example of animals not having self-awareness or a sense of the future. If we similarly lack a capacity, in all likelihood, we won’t be able to build an AI with this capacity since we will be unaware of its existence and probably unable to conceive of it, in the same way a lizard doesn’t have a concept of “me” and a cat doesn’t have a concept of “tomorrow”.

And I’m arguing that it is likely that we don’t have all the capabilities that could be required to understand everything unless we have been designed to understand everything.

My belief is that it isn’t human intellectual capacity that is the limiting factor, it’s the human emotional capacity. As a species we do not appear to be capable, for example, of caring much at all about people we are not bound to by familial or tribal ties, especially people who we cannot see, such as those who may exist in the future.

This quality makes it very unlikely we will do anything intelligent such as preserve the planet’s capacity to support us. The “us” is too big for us to comprehend emotionally.

And obviously, dark matter is within our reach since we can conceive its existence, deduce its properties, etc… But something else might be forever out of our reach because we’re unable to conceive it, just as the dog in the example won’t ever be able to understand the concept of “TV show”. Like the dog seeing the images on the screen, it’s possible we could notice the consequences (e.g. “there’s something rather than nothing”) without ever being able to explain them. Or maybe we won’t notice anything at all and our understanding will be incomplete without us even knowing it is.

And though we could possibly improve a dog brain so that it would get this “TV show” concept, it’s extremely dubious we could improve our own brain so that it could understand something we can’t even conceive the existence of.