We humans like to think that anything we can dream of is ultimately possible. But while the last century produced radio, rocket ships, radar, airplanes, nuclear power and trips to the Moon, we’re still struggling with cold fusion, flying cars, jetpacks and space elevators.
In fact, many of our mundane products have gotten complicated enough that regular people struggle to use them.
Is there a point where we simply cannot produce humans smart enough to plan a trip to Jupiter (let alone to the next star system)? Or, even if we could produce them, would they have to turn their intellect toward creating solutions to keep the billions of dumber humans from destroying the planet?
I’ve heard the word extelligence used. Intelligence is what’s been developed inside a single person; extelligence is what’s available outside: books, websites, and so on. I’d say it also includes having the social and political processes that allow intelligence to be used effectively.
So I’m going to say the limit depends not half as much on our intelligence as on our extelligence.
What will limit us is our lack of wisdom to properly use and control what our intelligence has created. We have built nuclear, chemical, and biological weaponry capable of driving us to extinction; an accident could end life as we know it, à la The Stand.
I believe the feline is the most perfect of the planet’s lifeforms: it’s beautiful, graceful, and just smart enough to exploit the environment without destroying it.
I don’t know. I’m not smart enough to figure that out.
You make a good point, but I think the OP is missing the mark. We could send someone to Jupiter today if we had the will and the money. It’s not lack of smarts that is keeping us from making that trip. And it’s unlikely we will “destroy the planet”. Species come and go, and we will go at some point. The sun will die, and the earth will be no more.
I think we’d like for them to come back too.
The question isn’t “will we destroy the planet?” It’s simply whether there is some ultimate level of technology we could obtain. Do you know how to build an interplanetary rocket to Jupiter? Because I don’t, and if every human were about as smart as me, I doubt we’d ever get there.
Technically all threads should resolve with “the sun will die in a few billion years.”
I once read that a third of electronic devices returned as broken actually worked fine; the buyers just couldn’t figure them out. But it doesn’t matter: virtually everyone outside a small segment of humanity has no role in innovation and design. The average person doesn’t need to be smart enough to plan a trip to Jupiter; only a small segment does.
Plus, eventually we will expand human intelligence, so the argument is moot. It may take 20 years or it may take 120, but intellect won’t be limited by genetic chance for much longer.
That is what cooperation is for. You don’t need to know everything about the trip to Jupiter to make it happen; it’s enough if you can solve some small part of it. Even in a medium-sized company, it’s usually the case that top management has no idea how stuff gets done at the company, and equally true that most of the people working there have no idea what management is doing. And yet at least some of those companies work out fine.
Better than fusion, perhaps, but still very iffy. At this point we don’t understand enough about genetics, or about the role of factors in prenatal development.
Futurists posit that humanity will bootstrap its way to increased intelligence with genetic engineering, cyborg implants, or by replacing our crude brain entirely with an AI that makes us look stupid in comparison. But this all assumes we’re smart enough to take the first step. Or that machine intelligence can add much to our cognition beyond number crunching and sheer speed of operations per second.
Does human strength limit how large of a rock a human could move? Of course not, we can use machines to move very large rocks indeed.
Similarly, we’re already using machines to perform computational tasks that humans could never do on their own. We’re using machines to design cars and airplanes, tennis rackets, and new, smarter machines.
So, we will continue to build smarter machines and continue to make intellectual progress. Whether we graft these machines onto ourselves, work alongside them, or let them work pretty much independently, I don’t see how we can discern the intellectual limits of mankind and our tools, or whether those limits even exist (leaving aside universe heat-death, the sun dying, etc.).
Exactly. Hell, most people don’t even know how to grow or harvest the food they eat, nor do they have to. They don’t have to know how to build the cars they drive either, or make the fuel that runs them. To go to Jupiter (not sure why we’d want to, but what the hell), you’d need thousands of specialists, none of whom would know how to build or run every aspect of the mission. That’s how human civilization works. Even when we were hunter-gatherers, I doubt every member of a tribe or clan knew how to do everything and could do it all well. You had people who specialized in tool making, people who were good hunters or good gatherers, or who knew something about herbs and medicine. If every human had to know and do everything, we probably never would have progressed beyond scrounging for snacks on the plains of Africa.
My WAG is that humanity is on the cusp of nearly unlimited potential growth in collective knowledge, as our data systems and information technology become more and more pervasive and ubiquitous. Maybe that singularity thingy some talk about, but at a minimum a global data network accessible to an enormous number of humans, giving us the ability to share knowledge to a degree never seen before, and expanding almost daily, worldwide, giving even more people access to information and ideas they would never have had otherwise.
From Bruce Sterling’s SF short story “Swarm”: Afriel, a human agent of the Solar System’s Shaper faction (specializing in gene-engineering), at war with the Mechanists (prosthetics and technology), has been sent to the Hive, a cluster of asteroids in a distant star system where, in air-filled tunnels burrowed through the rock, live the Swarm, a race of nonsentient beings with many specialized castes, plus about fifteen non-Swarm species of “symbionts” inhabiting the same space. His mission is purportedly scientific study, but his real mission is to domesticate the Swarm: alter their genes to make them produce things the Shapers can use. At the end, his partner, the (real) scientist Mirny, vanishes, and Swarm soldiers arrest him and take him before what appears to be a new caste, a Swarm with a giant brain, which has absorbed his partner’s mind and memories through a tentacle thrust into her head, so that it can now speak her language. Afriel’s pheromonal experiments created a chemical imbalance that the Queen detected, triggering genetic patterns and causing the brain to be born to deal with the threat.
Some things can’t be done no matter how intelligent you are. Cold fusion may well be one of them. Flying cars and jetpacks have been done, but it has generally been recognized that they are not things actually worth having. As for space elevators, I see no evidence that the problem there is lack of intelligence, as opposed to expense and lack of economic or political need.
Maybe there are useful technological advances that a more intelligent species could invent and make use of, but that we never could manage. However, you have not made a case that that is so, or even that it is likely.
I don’t believe that. There’s no reason to assume that our brain, whose job was to prevent us from ending up as a lion’s snack and which only coincidentally also allows us to understand the theory of relativity, would be a sufficient tool to, say, find the answer to the “ultimate question of life, the universe and everything.”
IMO, one must believe in some higher purpose to think such a thing. If you believe in a creator god, for instance, I guess you could envision that this god made us so that we would ultimately be able to understand everything. If you believe we’re the result of some semi-random process, there’s no reason to assume that our intellect would be sufficient to uncover all the mysteries of the universe.
Half the time the issue isn’t that the products are really that complicated, but rather some combination of 3 things is happening:
1. The consumer doesn’t give a shit. People often didn’t set their VCR clocks, not because it was really all that complicated, but because they weren’t time-shifting and therefore didn’t care that it blinked “12:00” constantly.
2. The documentation sucks. I can’t count the number of times I’ve received documentation for products that seems to have been written in Kazakh, translated into Chinese, and then again into English.
3. The user interface sucks. This ties into nos. 1 and 2: if it’s too convoluted, or the instructions are really hard to understand, people are only going to figure out the bare minimum of what they need to do to make the thing do what they want.
I don’t think much of anything is so complicated that the average person has a hard time understanding it, unless the average person is far stupider than I’m giving them credit for. I think the vast majority of that lies in the 3 points I listed above.
As for the OP, I don’t think that’s really possible; I think what’ll happen is that projects above a certain size will require very different management schemes that are fault tolerant and/or resilient, just due to the sheer scale and size of the undertakings.
I think we’re seeing the intelligent people protecting the race and planet (or trying to) from the legions of dumber people right now with the whole anthropogenic global warming issue; most people don’t give much of a shit. It’s academic, it’s not something they feel they can control, and there’s no real alternative to using electricity or driving cars for the vast majority of them. And the ones in developing countries care less about global warming than about getting their first automobile or having electric lighting.
We might not be smart enough to keep from ruining things for ourselves. We’ve done it quite a number of times already, but fortunately, that only applied to a specific culture or island. We might very well do it for the planet. No, the planet won’t be ruined, but humankind might be.
We will eventually reach the point where things are changing so fast technologically that we can’t keep up. Of course, as mentioned above, we’ll have artificial intelligence to help us out with that, but will it help us understand the consequences of our values, fast enough to avert serious problems? I sure hope so …
I doubt any of us can come very close to guessing what humanity will look like a thousand years from now (assuming we last that long; I’d guess we have at least a 50% chance of making it). Regardless, we’ll definitely face issues that are too complex for us to understand without assistance. And as for the things we understand “with assistance,” will we really understand them well enough?
A tangential question: might there be facts of science (e.g., fundamental physics) that are knowable but we’ll never be smart enough to get there? I sure don’t know.
I’d be shocked if we, collectively, will look much different than we do now.
I have often postulated that the universe is going to be too complex for the human mind to fully understand. We certainly didn’t evolve under conditions where that was even remotely necessary.
Being able to effectively run an economy is included in “intelligence”.
Computers can perform the billions of calculations needed to model better aerodynamics for cars and airplanes, but they still need humans to discover the formulas behind those calculations. And computers won’t tell humans not to use those extremely efficient airplanes to drop laser-guided smart bombs on each other because some people believe in a different fictional being.
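To make that division of labor concrete, here’s a minimal sketch in Python (every parameter value is illustrative, not taken from any real design code): a human had to discover Newton’s second law and the quadratic drag formula; the computer’s only contribution is grinding through the arithmetic ten million times.

    # Terminal velocity of a falling sphere by brute-force timestepping.
    # The physics (F = m*a, drag = 1/2 * rho * Cd * A * v^2) is the human
    # contribution; the machine just repeats the arithmetic.
    RHO_AIR = 1.225   # air density, kg/m^3 (illustrative value)
    CD = 0.47         # drag coefficient of a sphere (dimensionless)
    AREA = 0.05       # cross-sectional area, m^2
    MASS = 5.0        # mass, kg
    G = 9.81          # gravitational acceleration, m/s^2
    DT = 1e-6         # timestep, s

    v = 0.0
    for _ in range(10_000_000):                      # ten million steps = 10 s
        drag = 0.5 * RHO_AIR * CD * AREA * v * v     # human-discovered formula
        v += (G - drag / MASS) * DT                  # Euler integration
    print(f"speed after 10 s of fall: {v:.1f} m/s")  # approaches ~58 m/s

Deriving the drag law took centuries of human physics; evaluating it ten million times takes a laptop a few seconds, which is exactly the split described above.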