Yeah, stuff like, oh I don’t know, say the atom. We’d never have been able to conceive of that 10,000 years ago! No way we could evolve to see that. Or…well, a TV. I’m fairly sure that 100,000 years ago such a device would have been literally inconceivable. Heck, it’s possible that our ancestors wouldn’t have even seen or understood the picture. As little as 500 years ago the notion of radio would have been magic and inconceivable, and even 100 years ago the idea of something like the internet, a world wide data network with literally a billion+ users would have been totally…well, you get the picture.
Yeah, it’s possible that there will be things we will never know about. But dark matter has only been a serious focus of research for a few decades, and we are worrying away at the problem like a dog with a bone, picking at it and inventing ways to probe the mystery. I’d guess that in my lifetime there will be some progress on this. In my lifetime we went from theorizing that extra-solar planets exist, with no way to prove it, to finding extra-solar planets regularly.
Not necessarily; as time passes, new technologies become cheaper, more effective, and more reliable. Laptops used to be extremely rare; now people in the developing world are getting them. Most people in Africa have access to a cell phone and antibiotics.
So when these technologies come out, it should only be a few decades between when the early adopters get them and when everyone else gets them.
Then again, cars have been around for 100 years and are still financially out of reach for the majority of people on the planet. But I’m assuming the amount of raw material that goes into genetic engineering or cybernetics would not be a limiting factor in affordability of these devices.
No, it isn’t just the number of connections or logic gates that is required; the system actually has to be capable of modifying itself and making new connections. No simple machine–even a very elaborate state machine such as a modern distributed cluster computer–is capable of this to a useful degree. The most sophisticated “learning machines”–those used for visual processing–are just barely capable of distinguishing arbitrary patterns and movements, and only to the extent that they can respond with variations on programmed stimuli.
Regarding the self-modifying computer, I wrote some self-modifying code back in the ’80s on an Apple ][. I agree that it wasn’t super-complex, but it was useful (and difficult to debug!). And neural networks modify the weights between the “neurons” and can make new connections, etc. Even if self-modifying code at the machine-code level is difficult and not very useful, at higher levels there’s no reason you cannot have a learning machine.
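To make the weight-adjustment idea concrete, here’s a minimal sketch (my own toy example in plain Python, not any particular library’s API) of a single “neuron” that learns the logical AND function by nudging its connection weights whenever it gets an answer wrong:

```python
# Toy illustration of "learning by adjusting weights": one neuron,
# trained on logical AND. Purely a sketch, not a real neural-net library.
import random

def step(x):
    return 1 if x > 0 else 0

# inputs and desired outputs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]  # connection weights
b = random.uniform(-1, 1)                           # bias
rate = 0.1                                          # learning rate

for epoch in range(100):
    for (x1, x2), target in data:
        out = step(w[0] * x1 + w[1] * x2 + b)
        err = target - out
        # the "self-modification": weights shift in response to errors
        w[0] += rate * err * x1
        w[1] += rate * err * x2
        b += rate * err

for (x1, x2), target in data:
    print((x1, x2), "->", step(w[0] * x1 + w[1] * x2 + b), "want", target)
```

Nothing changes at the machine-code level there; the “modification” lives in the data (the weights), which is exactly the higher-level sense I mean.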
Anyway, I’m not really up to speed on the current state of AI, and I’m certainly not up on where things will be in 100 years or 500 years. However, in my opinion, there’s nothing different in principle between a human brain and a (very complex) machine. If anything, though, our progress is accelerating, IMO. The difference between the level of progress 1000 years ago and 500 years ago seems to be smaller than the difference between 500 and 250 years ago, 250 and 100 years ago, and 100 years ago and today.
In any case, this is not really a debate, right? It’s all IMHO, and I’ve given my very humble opinion, so I don’t see much point in further back-and-forth.
I’m optimistic that our intellectual progress will continue to accelerate, and although I may never get the flying car, I may someday get a pretty intelligent AI to do my thinking for me.
This is an important point. Lacking any notion of X, or of what ‘wurchling’ means, could we ever develop minds that had those abilities? Maybe, maybe not. By designing many different kinds of artificial mind, and by plugging our own minds into external augmentation routines that increase our own versatility, we might not be able to find X, but we could probably find Y, Z or θ. I’m pretty sure that among our first attempts at exploring the possible phase space of artificial intelligence and augmented human intelligence we will find some pretty weird forms of mentality. A self-aware traffic control system? An artistic air conditioner? A sarcastic search engine?
By allowing these new forms of mental processing to interact with each other and to evolve over time, we will be able to explore the phase space of all possible forms of mentality.
Maybe with enough time and experimentation we might stumble across X, or find out what the significance of ‘wurchling’ is. Or maybe we won’t. X, and wurchling, may be ineffable.
And it is entirely possible that in the act of creating a population of highly diverse minds with different ways of looking at reality, we might simply create a population of mutual antagonists which loathe, and wish to destroy, one another.
I don’t buy the idea of intelligence being a limit in the first place.
People talk about the limit of what a cat can understand, say, and talk about what the corresponding limit is for humans. But there are very crucial differences between us and cats: we accumulate knowledge and we can represent information abstractly.
I think once a species crosses that line, any solvable problem is solvable by that species, by a slow ratchet effect if nothing else. The rate of progress might be laughable to the CleverCloggians, but it’s like comparing Turing machines: there’s nothing they can figure out that we cannot eventually figure out too.
A TV isn’t inconceivable. I could explain what it does to the most primitive tribesman, even 100,000 years ago, and he would understand. And he could understand the concept of dark matter too, given enough time.
You state that there’s a crucial difference between us and cats. That’s precisely my point. Why couldn’t there be something equally crucial that we lack because it was useless to fend off lions but that would be necessary for a full understanding of the universe?
We know that such a thing as a crucial difference exists. By definition, we can’t know what kind of stuff we might not be able to conceive. So, there might be a dozen crucial differences between us and something that could understand the universe, for all we know.
And since we already have several examples of such crucial jumps in ability between, say, a bacterium and us, how could we assume that we have reached the highest vantage point?
And of course, “understanding the universe” might be completely nonsensical or meaningless. But that’s another issue.
You miss my point. I’m not saying look how smart humans are compared to cats. I’m trying to say what the distinction is between sentient and non-sentient life.
Humans could be the dumbest sentient species this universe has produced, and I’d still think there can be no formal limit on what we can learn.
If this seems too optimistic, let me put it the opposite way: a situation where a sentient species is affected by some phenomenon that it could, in principle, detect is not a stable situation. It’s not an equilibrium.
If it’s possible to gain knowledge, a general intelligence can gain knowledge of the phenomenon, no matter how slowly.
(And it’s hard to imagine a situation where a phenomenon could affect us without it being possible to gain knowledge. Even an awareness that there is a phenomenon is already a gain in knowledge).
That’s not to say that other things could not halt our progress; there may be phenomena that we could never discover anything about, in principle.
All I am saying is that the extrapolation from animals that is the basis of threads like these is flawed. The thing that limits what cats can learn doesn’t apply to sentient organisms, so there is no prospect of us being stopped by that kind of ceiling.
I still fail to see how it contradicts my point. You admit that there’s a barrier between sentient and non-sentient species, and that the difference is not merely quantitative but also qualitative. (And I will add that there are other such qualitative jumps: for instance, between a creature with a nervous system and a creature lacking one.)
I posit that there might be other similar barriers. I’m not stating “humans might not be very intelligent”; I’m stating: “Intelligence isn’t necessarily enough. We might lack something else.” Something that would differentiate a creature possessing it from a creature not possessing it as drastically as sentience differentiates us from cats.
The way it’s sounding to me is that there would be something we could detect, either using our senses or using some kind of equipment, and then we would be unable to understand or figure out what’s going on?
Or is it more a matter of not being able to conceive of something that, if we could conceive of it, we could build equipment to detect and measure? Kind of like how someone living in the tropics in antiquity might never have known that water actually solidifies if it gets cold enough?
Could you ask that primitive tribesman that you can supposedly explain it to (:dubious:) to invent it from scratch? I’m thinking no…you couldn’t even go back to Victorian England and have a reasonable expectation of explaining your understanding of TV and getting a working model.
The point here is that with our supposed limited brain our species invented or created all of this stuff from the same basic hardware that we had 100,000 years ago. We went from stone tools and cave paintings to quantum physics and space flight. The same tools that allowed us to understand and use fire allow us to accelerate particles to a fraction of the speed of light, smash them together and see the results, furthering our understanding of the early universe.
Yes, there might be other barriers; I wouldn’t disagree with you on that.
All I am saying is the extrapolation from animals to humans, and saying that there might be similar limits to our understanding, doesn’t work.
We might stagnate: take ages to solve some particular problem. But that’s not the same thing as saying our minds have a “ceiling”. Sentient species do not have such a ceiling.
What I mean is, IMO it’s analogous to Turing machines. Any problem an individual Turing machine can solve, can be solved by any other (and yeah, that includes what a quantum computer can solve).
When assessing how far we can go as a species, it’s far more useful to consider our collective intelligence than any individual’s, in my opinion at least. It has been mentioned already, but consider how any group has specialist roles. This is without taking into account any potential assistance from technology.
It isn’t just what we can figure out, it is how we can invent things to figure it out for us or augment our minds to make us better at figuring things out. Richard Feynman said ‘I think I can safely say that nobody understands quantum mechanics’, which, considering that our brains are designed to help us survive in Africa and rise to the top of the social pecking order, isn’t surprising.
However, on a long enough timeline I see no reason to think cognition will remain limited to biology. Transportation is not limited to biology; cognition will go the same route. We already have the tools (like the internet) to find advice from all over the world. We can increase the speed of cognition because you can follow other people’s advice, but by and large the quality of cognition has not gone up much. Technologies like genetic algorithms and other current computer models may create solutions we couldn’t think of ourselves, but that is just a start.
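To give a flavour of the genetic-algorithm idea (a toy sketch of my own, with made-up parameters, not a serious optimizer): evolve a population of random strings toward a target phrase by keeping the fittest and mutating copies of them.

```python
# Toy genetic algorithm: random strings evolve toward a target phrase
# through selection and mutation. Illustrative only.
import random
import string

TARGET = "understand the universe"
ALPHABET = string.ascii_lowercase + " "

def fitness(candidate):
    # count characters that already match the target
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # each character has a small chance of being replaced at random
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

# start with a completely random population
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(1000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"generation {generation}: {population[0]}")
        break
    # keep the fittest half, refill with mutated copies of the survivors
    survivors = population[:100]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(100)]
else:
    print("best found:", max(population, key=fitness))
```

The program only specifies how to score candidates and how to vary them; the candidates themselves emerge from variation and selection, which is the sense in which this kind of search can turn up solutions we wouldn’t have thought of ourselves.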
So even if you are right, I see no reason that over the next few thousand years we won’t figure out newer and better ways to engage in cognition, allowing us to understand the universe in finer detail.
But precisely, using your analogy, I’m referring to a problem that couldn’t be solved by any Turing machine, regardless of how advanced.
And how could you know that a sentient species wouldn’t have any ceiling? What makes you believe that sentience is sufficient? You’re somehow assuming that sentience sits at the uppermost level, and that there can’t be any other above it, or, rather, beside it. You couldn’t possibly know what ability we lack, and there’s no reason to assume we have all the abilities required to understand everything, because our abilities haven’t been selected for understanding everything, but rather for surviving in a specific environment.
Right, and I am agreeing with you that that’s possible. I’m just saying the cat analogy and extrapolation (which is the whole reason for threads like this existing) doesn’t work. It’s not that humans have a higher ceiling than cats. It’s that the ceiling that applies to cats is irrelevant to sentient lifeforms.
Now we could say maybe there is a ceiling of a different type.
Sure. And maybe there is a planet of leprechauns with golden gonads. In both cases we have no reason to think there is such a thing at this time.
Certainly no reason to think there may be a limit that would only apply to a subset of sentient organisms, and not to all.
I wouldn’t be too surprised either way. With a caveat, of course (and maybe more than 1K years). Poor people of the future will probably look like poor people today. The question in my mind is what wealthy people will look like.
I have often postulated that the universe is going to be too complex for the human mind to fully understand. We certainly didn’t evolve under conditions where that was even remotely necessary.
Yeah, no kidding. The fact that we learn to read so well still amazes me. You’d think we evolved for it.
Good point, and granted. I simply wonder, as our knowledge gets further and further from what we can intuitively grasp, how well we’ll be able to manage it. (You referred to this kind of problem in your discussion of systems engineering.)
Yup, no kidding. And of course, this collective intelligence includes artificial as well as natural intelligences.
I wonder at what point the whole mess (or portions of it) might become self-aware. For example, could a corporation experience qualia? But that’s sci-fi.
I think I can vaguely glimpse some of these barriers, and even more vaguely glimpse ways that we could get round them.
1/ Better memories. If we could remember events that we have experienced more accurately, we could perhaps avoid some of the errors that we routinely commit. Any experience we have, and any information we acquire, could be stored away reliably with an efficient cataloging system, so that we could recall it perfectly when necessary.
2/ Better internal awareness. Much of our current consciousness appears to occur at a sub-conscious level; we are apparently only aware of a fraction of the internal workings of our mind. Our much-vaunted self-awareness is like the visible part of an iceberg. If we could examine a greater part of our mentality at will, we might be able to avoid some of the mistakes we commonly make, and some of the delusions we routinely suffer from.
3/ A deeper level of understanding of other sentient beings. Humans have a respectable amount of empathy, and can communicate using language and in several other, interesting ways. What if we could communicate an arbitrary fraction of the content of our minds? If taken to an extreme, this could result in the formation of a group mind of some sort, either as a consciousness shared between several individuals, or as a separate overmind, perhaps housed in some sort of external processing substrate.
4/ Non-human behavioural traits. Humans seem to be innately biased to behave in certain ways; here is Donald Brown’s list of human universals, behavioural traits which seem to appear almost universally in every society.
If we want to explore the universe of possible mental topologies, we might want to find out what it is like to behave in ways that are not remotely human.