Do you think the singularity is real? And will it be the end of the world?

We can’t compare our intelligence and awareness to evolution, as evolution is a natural process that has neither intelligence nor awareness.

Abiogenesis is still a mystery, but let’s say that life originated through a random combination of amino acids and other chemicals in some seafoam in a puddle on some beach in Earth’s prehistory. We have more intelligence and awareness than a puddle of muck, but that does not mean that we can, with certainty, create living beings out of sea muck.

There are things that are possible through random natural processes that we may not be capable of achieving as intelligent animals. The development of true intelligence MAY be one of those things.

I didn’t mean to imply that the continuing increase of intelligence would go on infinitely. I said elsewhere in this thread that there are physical limits on intelligence that prevent omniscience. I do think that if we made something smarter than ourselves, then it would be possible to make beings smarter than itself in ways that we cannot understand - because it’s smarter than us! But there are physical limitations to intelligence and I’m sure a limit would be reached. That limit might be when a solar system-sized hyperdense quantum computer gets too big to maintain its structure due to its collapsing into a black hole, though, and that would probably be enough beyond us that it could easily fit the singularity scenario.
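Just to hang a rough number on “physical limits”, and assuming (a big assumption) that something like the Bekenstein bound is the relevant ceiling here, with constants rounded: the information a region of radius R and energy E can hold is capped at roughly

$$ I \;\lesssim\; \frac{2\pi R E}{\hbar c \ln 2} $$

so, for example, a 1 kg, 1 m-radius lump of matter (E = mc^2 ≈ 9 × 10^16 J) tops out somewhere around 2.6 × 10^43 bits. The exact figure doesn’t matter for my point; what matters is that whatever the ceiling for a mind is, it’s finite.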

But both are problem solving & computational techniques.

Yes, it does mean that we can. We have access to the same elements that nature had, and the existence of life is proof that it’s possible. As I said, it’s just like flight; the existence of birds proved it could be done, long before we could pull it off ourselves.

And do you have an example of such a thing?

Something, mind you, that unlike life is not beyond us due to the sheer mass, energy or scale involved, as building a galaxy would be. Life, ultimately, is just an extremely complex mass of perfectly ordinary matter operating at energies we can easily handle.

I can’t quite put my finger on the reason why, but I get the same feeling about these extrapolative arguments as I do when someone plays me at the shell game. The result depends on something small, but incredibly significant being concealed.

Maybe, maybe not. Hilary Putnam wrote a paper called “Are Robots Conscious” in which he argued that, if a machine could pass the Turing Test, it would be arguably conscious, which is a sufficient condition for granting it rights. So if we achieve human level A.I., pretty much by definition we’ve created something we can’t enslave. And on a practical level, with rights being extended past humanity and into the animal kingdom, the progression of moral inclusivity argues that, at best, A.I. would have a period of slavery followed by emancipation.

Turn it around though, and imagine an A.I. that didn’t want those rights, that was happy being a slave, either because it was explicitly programmed not to want them, or because it failed to develop the sense of self, individualism, and integrity that demands those rights. In what sense, then, is it a human-like A.I.?

Any attempt to improve on a successful human-level A.I. gets into very murky conceptual territory. If we make a human A.I. that’s simply faster at thinking and better at memorizing, all we’ve done is put an apparently conscious face on things that computers already do for us, and I see no particular economic benefit to making it conscious.

There’s a chicken and egg problem to improving on human-level A.I. To paraphrase Gordon R. Dickson’s point in Dorsai, could a congress of apes build a super-ape that wasn’t just a stronger, faster version of themselves? Could they reject that line of development and choose superior usage of tools and greater brainpower at the expense of muscle mass?

Like walking on Mars, we know in a deep sense that flight is possible and what sort of thing it involves. And your point about the fact that we exist demonstrating that consciousness is possible in purely materialistic terms is a good one.

However, what’s conceptually opaque is the idea that consciousness is possible in some material that isn’t a brain. If we grew a brain in a vat and it was conscious, we wouldn’t say that we achieved A.I., we’d say we achieved growing a brain by other means.

Much of my scepticism in this comes from the continual failure of computability models to advance A.I. significantly. Given its obvious limitations, it appears to be basically the wrong medium in which consciousness might arise.

I think that was a good point, 700,000 years ago that is. I do think the discovery and use of fire was one example of something that did drive our later evolution and it was the result of intelligence and awareness in a natural feedback process.

I do know that there is currently a race among several labs around the world to be the first to create artificial life. When one takes the advances in computing power into account, I do think that this race will be completed in a few years.

I’m inclined to think that we are capable.

To put my economic argument against A.I. a little differently:

Humans were successful in evolutionary terms because we were a good blend of features: intelligence combined with opposable thumbs combined with bipedalism, etc. In the brain we see the same thing: memory, perception, reasoning… but none of them perfect. There’s evidence that our imperfect cognition is a benefit to us: an imperfect memory keeps our brains from filling up with information; imperfect reasoning allows for leaps of logic and intuition.

Technology has benefitted us because it overcomes our limitations by other means. We don’t research super-powerful muscles; we build cranes. We don’t research super-fast running; we invent cars. We don’t research ‘mentats’; we invent computers to carry out calculations far faster than we can.

Inventing human-like A.I., with the hope of improving it, is like researching changes to our bones and musculature so we can fly, rather than inventing airplanes. We exceed ourselves to the extent that we jettison ourselves from the equation.

Why do we need to start with an AI that is equal to or more intelligent than humans?

We could start with something 1/2 as intelligent as a human (or maybe 1/1000), fire off a simulation of evolution from that point and end up with something well beyond human, right? We just need to calculate the optimal starting point given our available computing power and economic resources.
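To make that idea a little more concrete, here is a minimal sketch of that kind of “fire off a simulation of evolution” loop. Everything in it is a placeholder assumption: the bit-string “genome”, the score() stand-in for intelligence, and the population and mutation constants are all made up for illustration; finding a real, measurable fitness proxy for intelligence would be the genuinely hard part.

```python
import random

# A toy genetic-algorithm loop, purely to illustrate the shape of
# "start with something far below the target, then select and mutate".
# All names and numbers here are made-up placeholders, not a real proposal.

GENOME_LEN = 64        # arbitrary genome size
POP_SIZE = 100         # arbitrary population size
MUTATION_RATE = 0.01   # per-bit chance of flipping each generation
GENERATIONS = 200      # arbitrary stopping point

def score(genome):
    """Stand-in 'fitness': just counts 1-bits. A real attempt would need a
    measurable proxy for intelligence here."""
    return sum(genome)

def mutate(genome):
    """Copy a genome, flipping each bit with probability MUTATION_RATE."""
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def evolve():
    # Start well below the "target": every genome is all zeros.
    population = [[0] * GENOME_LEN for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Keep the fitter half, refill the rest with mutated copies of survivors.
        population.sort(key=score, reverse=True)
        survivors = population[: POP_SIZE // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(POP_SIZE - len(survivors))]
    return max(population, key=score)

if __name__ == "__main__":
    best = evolve()
    print("best score:", score(best), "out of", GENOME_LEN)
```

Whether anything like this scales from toy bit-strings to minds is, of course, the whole question being argued in this thread.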

So, a few questions:

  1. By what year do you want to reach the singularity?
  2. How much money are you willing to spend on computing power?
  3. Are you certain that there aren’t very real constraints of energy and information capacity (all of the atoms of our planet can only represent a finite amount of information)?

I’m not sure that that stands in the way of the singularity, or of AI, either. AI doesn’t have to be about developing a human-like intelligence. To paraphrase the AI text I had in school: “humans did not create machines that fly by creating mechanical replicas of birds; we will not create machines that think by creating computerized replicas of our brains.”

Plus, I think it’s quite short-sighted to claim that there’s no economic benefit to a computerized human level of intelligence. Humans are expensive. We take years of preparation and training to be useful thinkers. We use inefficient and expensive chemical fuel. Humans are fragile. We require a much narrower range of temperatures than computers do, plus specific gases and pressures. There are huge economic benefits to replacing all kinds of human labor with machines.

It’s like a human subjected to or raised by very good brainwashing techniques. If I raised a bunch of human children and brainwashed them to the point of being happy slaves, they’d still be slaves. I do see something like your argument being used in the future to justify slavery of AIs, which is why I mentioned that they wouldn’t be called slaves.

Um, a human-level mind that thinks a thousand times faster can do a thousand years’ worth of research in a year. You can’t see any uses for that? And as pointed out, they can be made far tougher than we can be, among other benefits.

As I said, evolution managed to pull it off, and it’s less creative than we are, less capable of major leaps in design.

Not really; there’s no reason to consider consciousness all that special, or brain tissue either. This claim, and I’ve heard it numerous times, that there’s something special about living tissue that only it can support consciousness smacks of a return to vitalism.

It seems more likely to me that the architecture is wrong - brains are massively parallel, after all - and that we have little knowledge of the brain’s “software”.

Evolution can’t be seen as a problem-solving or computational process, as it has no goal or guidance. It’s not trying to solve a problem. It’s just the natural result of random events passing through a filter. Life evolved for hundreds of millions of years without creating intelligence as humans have it, and it’s not because it took that long to build up to humans; it’s just that the right random events didn’t happen in the right order until recently.

If we have no idea how life was formed, how can we begin to suppose that we can duplicate the process that created it? We aren’t even sure whether abiogenesis occurred on Earth - for all we know, life formed under some kind of conditions that aren’t duplicated on Earth and just fell to Earth and adapted to its environment - panspermia is in no way off the table in the abiogenesis debate.

Even if it did occur naturally on Earth, we have no idea how it could have happened. We have observed evolution; we have never observed life being created from non-living material. For all we know, the processes that lead to life being created are so highly improbable that it only happens once in 100 billion years, and its formation on Earth only 8 or 9 billion years after the Big Bang was extremely unlikely, but since we are it, it seems normal to us. This could be an explanation for the Fermi Paradox.

If the first self-reproducing material only happened because of a series of several extremely unlikely things happening purely through chance, we will never duplicate it because we can never know how it happened the first time. It’s like saying “Because we know someone rolled a 6 a million times in a row on an unloaded die at some point in history, we inevitably will be able to do the same thing.” It’s not a valid assumption. Not all possible things are duplicable.
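Just to put a number on how unlikely that streak is (straight arithmetic, and the analogy itself is only illustrative):

$$ P \;=\; \left(\tfrac{1}{6}\right)^{1{,}000{,}000} \;=\; 10^{-1{,}000{,}000\,\log_{10} 6} \;\approx\; 10^{-778{,}151} $$

Knowing that it happened once tells you nothing about how to make it happen again.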

And that’s how it solves problems. Specifically, the problem of genes getting themselves into the next generation. It’s not consciously designed to do that, of course, but that is what it does. It doesn’t know that it’s solving the problem of, say, how to more efficiently streamline a bird species, nor does it intend to do that or anything else, but that’s what it does.

We don’t need to, any more than we needed to grow feathers to fly. We can come up with our own techniques.

It seems to me that arguments of the type of the singularity and simulation argument (what Mangetout aptly called extrapolative arguments) have two underlying implicit assumptions in common:

  1. Extrapolatability: The assumption that whatever is being discussed can be extrapolated essentially to infinity (or at least a substantial degree further). The singularity assumes that intelligence (vastly) greater than ours is possible, and the simulation argument assumes that it is possible to form a near-limitless space of simulations; while either seems reasonable and even likely, neither has been shown conclusively as far as I know. In both cases, there may exist emergent counteragents that are as yet undetected.

  2. Necessity: Basically, just because it’s possible doesn’t mean it’ll ever happen. Certainly not in a finite universe (I’ll leave the infinities to those more versed in this sort of argument). Thus, even if it were in principle possible to create AIs able to create better AIs, there’s no necessity for us to do that, nor any necessity for the first generation of AIs to create the second; similarly, just because the theoretical possibility of nearly limitless simulation exists doesn’t mean it’s necessary to utilize it.

Furthermore, the singularity also seems to rely on a popular equivocation between intelligence, consciousness and free will, which appear to be quite distinct concepts. I don’t believe that an intelligent being is necessarily conscious, and I believe that free will is a logical impossibility. Thus one could, without any fear of cybernetic armageddon, program ever more intelligent AIs (and have them program themselves, if possible), since, with the AIs lacking consciousness and, more importantly, free will, one would be in control of this process the whole time.

And the simulation argument is just solipsism in a 21st century guise anyway, so there. :stuck_out_tongue:

Why would a simulation allow suicides?

:confused: Why does god need a spaceship?

(Seriously, though – why wouldn’t it?)

Well, if the entities were programmed to exist barring ‘natural accidents’, wouldn’t an entity taking its own life subvert the program? If it had no measurable effect on the whole, why would it be included in the program?

Why should the entities be programmed to exist barring natural accidents? Why wouldn’t it have a measurable effect?
You can’t prove anything about the nature of the simulation by making assumptions about the nature of the simulation.

Seeing as we are talking about a completely hypothetical scenario, I don’t think anyone is going to be proving anything. All I’m saying is, what possible purpose would suicide have in something designed by an intelligent being? It seems to be a counter-productive, nay, even dangerous, design flaw to have built into the main entities of your programmed simulation, don’t you think?

Not really.

My dog is a conscious, self-aware entity. Her consciousness is not a human consciousness. Heck, I would say even a fly is conscious at some low level, but certainly not a human consciousness.

I think you are overcomplicating consciousness. Yes, it is a slippery, difficult-to-define concept, but as you said, you know it when you see it. An AI consciousness would not necessarily conform to a human model of consciousness, and even if we built one explicitly to act as such, is there any reason it would have to remain that way? Sort of the idea of a superfast evolving AI. Once it becomes self-aware, the AI, being a computer, would be free to explore all sorts of new evolutionary possibilities for itself.

As for an emergent AI, admittedly it is a shot in the dark and we have no idea how it would work or why. But we do see emergent intelligence in the real world (e.g. ant/bee colonies). I am not sure anyone has ever explained it, but it is there and does exist. I do not mean to wave a magic wand and say it will happen or is even possible, but the world can be a mysterious place, and I am not sure you can take the notion off the table merely because you cannot adequately describe a mechanism for it.

No, not at all. You’re making unwarranted assumptions and coming to unsupported conclusions. For starters, we might not necessarily even be the main entities of the simulation – we might well be just an emergent property of it, a mere side effect of a simulation designed for a completely different and unknown purpose.
Even if we were all this simulation exists for, what harm would suicide do? It seems to me to be a necessary element of simulating us, since we are definitely capable of killing ourselves; quite simply, a simulation excluding that possibility wouldn’t be a simulation of us.
Besides, if this is merely a simulation, who’s to say there isn’t a simulated afterlife as well, which those that kill themselves merely enter faster than the others?
Both your assumptions about the simulation’s goals and your assertion that suicide is undesirable are flawed.