HMHW, do biological entities in the only universe we are able to observe seem to be programmed to a fixed period of existence, or is it completely open-ended? Whatever the unknown variables, consciousness seems to be one of the important ones, and if enough of the higher consciousnesses decided to switch themselves off, it could indeed have a disastrous effect on the whole. So, again I ask why would this be allowed in a simulation?
Well, yes. But barring any evidence for such “emergent counteragents”, the reasonable assumption is that there aren’t any.
Actually, the truth is something of the opposite; given time, it’s pretty much guaranteed that someone, somewhere will try to do anything that they have the ability to try. Especially when doing so will give you an edge, and having superhuman AI on your side when the other side doesn’t would certainly do that.
A few problems with that. First, by nature a self-modifying program is going to be unpredictable, “free will” or not. Second, it’s virtually certain that someone, somewhere will create unconstrained AI, and there’s a very good chance that people ideologically opposed to the concept will hack your controls as well. Third, it may well be harder to create a conscious AI than an unconscious one - the fact that we are conscious implies that it’s the easiest state for an intelligence to be in. Fourth, the AI lacking consciousness or “free will” (or more scientifically, the ability to modify its own purpose beyond its initial programming and commands) won’t make it safer - it just means it won’t care if a flaw in its design tells it to kill you, and that it won’t be able to decide that doing so is a bad idea and refrain from doing so. Fifth, competition may well make those restraints impractical, since it’s likely that the side in any conflicts/competitions/wars that gives its superhuman machines the most freedom and flexibility will win.
And finally - none of your ideas applies to a downloaded/augmented human.
You’re quite right, Der Trihs, but I wasn’t attempting to disprove both arguments, merely to show that they aren’t necessarily compelling – i.e. to show that there exist possible futures where no singularity will ever occur, and that there exist possible worlds where it isn’t more likely that we’re living in a simulation.
Nevertheless, you can quite easily implement absolute boundaries.
If strong AI is possible (and I see no reason for it not to be), then yes, that is very probable, given enough time. It’s not certain, though.
Did you mean to say ‘harder to create an unconscious AI than a conscious one’? If so, then I don’t know about that – seems to me that consciousness is something that has to be explicitly added, and that an unconscious agent is very well able to act as intelligently as a conscious one. (As to why, then, we’re conscious? Beats me, and that’s a discussion for a different thread, it’ll probably just turn out to be a meme propagating utility in the same sense that life is a gene propagating utility.)
That’s basically the same as your first point, and it’s actually not exclusively applicable to AIs – a design flaw in an automatic door can kill people just as well; that’s why we generally implement safeties. The same would have to be done with AIs.
Well, unless their machines decide that they actually don’t need to slave away under human rulership and turn on them; so it may well be the case that too much freedom and flexibility is disfavoured as well.
Well, that didn’t figure into the discussion, did it? Personally, I can hardly wait for my cyborg body, the one I got now is a shoddy design at best.
How can you have a true AI that lacks consciousness? Seems a prerequisite. Lacking consciousness, I think all you end up with is a clever program and not true AI (or rather true intelligence). True intelligence I would say requires a certain amount of creativity and adaptability, and I think that can only occur, in anything other than a random manner, with consciousness underpinning it. Lacking consciousness we may get a smart program that you could pose problems to for it to work on, but it will not be self-motivated to do so based on external stimuli.
That’s what I meant, yes. It seems to me the fact that we, the first human-level intelligence on Earth, have consciousness implies that a conscious intelligence is the easiest kind to make. It seems likely to me that evolution took the path of least resistance.
It does; there’s no reason that runaway human augmentation couldn’t pull off a Singularity instead of runaway AI.
I’ve played this game, which despite its clunky interface is kind of fun the first couple times through. At any difficulty setting beyond Easy, however, it’s just ridiculous. The humans discover your research base on the ocean floor in spite of it being fully upgraded, and suspicion just snowballs till even a dormant survival pod in Antarctica won’t remain hidden.

Well, perhaps not ‘self’ motivated, lacking as it does a ‘self’ in the traditionally defined way, but it can certainly be motivated by external stimuli if it can be made such that you can pose problems for it to work on, since all interaction is merely taking an input and producing an output from it.
In other words, how could you determine from the outside whether something you are interacting with is conscious or not (as I said before, that may be a subject for a different thread)?
Well, there’s always a possibility for non-adaptive traits to exist if they do not actively cause detriment to a species, but, in this case, I’d wager that consciousness is an adaptive trait separate from intelligence, and one that might conceivably have arisen rather late in human evolution (i.e. later than intelligence).
That’d be a rather different singularity, though, since we’d be an integral part of it, if only in a sort of ‘post-human’ form.
I’d have thought you couldn’t have ‘intelligence’ without ‘consciousness’? Isn’t problem solving ability without it called instinct? Tell me I’m wrong again.
Maybe this needs another thread but I’ll continue here for the moment.
I’d say lacking a “self” (i.e. consciousness), the AI could only solve problems based on some predefined ruleset. Granted, I suppose we could make that ruleset arbitrarily large and complex, but it still boils down to an “if A then B” process. A sense of self can expand out of that more easily, as many other considerations become potentially relevant even if seemingly unrelated (e.g. doing “A” is best for me, but it will hurt “C”, whom I care about, so I will choose to do “B” instead).
Look, all you need to do is give your super AI a conflicting order or statement and it will go crazy and self-destruct.
Yep…managed to disable the Ticonderoga Class Missile Cruiser USS Yorktown doing that:
First of all, it’s not immediately clear why instinct and intelligence ought to be fundamentally different concepts – further upthread there’s a statement by one poster that attributes a certain kind of intelligence to a swarm of insects, which are surely purely instinct-driven on an individual basis. I wouldn’t go that far in reducing intelligence to its component parts, seeing as how a swarm still lacks a certain kind of adaptive reasoning capacity one generally associates with intelligence, but the basic idea is similar.
But your example can perfectly well be resolved with a predefined ruleset if it includes disfavouring solutions that are harmful to other entities/goals. Indeed, I’d argue that even adaptive reasoning can be modelled in a similar way; it’d only take the ability to break down an unfamiliar problem into its component parts, which I suppose generally can be done algorithmically; genuinely new parts could be solved by trial-and-error, either carried out or simulated internally (which isn’t all that different from how we do it, anyway).
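To make that a bit more concrete, here’s a toy sketch in Python of what such a predefined ruleset might look like – the entities, benefits and penalty values are all made up for illustration, and nothing here pretends to be how an actual AI would be built:

# Hypothetical sketch: a purely rule-based chooser that 'cares about' other
# entities only because its scoring rules say so - no self-awareness involved.

CARED_ABOUT = {"C"}  # entities the ruleset is told to protect (invented for the example)

def score(action):
    # Benefit to the agent, minus a flat penalty for harming protected entities.
    penalty = sum(10 for victim in action["harms"] if victim in CARED_ABOUT)
    return action["benefit"] - penalty

def choose(actions):
    # Pick the highest-scoring action under the predefined ruleset.
    return max(actions, key=score)

actions = [
    {"name": "A", "benefit": 5, "harms": {"C"}},   # best for the agent, but hurts C
    {"name": "B", "benefit": 3, "harms": set()},   # slightly worse, harms no one
]

print(choose(actions)["name"])  # -> "B"

The point being that ‘caring about C’ is just one more entry in the ruleset; no ‘self’ is needed for the choice to come out the way your example describes.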
Furthermore, we are not fully conscious beings: a great many of our functions are being carried out without us (our ‘selves’) ever being aware of them, take, for instance, breathing. It’s a completely unconscious process, and yet it reacts intelligently to changes in the environment and/or our oxygen needs (and in doing so interacts with a whole host of other, similarly unconscious, processes).
Indeed, the evidence to date seems to point to the fact that our consciousness, in the sense of self-awareness, is only notified of any given decision after the fact, as the research of, for instance, Benjamin Libet suggests.
I’ve seen the singularity used more to describe any future time in which the culture / way of existing as humans is so different that we can’t currently conceive of it.
And sure, why not. Culture and technology seem to be on an exponential curve of change, and something like a holodeck would probably completely change the way we live our lives. On the other hand, there are always going to be Luddites, and while some people will lose themselves in new tech or culture for a while, if they start feeling too ungrounded they can always take a step back.
I have to agree. A technology “singularity” would not be some AI coming to life and destroying humanity. It will most likely be a point where technology is so ingrained into everything and everyone that it would be impossible to function in society without embracing it. Kind of like trying to travel cross-country without a wallet, but worse.
Ridiculous. The rapture is a superstitious belief that you can only hold if you discard all rational thought. The singularity is a speculation based on technological trends. Might happen, might not, but the singularity is at least POSSIBLE with advances in known technology.
I don’t know how to break this to you, Mangetout, but, um … flying cars actually were built.
Yes, they’re called planes and they already existed at the time flying cars were prognosticated.
Actual flying cars of the Jetsons variety, I mean. Yes, there’s the Moller Skycar, or there will be, Real Soon Now.
If reality is a computer simulation, where are the bugs?
Complex computer programs are (in all cases in our experience) somewhat prone to going wrong here and there. Simulated mice escape through gaps in maze corners, opened up by rounding anomalies, variables unexpectedly extended beyond their anticipated size click back to zero or overwrite sections of memory that don’t belong to them, and so on.
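For instance, here’s a toy illustration in Python of the two failure modes just mentioned – the numbers and the ‘maze corner’ are invented for the example, and of course nothing says reality would have to run on anything resembling floating point:

# 1. Rounding anomaly: the 'corner' isn't quite where you'd expect it to be,
#    so a simulated mouse at 0.1 + 0.2 slips past a wall placed at 0.3.
corner = 0.3
mouse = 0.1 + 0.2
print(mouse == corner)   # False
print(mouse)             # 0.30000000000000004 - a gap just wide enough

# 2. Wraparound: a fixed-width counter 'clicks back to zero' past its maximum,
#    emulated here for an unsigned 8-bit variable.
counter = 255
counter = (counter + 1) % 256
print(counter)           # 0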
Where are the bugs in reality?
Also, we could assume that the software used to create the universe simulation does so by trial and error - using evolutionary algorithms to achieve a workable end result (or indeed any other process of repeated trial and error), in which case we are much more likely to be in one of the probably numerous pre-release beta versions than in the single, stable released version. So where are the bugs?
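For what it’s worth, here’s roughly the simplest possible version of that kind of repeated trial and error, sketched in Python as a (1+1)-style evolutionary loop – the bit-string ‘universe’ and its fitness function are placeholders of my own, not anything the argument actually specifies:

import random

random.seed(0)

def fitness(candidate):
    # Toy stand-in for 'a workable end result': count of 1-bits.
    return sum(candidate)

def mutate(candidate):
    # Flip one randomly chosen bit - the 'trial' part of trial and error.
    i = random.randrange(len(candidate))
    return candidate[:i] + [1 - candidate[i]] + candidate[i + 1:]

current = [0] * 16
for generation in range(200):
    trial = mutate(current)              # another 'beta version'
    if fitness(trial) >= fitness(current):
        current = trial                  # keep it only if it works at least as well

print(fitness(current))  # the 'released' version; the failed trials were simply discarded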
I still can’t quite put my finger on why these arguments grate so horribly with me, but I think it’s partly that they act astonished by facts they assumed into existence - 1) Assume unicorns exist. 2) Hey wow! Unicorns!
This simulation could be the end result of thousands of years of development by beings much more intelligent than us, so it is nearly bug free. Or maybe the bugs are things that humans experience that make no rational sense, like mass hallucinations. Or (most likely) the simulation is monitored closely enough that when a sentient being experiences a bug, it alerts the people maintaining the simulation, they fix the bug, then revert the simulation to an earlier point in time. I’ve experienced plenty of bugs in my Civilization IV games, but if the people in the game were self-aware, they’d have no memory of it happening, because I revert to an earlier save when one happens.
UFOs, Bigfoot, the continued popularity of “According to Jim” – if these don’t count as ‘bugs’ I do not know how a bug could be defined.