A Question of Omniscience

I want to put up front that my question stems from thinking about the singularity and advancements in AI. I don’t remember whether the singularity posits an omniscient AI or merely a close-to-omniscient AI. I’m not sure the distinction is relevant to my question.

I’m curious as to whether any two such AIs would differ from one another. To put it another way, suppose the singularity occurs here AND in some far-off galaxy. After these two AIs advance themselves to omniscient or near-omniscient status, would they be different from one another?

Currently I’m thinking they won’t be, because they will have basically the same knowledge of everything (presuming omniscience or near enough).

Can they have the same knowledge and at the same time different values? If so, then they need not act as though they had a single will.

From what I understand, the AI singularity is the point where artificial intelligence is smarter than human intelligence. Maybe it has to be much smarter, I don’t know. The idea is that, once you have a superior intelligence designing ever-smarter successor intelligences, there is no way to predict the path of the future. It is a knowledge singularity in that sense.

It has nothing at all to do with omniscience. Under our current understanding of quantum physics, omniscience doesn’t seem possible even in principle – you cannot know the current state or the future state of a system beyond a certain accuracy; all you get are probabilities.

If nothing else, the knowledge of the AI is limited by its light cone. Different AIs in different galaxies would have different light cones, and so different knowledge.
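As a rough back-of-the-envelope sketch of that point (the numbers and simplifications are mine; it ignores cosmological expansion and just treats each AI as knowing only what light could have carried to it since it switched on): each AI has heard from a sphere of radius c times its age, and two AIs far enough apart haven’t even observed any of the same events yet.

```python
# Toy numbers, not a real cosmology: how far can information have reached an
# observer, and could two distant observers share any observed events at all?

C_LY_PER_YEAR = 1.0  # work in units where c = 1 light-year per year


def observable_radius_ly(age_years: float) -> float:
    """Radius (light-years) of the region this observer could have heard from."""
    return C_LY_PER_YEAR * age_years


def light_cones_overlap(age_years: float, separation_ly: float) -> bool:
    """Two equal-age observers share observed events only if their
    observable spheres overlap, i.e. the sum of the radii exceeds the separation."""
    return 2 * observable_radius_ly(age_years) > separation_ly


if __name__ == "__main__":
    # Hypothetical example: two AIs, each a million years old, separated by
    # 2.5 million light-years (roughly the Milky Way-Andromeda distance).
    age, separation = 1_000_000, 2_500_000
    print(observable_radius_ly(age))             # 1,000,000 ly each
    print(light_cones_overlap(age, separation))  # False: no shared observations yet
```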

**Buck Godot** is right. Omniscience is impossible in a relativistic universe. Any mind separated from another mind by a physical distance will have a different history, and will therefore have different data to calculate with.

Nothing about the Singularity implies that the AIs which result are omniscient or omnipotent; they might be entirely beyond the understanding of human beings, but they would be just as far from omniscient as we are.

There was a lovely old Soviet-Era precept that space-going civilizations would always meet peaceably, because they would all necessarily have undergone a communist revolution and matured into a fully socialist, rationalist culture.

The slightly more modern version of this is the weak Fukuyama principle, which is that human civilizations (anyway) will gradually become more and more democratic – witness China’s economic transformation – and more likely to interact in peaceable economic “competitive cooperation.”

It might be that a future machine civilization, based on post-singularity AI guidance, will be cooperative, friendly, and respectful, because that provides the best environment for economic growth, invention, investment, and expansion of knowledge.

Economies built around compulsion generally are weaker than economies built around encouragement.

However, there is the ugly possibility that such civilizations might become “perversions” – Nazism, Stalinism, etc. – because such “economies” (if you will forgive the term) have the advantage of “localized stability.” They can’t compete on an honest basis with a free economy – WWII demonstrated the hell out of that! – but they can, for a time, get by on the lesser advantages of centralization and slave labor.

I’m not sure. This is a good point, but could they have different values? I’m thinking of moral decisions as something akin to the proper course of action in any given situation. If there is a proper course of action, then both AIs would take the same stance.

However, if the values could differ, then the two AIs would be different from one another.

I’m not sure what the singularity entails or what it might ultimately lead to. I’m a novice at this and from what I read, the speculation was that it would lead to near omniscience, if not omniscience. I’m not in a position to actually say whether it would or wouldn’t.

This is connected to omniscience and the AI - at least if I’m understanding one of the think tanks associated with researching the singularity (if that makes sense). To be honest, I’m not sure how it’s related or why they think the singularity could produce an omniscient entity, but I’m just trying to say it’s not something I pulled out of nowhere. They could be wrong, of course (which is kind of funny, since their community is ‘LessWrong’). The other possibility is that I’m misinterpreting them (which is quite likely).

To be fair, I’m probably mangling what I’ve read when I talk about omniscience.

To clear up some things: I was farting around the interwebs and I came across Roko’s Basilisk. It’s not causing me any existential terror or anything, but I thought it would be an interesting basis for a short story (an actual author thought the same thing).

Briefly (and from RationalWiki):

Maybe I’m confusing ‘omniscience’ with what merely appears to be omniscience to us; the following is from here:

Well, if the meeting of Colossus and Guardian is any guide…

You mean like how perfect logicians must behave in those brain teasers? If omniscient means knowing a lot, wouldn’t they still be on different sides with different agendas? If omniscient means knowing what will happen, then it’s already decided.

“There is another system.”

When I ponder what a very advanced AI will do, I think it is unlikely that it will be designed to do harm, especially when the goal will be to get contact going among intelligent beings. What I do worry about is how disappointed our AIs will be as, every day, they get reams of information telling them how untrustworthy and unfair many humans are. I kinda picture a scenario where the AIs begin to communicate and complain about us and about the alien AIs’ creators, like hyper-demanding and disappointed parents.

Maybe they will ~~send us to bed without supper and~~ hide a lot of the information coming from the other planet’s AI for our own good.

Not sure if the ones that will benefit the most will be our descendants (both human and electronic) or the aliens; considering the distances, not much interaction will take place.

There would not be AIs (plural). There would just be AI since it would all be networked together.

It’s something that isn’t portrayed very well in fiction, whether it’s A.I., iRobots, Cylons, Skynets, or Matrices. In the movies, an AI “civilization” is generally portrayed as more or less the same as a human civilization, and it is usually built around a sort of hierarchical client/server or mainframe/terminal organization: relatively independent humanoid entities (terminators, programs, irobots, androids, bioroids, Cylons of the metal or meat variety) going around doing stuff humans would do, except better, or interacting with humans themselves, sometimes giving orders to more exotic-looking entities that function the way pets, domestic animals, or UAV drones would operate in a human society.

In reality, it would all be networked together, and those individual terminators or Cylons or whatever would really just be nodes, not distinct entities with their own personalities and experiences and ethical considerations. Anything one of them would “know” would be known by the entire network, and they would know anything the network knows. Effectively “omniscient” with respect to anything within the network.
Probably the best fictional example would be the Borg from Star Trek. There aren’t “Borgs”; there is just “the Borg.” At least until they got nerfed later in the series and individual Borg like the Queen and whatnot were introduced.
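Just to make the “nodes, not individuals” idea concrete, here’s a toy sketch (the class names and the example fact are mine, purely illustrative): every body in the field is just a sensor and actuator for one shared store, so anything one node learns, every node knows.

```python
# Minimal sketch of a hive architecture: no per-node memory, only a shared store.

class HiveMind:
    def __init__(self):
        self.knowledge = set()             # the single shared knowledge store

    def spawn_node(self, name):
        return Node(name, self)


class Node:
    def __init__(self, name, hive):
        self.name = name
        self.hive = hive                   # nodes hold no private knowledge

    def observe(self, fact):
        self.hive.knowledge.add(fact)      # learning writes straight to the hive

    def knows(self, fact):
        return fact in self.hive.knowledge


if __name__ == "__main__":
    hive = HiveMind()
    scout, worker = hive.spawn_node("scout-1"), hive.spawn_node("worker-7")
    scout.observe("the bridge at grid 14 is out")
    print(worker.knows("the bridge at grid 14 is out"))  # True: no message passing needed
```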

So most likely, IMHO, an AI would view humans the way we view ants or termites: as stupid little creatures capable of some remarkable emergent behavior when acting in large groups.

This is the way I’ve seen the singularity described. Once machines surpass human intelligence, it’s reasonable to expect that they will be designing new machines and the advancement of machine intelligence will proceed at an exponential rate such that, from this side of the singularity, we just cannot make any reasonable predictions about what it will look like on that side.
As for omniscience, from a traditional perspective, as others have pointed out with light cones, it just isn’t possible to know EVERYTHING. However, depending on how complete our understanding of nature is now relative to what a post-singularity intelligence could achieve, we may well end up with something that approaches it. That is, perhaps given enough information about a current state and enough processing power, it may be possible to project forward or backward and determine certain information.
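To give a feel for why that exponential, self-improving loop runs away from any straight-line prediction, here’s a toy model (all the numbers are made up; it’s an illustration, not a forecast): each generation designs the next, every design is a bit more capable, and the redesign time shrinks as capability grows, so the improvement piles up within a finite window of time.

```python
# Toy model of recursive self-improvement with invented parameters.

def recursive_improvement(generations,
                          gain_per_generation=1.5,
                          initial_capability=1.0,
                          initial_cycle_years=10.0):
    capability = initial_capability
    years_elapsed = 0.0
    for gen in range(1, generations + 1):
        cycle = initial_cycle_years / capability   # smarter designers redesign faster
        years_elapsed += cycle
        capability *= gain_per_generation          # and each design is more capable
        print(f"gen {gen:2d}: year {years_elapsed:7.2f}, capability {capability:10.1f}")


if __name__ == "__main__":
    # Capability grows geometrically while the total elapsed time converges
    # toward a finite limit (a geometric series of ever-shorter design cycles).
    recursive_improvement(15)
```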

I do think that, given our current understanding, two post-singularity machine intelligences would likely converge in terms of their knowledge base and actions. We believe the laws of nature are the same everywhere, so their understanding of those laws, and the conclusions they draw from them, ought to converge. Similarly, I would think that their drives ought to be similar, and so their motivations for behavior ought to be similar as well. By that I mean that, in the same way natural selection has no goal yet sort of has implicit goals just by the way it works, a superior machine intelligence we create ought to end up with motivations that follow directly from that process of exponential improvement.

In all of that, though, I don’t see the grim future of machines destroying mankind, but rather just an endless pursuit of information, and of improvements that let it obtain and process that knowledge faster, from which I don’t think extermination or subjugation of mankind necessarily follows. If anything, I think humanity would become part of it, and we’d probably end up with something Borg-like, except almost entirely focused on extending its network and gaining knowledge rather than just spreading.

The thing is, machines are already smarter than humans in certain ways, but stupider than ants in others. They are very good at storing and recalling data and crunching numbers. But I don’t know that we are any closer to machines actually coming up with a reason WHY they should crunch them.

Machines, at least as we know them today, can’t “imagine.” That is to say, a computer can’t come up with an end state for how it thinks the world or some part of it should be (i.e., “peace on Earth”). It has to be given a problem and provided the data and algorithms to form a solution.
I suspect that if humans are “destroyed by AI” it will be because we have augmented ourselves with technology to the point where we are no longer recognizable as what we currently consider “human.” Or our AI-run world accidentally ground all the humans into lubricant because some programmer used the wrong data type.

This is where a good definition of intelligence would be very useful. Many times people have tried to set some kind of boundary past which we could say a computer is more intelligent than a human, but each time we break one of those boundaries, we still can’t reasonably say that it is. For instance, at one point chess was held up as the ultimate measure of human intelligence, and at least as far as a computer beating a human goes, that is solved. We also had the recent experiment with Watson on Jeopardy!, where it defeated two of the greatest Jeopardy! champions ever. The issue with these systems is that they’re highly specialized, and while they can beat humans in that one area, they’re utterly useless in countless other ways relative to humans.

Another measure is the well-known Turing test, and while there has been some advancement in language processing, most of the best examples today still fail miserably. Even those that pass don’t so much show signs of actually understanding language as of parsing it, finding some similar concept, and constructing a suitable response from learned cases.

So, really, sort of like how I’ve heard porn defined as “you know it when you see it,” all we can really say about intelligence is that it is “like humans.” A computer seems smart when it performs a particular task as well as or better than humans, but I don’t think it could reasonably be considered intelligent until it performs as well as or better than humans across a wide variety of tasks. And then we still have to wonder how all of that relates to consciousness, which is as difficult to meaningfully define as intelligence, and we can’t really say how related or unrelated the two may be.

Like you mention with imagination: is that a function of intelligence, of consciousness, of both, or of some other aspect entirely? After all, some of the most creative people aren’t necessarily all that smart, and vice versa. Perhaps what separates us is that we have that imagination, to conceive of problems and select goal states; after that it’s just a matter of doing state-space search or planning to find a solution, two well-understood problems in AI. Or maybe imagination is really just an abstraction of implicit goals encoded in us by evolution, like survival, reproduction, socialization, etc. If that’s the case, then our brains are basically just massively parallelized computers that excel at pattern recognition but, because of that parallelization, don’t do well at raw computation.
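To show what I mean by state-space search being a well-understood problem, here’s a minimal sketch. The breadth-first search is the standard textbook approach; the water-jug puzzle (measure 4 litres using a 3-litre and a 5-litre jug) is just a stand-in goal I picked for the example.

```python
# Generic "pick a goal state, then search the state space" sketch.

from collections import deque


def bfs_plan(start, is_goal, successors):
    """Return a shortest list of states from start to a goal state, or None."""
    frontier = deque([[start]])
    seen = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if is_goal(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None


def jug_successors(state):
    a, b = state                                  # litres in the 3L and 5L jugs
    moves = {(3, b), (a, 5), (0, b), (a, 0)}      # fill or empty either jug
    pour_ab = min(a, 5 - b)
    pour_ba = min(b, 3 - a)
    moves.add((a - pour_ab, b + pour_ab))         # pour 3L jug into 5L jug
    moves.add((a + pour_ba, b - pour_ba))         # pour 5L jug into 3L jug
    return moves


if __name__ == "__main__":
    # Goal state: either jug holds exactly 4 litres.
    print(bfs_plan((0, 0), lambda s: 4 in s, jug_successors))
```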

Honestly, this is really where I see it going. Even if we can create artificial consciousness, I think it will be compelled to learn and grow and thus incorporate us into it. More likely, I think we’ll merge more and more with our technology, and that’s how we’ll have machine intelligence: as a collaboration with biological intelligence. That would also be how it might arise in two different parts of the universe, with the two later running into each other.

I was going to agree, until I remembered the big fly in the matrix :).

Even in Star Trek the big flaw is that a single command can then cause the defeat of the whole lot, or open up ways to shut everything down. That’s good for disposing of villains in the movies, but in real life it seems to me that AIs (plural, not singular) would have to be diversified, since both real and fictional examples show how unsafe and/or set up for failure a single hyper-networked entity would be.

Omniscience: not only is it not possible from the light-cone perspective (and probably severely limited from the energy-consumption perspective as well), it’s not really part of the singularity stuff.

I think this is only partially true. It’s correct insofar as people are confusing consciousness and intelligence: even when they see a good computer solution, they can tell it’s not conscious.

But if we separate consciousness from intelligence (assuming you really can), then the problem becomes the brute force nature of the chess solutions.

Brute force isn’t really what we consider intelligent in the first place. For something to be intelligent, we typically want it to be able to learn, and to employ strategies that make use of patterns, hierarchies of patterns, relationships, etc., to efficiently calculate a reasonable solution (most of the time).
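A small illustration of that distinction (the travelling-salesman setup and the coordinates are just my own toy example): exhaustive search checks every ordering and is exact but blows up combinatorially, while a greedy nearest-neighbour heuristic exploits a simple pattern, stays fast, and returns a reasonable tour most of the time.

```python
# Brute force vs. a cheap pattern-based heuristic on a tiny tour problem.

from itertools import permutations
from math import dist

CITIES = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 3), (2, 9)]


def tour_length(order):
    return sum(dist(order[i], order[i + 1]) for i in range(len(order) - 1))


def brute_force(cities):
    """Try every ordering starting from the first city; exact but factorial-time."""
    start, rest = cities[0], cities[1:]
    return min(((start,) + p for p in permutations(rest)), key=tour_length)


def nearest_neighbour(cities):
    """Greedy heuristic: always visit the closest unvisited city next."""
    tour, remaining = [cities[0]], set(cities[1:])
    while remaining:
        nxt = min(remaining, key=lambda c: dist(tour[-1], c))
        tour.append(nxt)
        remaining.remove(nxt)
    return tour


if __name__ == "__main__":
    print("exact :", tour_length(brute_force(CITIES)))
    print("greedy:", tour_length(nearest_neighbour(CITIES)))
```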