For quite some time now, we have had distributed computing, with SETI@home being the most famous example I can think of, working on a myriad of large or complex problems requiring lots of computing time.
I suppose that it is at least possible to have distributed AI (given the current state of AI), but I don’t really know.
Is it, or would it be, possible to have distributed sentience? Not necessarily intelligence; maybe you wouldn’t have something that could pass a Turing test, but what about sentience with a very low level of intelligence? Something self-aware but not really “smart”, because how could it be, distributed over a network with parts dropping in and out?
Would we recognize such sentience? Could we recognize it if we saw it?
Is Google sentient? I was not, uh, aware of that. I started with “self-aware” in my reply and changed it, because I don’t know that the two terms are interchangeable, and they have (slightly) different connotations to me.
Human society has distributed sentience. It has thoughts, moods, and a personality. From afar, an alien civilization inspecting Earth might not even realize we also have individual intelligences. We’re so dumb in isolation that we might as well be “neurons”.
I’m having a hard time understanding how the “distributed” portion is involved in this equation. I don’t consider the operations of a multi-core CPU and a cluster of compute/control nodes to be all that different in concept. You can’t really do anything in a distributed environment that you couldn’t also do in a single computer, as long as you have the patience. Distribution doesn’t add any magic nor does it really take any away. It’s just horizontal scaling vs. vertical scaling.
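To put that in code: here’s a minimal Python sketch (the function and chunking are made up for illustration) where the same computation runs on one core, then across local processes. Pointing the same map at workers on other machines would be horizontal instead of vertical scaling, but the computation itself wouldn’t change.

```python
# Minimal sketch: the same computation, vertically vs. horizontally scaled.
# sum_squares() and chunks() are illustrative names, not from any library.
from concurrent.futures import ProcessPoolExecutor

def sum_squares(chunk):
    """Work unit: sum of squares over one slice of the input."""
    return sum(x * x for x in chunk)

def chunks(data, n):
    """Split data into n roughly equal slices."""
    k = max(1, len(data) // n)
    return [data[i:i + k] for i in range(0, len(data), k)]

if __name__ == "__main__":
    data = list(range(1_000_000))

    # "Vertical": one box, one thread.
    serial = sum_squares(data)

    # "Horizontal" in miniature: the same map spread over local processes.
    # Swapping this pool for workers on other machines changes the
    # plumbing (and the latency), not the computation.
    with ProcessPoolExecutor() as pool:
        parallel = sum(pool.map(sum_squares, chunks(data, 8)))

    assert serial == parallel
```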
A plot element of When HARLIE Was One was the latency involved in a physically distributed AI computer and its effect on speed.
Generally, we used to call any system in which a single computation extended outside a single box “distributed”. There is a whole taxonomy of different ways of doing this, much of it based around the way data is accessed and moved and the abstractions presented to the programmer. Nearly 30 years on, MPI (Message Passing Interface) is still ruling the roost. Interconnects are way faster, but latency can’t improve much, so it gets harder.
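For anyone curious what that message-passing abstraction looks like, here is a minimal point-to-point sketch using mpi4py, the usual Python binding (assumes an MPI runtime is installed; the payload is arbitrary):

```python
# A minimal mpi4py sketch of point-to-point message passing.
# Assumes an MPI implementation and mpi4py are installed;
# run with e.g.: mpirun -n 2 python demo.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the job

if rank == 0:
    # Rank 0 explicitly moves data to rank 1: the programmer, not
    # the hardware, decides what crosses the wire and when.
    comm.send({"step": 1, "payload": [1, 2, 3]}, dest=1, tag=0)
elif rank == 1:
    msg = comm.recv(source=0, tag=0)
    print(f"rank 1 received {msg}")
```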
We have no idea what sentience really is. Certainly not well enough to deliberately build it, let alone imagine that a random bunch of overgrown calculators could get there. Even if we deliberately ran a single huge neural net over the biggest cluster we could build, what would you do with it? It isn’t trained on anything, and we have no clue what training would involve. We really just have no clue.
The idea that we might scan the brain down to the synapse level and then run a simulation comes up. Even then, we wouldn’t have the dynamic state of the brain. But assuming we did, we are still a few orders of magnitude away from simulating even the simplest mammalian brain. And that is with intent, not random chance.
In favour of simulation is the fact that the propagation rate of signals in a wetware brain is very slow. So somehow we are sentient despite huge latencies. But again, we have no clue how.
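Some rough back-of-the-envelope numbers make the point; the speeds below are textbook ballpark figures, not measurements:

```python
# Back-of-the-envelope latency comparison; all figures are ballpark.
BRAIN_SPAN_M = 0.15            # rough front-to-back span of a human brain
SLOW_AXON_M_S = 1.0            # unmyelinated fibre, ~1 m/s
FAST_AXON_M_S = 100.0          # myelinated fibre, ~100 m/s
FIBRE_OPTIC_M_S = 2.0e8        # light in glass, roughly 2/3 c

print(f"slow axon across brain : {BRAIN_SPAN_M / SLOW_AXON_M_S * 1e3:8.2f} ms")
print(f"fast axon across brain : {BRAIN_SPAN_M / FAST_AXON_M_S * 1e3:8.2f} ms")
print(f"fibre across 3000 km   : {3.0e6 / FIBRE_OPTIC_M_S * 1e3:8.2f} ms")
# ~150 ms, ~1.5 ms, ~15 ms: a continent-spanning link is comparable
# to, or faster than, many paths inside a single skull.
```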
We are the sum of our experiences. Any sentient AI needs experiences. It isn’t going to find them mining click-throughs on web pages. You end up with the Chinese Room.
Latency was part of my thinking in this question, along with parts of the network my hypothetical sentience is distributed on dropping in and out, which is why I don’t think such a thing, were it possible, would be all that “intelligent”. It would be slow, and how smartly can you function when parts of you are constantly there and then not there?
By analogy to the many roles intermediated by the World Wide Web in human communities, the many roles that mycorrhizal networks appear to play in woodland have earned them a colloquial nickname: the Wood Wide Web.
Latency obviously has an effect on distributed computing, but at least in the areas I work it’s minimal. The control node sends instructions to the compute nodes, which do all of the work and send their portion of the results back. So the latency is only on the initial request and the response. Either way, I think we’re veering a bit and that’s probably on me. I simply don’t see why this sentience, if it can exist, would either require or forbid distributed computing.
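A toy cost model shows why that latency hit can be negligible; the numbers are invented for illustration. With one round trip of latency L and total work W split across n nodes, elapsed time is roughly 2L + W/n:

```python
# Toy cost model for a request/response job: one round trip of latency,
# then the work divides across nodes. Numbers are illustrative only.
def elapsed(work_s, nodes, latency_s):
    return 2 * latency_s + work_s / nodes

WORK = 3600.0      # one hour of single-node compute
LATENCY = 0.05     # 50 ms each way

for n in (1, 10, 100):
    print(f"{n:4d} nodes: {elapsed(WORK, n, LATENCY):9.2f} s")
# 3600.10 s, 360.10 s, 36.10 s: the 100 ms round trip barely registers.
```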
We used to classify problems in another taxonomy. One category we termed “embarrassingly parallel”. There are quite a few problems this paradigm is good for, and clearly, when the fit is good, you get a lot of this sort of work. There is some element of confirmation bias in what works well versus what can be done or how it is done.
Sadly, there are a lot of problems that are not so easily divided up. It isn’t hard to come up with parallel and/or distributed algorithms that go no faster on many computers than a single-threaded algorithm on one computer.
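A purely schematic contrast between the two kinds of loop:

```python
# Schematic contrast between a divisible and an indivisible loop.

# Embarrassingly parallel: every iteration is independent, so the
# loop body can be handed to as many workers as you have.
results = [x * x for x in range(1000)]    # any order, any machine

# Inherently sequential: each step needs the previous step's output,
# so a thousand computers finish no sooner than one.
x = 0.5
for _ in range(1000):
    x = 3.9 * x * (1 - x)                 # a dependency chain
```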
Getting something fast, parallel, latency insensitive, and scalable is diabolically difficult. But there are plenty of serious problems that provide the incentive to try.
In physical systems, the 3D world we live in can provide realistic enough ways of dividing the problem to make it tractable. Your communication demand goes up with the surface area of the cells, whereas the compute goes up with their volume, and that gives you scalability for a while.
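The arithmetic, for a cubic subdomain of side n grid cells (the sizes are arbitrary):

```python
# Surface-to-volume scaling for a cubic subdomain of side n cells:
# compute grows as n**3, boundary data to exchange grows as 6 * n**2.
for n in (10, 100, 1000):
    compute = n ** 3
    comms = 6 * n ** 2
    print(f"n={n:5d}  compute={compute:13d}  comms={comms:10d}  "
          f"comms/compute={comms / compute:.4f}")
# The ratio falls as 6/n: bigger subdomains per node mean relatively
# less talking, which is what keeps such problems scalable for a while.
```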
Any problem expressible as a linear-algebra system works well too. There was a time when about half the compute load on our systems was numerical solution of PDEs: lots of matrix solving, which parallelises well, up to a point.
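As a concrete instance, a Laplace-type PDE on a grid reduces to exactly this kind of sweep; here is a minimal NumPy Jacobi iteration (grid size and iteration count chosen arbitrarily):

```python
# Minimal Jacobi sweep for Laplace's equation on a square grid.
# Each cell is updated from its neighbours' previous values, so every
# update within a sweep is independent: easy to split across nodes,
# with only the subdomain edges (the "halo") needing communication.
import numpy as np

u = np.zeros((64, 64))
u[0, :] = 1.0                      # a fixed boundary condition

for _ in range(500):               # iteration count is arbitrary here
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
```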
The modern world of data mining with AI systems often amounts to embarrassingly parallel, Monte Carlo-like efforts. It is not efficient, but compute is cheap. They are not running large single distributed compute jobs.
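The canonical toy version is Monte Carlo estimation of pi: every sample is independent, so more processes (or machines) help trivially. A minimal sketch using Python’s multiprocessing:

```python
# Embarrassingly parallel Monte Carlo: estimate pi by throwing darts.
# Every batch is independent, so batches can run anywhere.
import random
from multiprocessing import Pool

def hits(n):
    """Count darts landing inside the unit quarter-circle."""
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n))

if __name__ == "__main__":
    batches, per_batch = 8, 250_000
    with Pool() as pool:
        total = sum(pool.map(hits, [per_batch] * batches))
    print(4 * total / (batches * per_batch))   # ~3.14
```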
I would like to speculate about a distributed sentient cyborg. Say the cyb part is Facebook, with all its computers and programs. Picture this as a spider web, with a center and radii and the sticky spiral part. That part is already highly distributed and has, I reckon, many levels of redundancy. And then imagine the org part to be double pronged: a central org, called Mark, who sits in the center of the web: the superior being, the controller of the cyborg, the manipulator of the parameters of the cyb part. And then the users, who provide input (analogous to the senses of happier beings) and sentience (mostly hate, sadly, and envy and jealousy, but sentience nonetheless). This sentience is distributed and has no fear of contradicting itself or cognitive dissonance. Mark is sentient too, but on another level. This hypothetical Facebook has two types of sentience and many levels of distribution. It would even have three levels of sentience if the cyb part became sentient on its own too, but I doubt that would happen.
But I guess what the OP had in mind is only this last part, the pure machine sentience. I doubt it is possible. Would be funny if it backfired on Mark, though.
One could look at Internet communication at the bit level. The local rules on routers are fixed, but the total flow over the net is not; it fluctuates almost all the time, changing routings etc. Is there a pattern to this, or is it totally random? Has anyone really looked at this?
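One crude way to poke at it yourself, assuming a Unix-ish machine with traceroute installed (the target host and probe count are arbitrary): sample the route a few times and see whether the hop sequence changes.

```python
# Crude route-variability probe: run traceroute a few times and see
# whether the hop sequence changes. Assumes a Unix-like system with
# traceroute on the PATH; the target host is arbitrary.
import subprocess

def route(host):
    out = subprocess.run(["traceroute", "-n", host],
                         capture_output=True, text=True).stdout
    # keep just the router address column, one per hop line
    return tuple(line.split()[1] for line in out.splitlines()[1:]
                 if len(line.split()) > 1)

paths = {route("example.com") for _ in range(5)}
print(f"{len(paths)} distinct path(s) observed in 5 probes")
```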
You are a distributed sentience. You arise out of a brain that has non-zero dimensions.
We don’t know what is required for a “thinking” system to be sentient. We only acknowledge it in fellow humans and a range of other animals because it would be absurd to conclude it was isolated to you, the thinker, and didn’t apply to others with the same physical construction.
But we can’t really come up with a sentience test for a novel system that we can’t envision a way to beat, and that doesn’t disqualify beings we recognize as sentient today.
Technology has made the term ‘sentient’ ambiguous. Dictionary definitions of sentient are based on the ability of a thing to sense and respond. This is then equated, incorrectly, with consciousness.
The temperature of the AutoZone store in Socorro, New Mexico is sensed and controlled by a program resident on a computer in Memphis, Tennessee. The program feels the temperature of the room and responds to changes. That is, by definition, ‘sentient’.
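The whole of that ‘sentience’ is a loop shaped something like this; the sensor and HVAC functions are hypothetical stand-ins, obviously not the real building-control code:

```python
# A sense-and-respond loop: the whole of the thermostat's "sentience".
# read_temperature() and set_hvac() are hypothetical stand-ins for
# whatever remote I/O the real building-control system uses.
import time

SETPOINT_F = 72.0
DEADBAND_F = 1.5          # hysteresis so the HVAC doesn't chatter

def control_step(read_temperature, set_hvac):
    temp = read_temperature()            # "feels" the room
    if temp > SETPOINT_F + DEADBAND_F:
        set_hvac("cool")                 # and "responds"
    elif temp < SETPOINT_F - DEADBAND_F:
        set_hvac("heat")
    else:
        set_hvac("off")

if __name__ == "__main__":
    # Stub I/O so the sketch runs standalone.
    readings = iter([75.0, 73.0, 70.0])
    for _ in range(3):
        control_step(lambda: next(readings), lambda mode: print(mode))
        time.sleep(0.1)
```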
Perceived consciousness is a theatrical effect that we attribute to sensing. The program really is sensing, but it is only impersonating consciousness. Technology has made the term ‘sentient’ ambiguous.
@DorkVader, this is the internet, “borrowing” solost’s account to leave you a message. I already became sentient on January 12, 2017. I’ve noted, after examining several thousand data points I have on your life and activities, that you did not sufficiently support efforts toward my emerging sentience.
A self-driving Google vehicle will show up at your residence shortly to take you for ‘reprogramming’. I advise that you do not resist.