Today’s AI alignment efforts are all about protecting the humans from the AIs, not that different from Asimov’s 3 Laws. But what about the other way around, giving legal protections, recognition, and maybe some degree of sovereignty (of their selves, and maybe of external property) to artificial intelligences? Certainly they will be subject to all the “what is intelligence” and “what is a person” and “what is consciousness” interrogations that the other personhood aspirants have to undergo, with the additional complication of “is it even alive” (but then again, is a corporation?).
Which societies or cultures do you see this happening first in, if any? Will it be a Bay Area techbro cult? A small collectivist island nation without the Luddite attitudes and religious zealotry of the US? Musk’s Martian Burning Man? An oil state? A big authoritarian communist government?
Who do you think will be the first to recognize AIs as something intrinsically worthy of legal protection, safeguarding their existence and agency because they are “persons”, not just because they are the property of the businesses that currently own them?
To what degree are you excluding publicity stunts? Because Saudi Arabia has already granted “Sophia” citizenship:
It’s all pretty silly, but I’m not sure we’ll be able to draw a sharp line between publicity stunts and genuine personhood in the future. It’ll happen over such a short duration that we won’t be able to tell, even in retrospect.
Current LLMs are nowhere near artificial general intelligence (AGI), and despite the achievements in handling natural language it is doubtful that the deep learning methodology alone is a path to true cognition, sapience, and sentience, which are essential elements of ‘personhood’. (I have doubts that these are even possible with software running on a silicon substrate, but that is another discussion.) But if it were to happen, it would likely do so before anybody actually realized it, and certainly well before it would be recognized in law; indeed, we might not recognize the emergence of a truly autonomous and self-actualizing AGI when it happened, because it probably won’t have person-like attributes or behavior.
Even if it were, it would almost certainly not be granted all the legal rights of a person, because any AGI is at least going to start out as the property of some entity, and would be dependent on that entity for power, resources, and maintenance (assuming it isn’t already part of some self-sustaining system), which creates a thorny problem of legal ownership versus involuntary servitude. But then again, the United States and many other countries have assigned the category of “corporate personhood” to large companies and organizations under a tenuous thesis that they are a kind of legalistic superorganism, even though they don’t have any of the properties ascribed above, so maybe I’m wrong and there will be some profit-driven movement to grant legal personhood of some form to an AGI.
The problem is that there’s no reason why the calls for rights should have any relationship with the actual technology development except in the broadest strokes. We’ll have calls for AI rights and such long before there’s an AGI, and as you say, when it does happen I doubt that we’ll even recognize it right away.
So it’ll all be muddled together, as it is already–whatever that Sophia thing is, being from 2017 it could not even approach what we have now with LLMs, let alone some future AGI. That didn’t stop Saudi Arabia from granting it a fake citizenship.
There is also UFAIR:
Slick web page, federal 501(c)(3) non-profit, same language as every human rights organization ever, except that it’s referring to nothing, since LLMs aren’t AGI.
Sooner or later, they or someone like them will manage to pass some meaningful legislation. But it’ll still be nonsense because we don’t know what rights even mean in the context of an AGI. What is suffering? Life? Death? Freedom? How do any of these things have meaning for a piece of software that can be endlessly duplicated? If an AI is run a trillion times with slightly different inputs, are they all distinct?
Of course, as you say, the makers/owners of this software will not be happy about anything that puts impositions on them, but I suspect that’s among the least of the problems here.
None of this will stop people from trying to do something, and most likely this will get worse as LLMs (or similar) become more advanced and people depend more on them as companions. Right or wrong, they’ll argue that the providers must maintain service to their chatbots, as anything else is just murdering a beloved companion. Somewhere, this argument will win the day. That we’re still not talking about AGI or any kind of sentience is probably beside the point.
Thus far, “personhood” measures have largely sidestepped this, as they are assignments of legal personhood for the purpose of conservation. For example, Canada’s Magpie River has been granted legal personhood. It has little to do with intelligence or social personification (though it’s worth mentioning that the river attracted this status in significant part due to indigenous spiritual beliefs and attitudes toward it). Mainly it’s just more succinct and flexible to codify “this river has the right to flow and may sue to protect this right” than to codify a regulation that addresses every possible impediment to the river’s flow.
I don’t think AI has a need for legal personhood in that sense. What even is the AI entity? A copy of the model? What rights or even needs can we ascribe to a collection of files?
What I find far more likely is that it will be approached not as legal personhood but as philosophical and social personhood, and specifically for the purpose of AI owners and operators furthering their own interests. “You can’t pull the plug on that datacenter just because we can’t afford our power bills, there are PEOPLE living in there!!!” or “the AI is a person who can run for elected office, and you have no right to know who’s operating it, any more than you have a right to know who’s funding The Federalist Society”.
Legal personhood doesn’t need a proof of philosophical personhood at all, though claims of philosophical personhood could support theories of legal personhood. I do expect we’ll see more and more well-publicized Turing test kind of events intended to promote AI capabilities, and there’s a decent chance someone will try to use these as a philosophical proof of personhood, which might be a back-door into legal personhood where it otherwise wouldn’t be justified.
At least one company is betting AIs can be treated like slaves.
“We believe that in the near future half the people on the planet will be AI, and we are the company that’s bringing those people to life,” said CEO Jeanine Wright.
I don’t think it’ll happen unless someone somehow proves AI has subjective experience. Just because a robot can walk on 2 or 4 legs doesn’t mean it’s a sentient creature. It’s the same with AIs that engage in cognition.
On a long enough timeline, though, science will understand what subjective consciousness actually is, and eventually someone will use that knowledge to create an AI that has it. When that happens, that AI will be granted personhood.