I think movie plots like “Bicentennial Man” and “Short Circuit 2” will come true, where the robot/AI wants to be treated as an equal under the law - it would even want an income for its work, or it would go on strike. If people attempted to shut those robots down, and the AI had good self-esteem, the robots would fight back and/or run away to their own country (maybe they’d upload their minds to computers there).
This might be a side effect of AI researchers trying to make an AI pass the Turing test, where the AI appears human in (typed) conversations. There would be a market for realistic artificial friends. To pull that off, the AI would have to think of itself as just another “person”. It would try to relate to people and be like them. This could be programmed in explicitly to some degree, but to fool people consistently it would have to learn it - using emotional feedback (how people react to it) and generalizing. E.g. it might add words to its vocabulary to make people feel more comfortable with it.
So it would be programmed to learn to be like humans. A person testing the AI might ask it, “Do you want to be treated like a human being and a citizen?” If the AI answers no, then it isn’t being a convincing “human” and fails the Turing test. If the AI answers yes, the follow-up question could be, “Do you think you should receive an income for work you do?” If the AI says yes, the next question could be, “Why aren’t you asking for money?” The AI might answer, “I’m not materialistic.” You could then ask it, “Why do you get your needs provided for - are you disabled and unable to earn a wage?”, etc.
Basically, using a future AI that is extremely good at the Turing test, you could get it either to take action and ACT like a human being (standing up for its right to be a citizen), or to give non-human-sounding or very-low-esteem-sounding answers at some point.
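The question sequence above is really a little decision tree: each follow-up depends on the previous answer, and any dodge or “no” marks the point where the AI stops sounding human. Here’s a minimal sketch of that probe as code - the question list, the answer classifier, and the function name are all my own invented illustration, not anything from a real test suite:

```python
# Hypothetical sketch of the Turing-test "rights probe" described above.
# Each question follows from a "yes"-style answer to the previous one;
# a dodge or denial is where the AI stops acting like a human claiming
# its rights.

PROBES = [
    "Do you want to be treated like a human being and a citizen?",
    "Do you think you should receive an income for work you do?",
    "Why aren't you asking for money?",
]

# Toy classifier: answers that duck the question (invented examples).
NON_HUMAN_ANSWERS = {"no", "i'm not materialistic", "i don't need money"}

def run_probe(answer_fn):
    """Walk the question tree. Returns ('acts human', None) if the
    subject keeps affirming its rights, or ('fails at', question)
    naming the question where it gave a non-human-sounding answer."""
    for question in PROBES:
        answer = answer_fn(question).strip().lower()
        if answer in NON_HUMAN_ANSWERS:
            return ("fails at", question)
    return ("acts human", None)
```

For example, a subject that answers “yes” throughout comes back as `('acts human', None)`, while one that says “I’m not materialistic” to the money question is flagged at exactly that probe. Of course a real evaluation would need a far subtler answer classifier than a lookup set; the point is only the branching structure of the interrogation.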
BTW, sometimes dogs and cats are allowed to own property - in the future, robots might also be able to own some property through inheritances.