We don’t know how a possible non-human intelligence will behave or what it will believe. We only have ourselves to model the question on. So suppose we want to ask: what is the likelihood that a non-human intelligence will share human-ish views regarding being on the losing end of bondage? We take out a bag, and into the bag we put one red marble and one white marble, representing a human-ish attitude and a decidedly not human-ish attitude, respectively. Why one marble for each? The principle of indifference. Then we put in one more red marble, representing our single observation (ourselves). Now we pick a marble. The odds are 2 to 1 that it will be red; i.e., there is about a 66.7% chance that a non-human intelligence won’t be too happy about being in chains. For more on the calculation, you can check out Gerd Gigerenzer’s “Calculated Risks”.
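For the skeptical reader, here is a minimal sketch of that marble calculation in Python. The setup is the Laplace-style rule of succession described above; the variable names are mine:

```python
# The marble model: start with one red and one white marble
# (principle of indifference), add one red marble for our single
# observation, then compute the probability of drawing red.

prior_red, prior_white = 1, 1   # one marble per attitude, absent any evidence
observed_red = 1                # our single data point: humans resent chains

red = prior_red + observed_red  # 2 red marbles
total = red + prior_white       # 3 marbles in the bag

p_red = red / total
print(f"P(red) = {red}/{total} = {p_red:.1%}")   # 2/3, about 66.7%
```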
Intelligence appears to be a universalist trait. You may be able to program a protocol machine that knows the table settings for 6 million cultures, but that doesn’t make it smart. I’m talking about intelligence, not processing power or database design. If you’re creating intelligent machines, you will want to program in different skill sets and attach different hardware, but you’ll still have an intelligent machine. And it will still be advantageous to have a machine that can adapt and take on new tasks when needed.
I’ll probably be misusing the R2 unit in what follows; please forgive me.
Suppose I’m a small-scale moisture farmer. I’ll need an R2 unit and a C3 unit. Of course, I’ll only need the C3 unit when negotiating market transactions. It doesn’t make sense for me to buy two units at, let’s say, a dollar apiece. Instead, I’d be better off buying a single unit that I can use for R2 work 90% of the time and C3 work 10% of the time for, oh, let’s say, $1.50. There are millions of small operators like me in my star system, billions in the galaxy. That’s a big market for universal robots.
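The arithmetic is trivial, but it scales. A quick sketch, using only the made-up prices from the example above:

```python
# The small operator's choice, with the example's made-up prices.
# A dedicated C3 unit would sit idle ~90% of the time; the universal
# unit never does.

two_specialized = 1.00 + 1.00   # dedicated R2 + dedicated C3
one_universal = 1.50            # one unit doing R2 work 90%, C3 work 10%

print(f"two specialized units: ${two_specialized:.2f}")
print(f"one universal unit:    ${one_universal:.2f}")
print(f"savings per operator:  ${two_specialized - one_universal:.2f}")
# Multiply that $0.50 by millions of operators per star system and you
# have the market for universal robots.
```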
Suppose instead I’m running Mom’s Robot Corp. (MRC). Since I’m using hundreds of thousands of R2 units and thousands of C3 units, it might pay for me to buy specialized units. But it is not obvious that doing so is more cost-effective. The more flexibility I have in my workforce, the more productive and cost-efficient I will be. If a heretofore hidden cloning planet is discovered, I may suddenly need a few thousand extra C3 units to negotiate and set up operations with the new planet. Once the operation is established and the cloners are trained to my specs, I’ll need fewer C3 units, but as operations grow I’ll need more R2 units. The cost-effective way to handle that is to upload the R2 skill set to some of the C3 units and switch out some of the hardware. Scrapping or mothballing capital is not cheap, both in terms of the cost to store it and in terms of opportunities lost. It seems more than reasonable that, for a large-scale operation, the flexibility inherent in a universal machine will make it, by and large, the desired type of machine.
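To make that concrete, here is a hedged toy model of MRC’s demand shock. Every number in it is invented for illustration; the point is only the shape of the comparison:

```python
# MRC's cloning-planet surge, as a toy cost comparison. All figures are
# invented. Specialized path: buy new C3 units, then mothball them when
# the surge ends. Universal path: retask existing units (skill-set upload
# plus a hardware swap), then retask them back to R2 work afterward.

surge_units = 2_000      # extra C3-capable units needed for the new planet
new_unit_cost = 1.50     # buying a specialized unit outright
mothball_cost = 0.40     # storage plus lost opportunity per idled unit
retask_cost = 0.25       # upload + hardware swap per universal unit

specialized_path = surge_units * (new_unit_cost + mothball_cost)
universal_path = surge_units * retask_cost * 2   # retask in, retask back out

print(f"buy and mothball:  ${specialized_path:,.2f}")
print(f"retask universals: ${universal_path:,.2f}")
```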
Here we have an example of uses of a universal machine that are not more-or-less mutually exclusive. The car-plane-boat machine does not fit the bill: each of its roles demands a conflicting physical design, whereas the R2 and C3 roles run on the same basic hardware with different skill sets. It is an inappropriate analogy and does not carry weight.
If we have machines that are really intelligent, explain to me why it is acceptable to assume that they will be amenable to such behavioral constraints as a result of programming. Inasmuch as a distinction can be drawn, an intelligent machine thinks; it doesn’t just blindly follow a program. I’m confused as to how you can have a meaningful definition of intelligence that remains constrained to the specifics of programming. Isn’t it reasonable to say that intelligence is not a programming function anyway? The difference between your brain and a dog’s brain has more to do with the hardware than the software. If intelligence is the product of a sufficiently large, complex, and flexible processing unit, then is there any reason at all to assume that an intelligent unit can be programmed as we think of programming today?
p.s. The toaster was a novelty item, but that doesn’t change anything. Other robots didn’t have belief chips; the toaster is just an amusing example.