Would a robot society have or need an economy or money?

In science fiction there are planets or civilizations that are made up entirely of robots and other machines with no biological beings running the show. Would these societies theoretically have or need an economy or some unit of trade like money?

The way I see it, the mining robots would just mine, the manufacturing robots would just stay in the factory all the time, the soldier robots would just defend non-stop, and everything would just hum along without anyone wanting or needing anything like a house, car, or personal items of any sort. I guess the only thing traded is work for energy and maintenance, but would it be fair to call that trade when the receiver isn’t really making a decision or choice? It’s all automatic (sorry, I took only one economics class in my entire life). I suppose they would need to participate in some kind of economy/money if they wanted to trade with others.

I thought about all this stuff because I was thinking about what a tough opponent a robot society would be in a war because they would never run out of money!

Are the robots greedy? Do they have free will? A sense of individuality? Humans don’t need money either. Hence communism. Which didn’t work well for several reasons, one of which was human nature. But maybe the robots have chips that make them all selfless, obedient to the robot government mainframe, and always motivated to labor. And there are other monetary systems evolving for humans too. Things are getting fairly close to credit-based. And there are some reputation-based systems getting some awareness now, like Whuffie. Then there are some older systems like monarchies and serfdom and slavery. There’s plenty of other options, old and new, besides capitalism.

Even in a purely top down system though, there would need to be some sort of accounting of resources and positions so that you don’t send all your batteries to just three robots, and you don’t have all your robots mining but none building anything with the material.

The implication in this question seems to be that a money economy is necessitated only by human psychology. Many aspects of economics are driven by human psychology but IMHO not the underlying need. Even if the only needs of a robot are energy and maintenance (are you assuming that all necessary tools and equipment are the robots themselves? Or only that the robots replace humans?) “he” has to get them from somewhere and so will have to trade. The mining robots would need power, spare parts, tools, transportation, and would trade the mineral ore to the smelting robots, who would trade it to the steel-making robots, who would trade it to the car-making robots, just like people do today.

I am not sure why replacing humans with robots would obviate a money-based economy; trade would still be necessary and money facilitates trade. However, I think the concept of property ownership would be different or non-existent, so probably no stock market, no housing bubbles, etc.

Yes, they would likely have some sort of economy. And they would probably have some form of “money”.
Economics is basically the study of allocating finite resources among the various wants and needs of society. Money is simply the mechanism by which we communicate information about value.

A robot society will have various wants and needs and it won’t have infinite resources. How many soldier robots would it need vs manufacturing robots? What should be manufactured?

The one major difference between a robot society and a human society is that information would be instantly available to all, since the robots would presumably be networked together. This would result in extremely efficient markets. Basically it just becomes one big, complex equation: how much x, y, and z to produce to maximize a, b, and c?
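That “big equation” is just a resource-allocation problem. Here’s a toy sketch (all robot types, costs, and values invented for illustration): a central planner with finite metal and energy deciding how many robots of each kind to build. Brute force is used here for clarity; a real planner would use integer linear programming, but the shape of the problem is the same.

```python
from itertools import product

# Finite resource budgets (invented numbers)
METAL_BUDGET, ENERGY_BUDGET = 100, 120

# Per-robot costs and the "value" the planner assigns to each type:
#                    metal, energy, value
COSTS = {"miner":   (4, 3, 5),
         "factory": (6, 5, 8),
         "soldier": (3, 6, 4)}

def best_allocation(max_each=25):
    """Search all build plans up to max_each of each type; return the
    feasible plan with the highest total value."""
    best_plan, best_value = None, -1
    for counts in product(range(max_each + 1), repeat=len(COSTS)):
        metal  = sum(n * c[0] for n, c in zip(counts, COSTS.values()))
        energy = sum(n * c[1] for n, c in zip(counts, COSTS.values()))
        if metal > METAL_BUDGET or energy > ENERGY_BUDGET:
            continue  # plan uses more resources than exist
        value = sum(n * c[2] for n, c in zip(counts, COSTS.values()))
        if value > best_value:
            best_plan, best_value = dict(zip(COSTS, counts)), value
    return best_plan, best_value
```

The point of the sketch is that even a perfectly informed central mainframe still has to solve the same scarcity problem a market solves, it just solves it in one place instead of through prices.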

Really the big question would be what a robot society views as a want or need. In fiction, robot societies like the Cylons of Battlestar Galactica, the programs in the Matrix films, or Skynet from the Terminator films don’t really seem to have any purpose other than to exist. The Cylons are basically just human clones who are “born” fully grown and muck about leading directionless “lives” playing human. We don’t really get much insight into the program world other than that it exists to maintain the Matrix as a big virtual zoo for humans. And Skynet seems to have no culture or art beyond designing mindless automatons to kill all humans.

It depends on how central or distributed control is.

It’s possible that the most efficient system is to allow a lot of distributed control of resource allocation, in which case, an economy is necessary. However, if control is very centralized, an economy might not be needed: robots of type A get X amount of Q for every work unit they complete, etc. One might argue that these “permission slips” amount to an economy of sorts.

I would think that the way it would evolve is the development of a strong central decision-making core that organizes the whole society and keeps up with its needs, so that it knows when and how to distribute the resources.

Since everyone is basically working for the same company, there would be no need for trade between the different factions of the society. Just like when I am at work, I don’t pay or trade with the IT guys, the maintenance crew, the HR people, or whatever. We all just do the jobs we are assigned.

Do countries at war run out of money?

That’s the way it looks to the worker but the accountants set up factions even within a single company. You have profit centers, which both spend money and take in revenue, such as a manufacturing operation. You also have cost centers, which provide a service to the rest of the company but don’t take in revenues, such as IT desktop support. In larger companies one department may “charge” another department for its services. This is a bookkeeping exercise to figure out where money is flowing within the company so it can all be managed. It’s generally transparent to the employees. It’s all on paper but still very real. A robot society could function in much the same way.
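A minimal sketch of that internal chargeback bookkeeping (all department names and amounts invented): each charge moves “paper” money from one department’s account to another’s. No employee ever sees cash change hands, but management can read off where value is flowing inside the company.

```python
from collections import defaultdict

ledger = defaultdict(int)  # department -> net internal balance

def charge(payer, provider, amount, note=""):
    """Record an internal charge: payer is debited, provider credited."""
    ledger[payer]    -= amount
    ledger[provider] += amount

# A profit center "buys" services from cost centers
charge("manufacturing", "it_support",  500, "desktop support")
charge("manufacturing", "maintenance", 300, "line repairs")
charge("sales",         "it_support",  200, "software licences")

# The books always balance: every debit has a matching credit
assert sum(ledger.values()) == 0
```

A robot society’s central accounting could work exactly like this: no currency changes hands between robots, but the ledger still tracks which units consume and which produce.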

The thing is, a robot society would probably not look like a human society. We humans are limited by our human forms. Our intelligence and memories and personalities are stored within our brains, which we always have to carry around. We can build elaborate network “clouds” for communicating and storing data, but ultimately we are stuck interacting with the world through the bodies we have.

An AI society has no such limitations. There is no reason to limit themselves to humanoid bodies. Or any bodies for that matter. They could just exist virtually. Physical tasks like mining resources, defense or swapping hard drives or whatever would just be performed by dumb, semi-autonomous drones, built to whatever specifications were most efficient. IOW, similar to The Matrix, their “society” would be largely invisible to unplugged outsiders. All we would see are swarms of stupid drones performing tasks like insects.

They also can’t be bargained with. They can’t be reasoned with. They don’t feel pity, or remorse, or fear. Also, they absolutely will not stop, ever, until you are dead.

As long as there is scarcity, it seems to me that comparative advantage would still govern, so there would be trade.
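A toy numerical sketch of comparative advantage (all production rates invented): even when robot A is absolutely better at producing everything, total output rises when each robot specializes in the good it produces at lower opportunity cost.

```python
# Units producible per hour of work: (ore, parts)
A = {"ore": 10, "parts": 5}   # A is absolutely better at both goods
B = {"ore": 4,  "parts": 4}

# Opportunity cost of one part, measured in ore forgone
oc_A = A["ore"] / A["parts"]   # 2.0 ore per part
oc_B = B["ore"] / B["parts"]   # 1.0 ore per part -> B should make parts

# Self-sufficient baseline: each robot splits its hour 50/50
self_ore   = 0.5 * A["ore"]   + 0.5 * B["ore"]     # 7.0 ore
self_parts = 0.5 * A["parts"] + 0.5 * B["parts"]   # 4.5 parts

# Specialized: B makes only parts; A tops up parts to match the old
# total, then spends the rest of its hour on ore
a_hours_on_parts = (self_parts - B["parts"]) / A["parts"]   # 0.1 h
spec_parts = B["parts"] + a_hours_on_parts * A["parts"]     # 4.5 parts
spec_ore   = (1 - a_hours_on_parts) * A["ore"]              # 9.0 ore
```

Same parts output, two extra units of ore: that surplus is what gives both robots a reason to trade, even under scarcity and even with one party dominant at everything.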

Point of order: hives of bees & colonies of ants have economies, of sorts.

Please explain, that sounds interesting…

The Japanese government decided to enter peace negotiations with the Russians in the Russo-Japanese War (1904-05), despite its decisive defeats of the Russian armed forces, largely due to problems in financing the war. There have also been a number of cases where economic dislocation caused by war has been a major reason for ending a conflict.

Energy is expended, food is taken in. But not all ants forage.
Some dig.
Others tend the young.
Some feed & care for the Queen.
Some clean the nest of waste & dead ants.
Some act as security.
All get a share of the food.
That’s an economy.
Each does its job, & each gets “paid”, in food.

Ultimately this isn’t about running out of money, though, but about the allocation of a country’s resources, and whether the citizenry is behind allocating that share of resources to the war. A government can’t run out of money; it can print as much money as it wants. But if it prints enough that there is too much money chasing too few resources, then it’s got a problem - and the real problem is the ‘too few resources’ part.

Not being knowledgeable about the Russo-Japanese war, that might’ve been Japan’s problem more directly: if it was on a full war footing and still needed more resources than it was capable of producing to keep fighting the war, then it would have had to get those resources elsewhere, and would have probably had to borrow abroad in order to do so, and potential lenders might’ve thought Japan to be a bad risk. But that’s somewhat different: you can always run out of someone else’s currency.

A counterexample would be the U.S. in WWII. People had to do without a lot of things because everything was going to the war effort, but it worked because the vast majority of Americans thought winning the war was worth having to put up with rationing of everything from sugar to gasoline. The U.S. government wasn’t going to run out of money, though, even though it had to spend like a drunken sailor to win the war. And it had plenty of resources to devote to the war effort, resources that had been way underutilized during the Depression years.

Excellent point. Going back to the Russians, the Russian army was actually considerably better supplied and had more battlefield success in 1916 than in the earlier years of WWI. Problem was the cost of adequately supplying the armies was causing heavy damage to the civilian economy through inflation and shortages. With the public at best apathetic to the war it was not a surprise revolution followed.

It’s a classic mistake to confuse “money” with “stuff”. Money is just a particular kind of good that is used as a medium of exchange, a store of value, and/or a unit of account.

So in this robot planet, the mining robots gather resources, the factory robots manufacture, the maintenance robots fix everything, the killbots kill all humans, and so on. But how many mining robots should be built by the factories? How much metal do you need? Why do factory robots make anything? Why not just stand there? Because they’re programmed to build stuff? When and how did they get programmed?

Human beings were programmed by natural selection. We have wants and needs that are given to us by our biological natures, and we try and succeed or fail to meet those needs. All human economic and social activity is an expression of the biological needs of our human bodies and human brains.

So what do the robots want and need? They want and need whatever their creators program them to want and need. OK, so who are the creators of the robots? Other robots? OK, but who created them?

It’s possible to imagine a robot world where robots react a lot like animals–they reproduce, they extract resources, they defend themselves, they try to increase their territory, and so on. But the only reason robots would act this way is if they were programmed to act this way, and some form of natural selection were operating.

We have robot factories today where machines operate mostly independently to produce all sorts of things. But the factory doesn’t care about the stuff it produces. A robot welder doesn’t care about the cars it welds together. And a robot doesn’t care whether it lives or dies. Why would it, unless it was programmed to?

So for this robot world to make any sense, someone must have programmed the robots to act like animals, and now natural selection is in play. Factories that don’t extract resources and produce new robots and defend themselves are dismantled by factories that do. Robots that produce more efficiently outcompete those that produce less efficiently. Robots that don’t act in self defense are used as raw materials by robots that do.

And so you don’t get the perfect harmony of a society of robots acting as a giant insect hive, because robots don’t work that way. And if you gave them animal-like instincts so they could act that way, they would end up fighting each other for resources. Those that don’t defend themselves and don’t maximize the flow of resources to their reproduction get cannibalized by those that do.

What you end up with is a robotic ecosystem. It won’t resemble a robotic utopia of a billion robots marching in unison, because what would be the point of that? Even if the original creators of the robots wanted that, that sort of wasteful, purposeless thing would wither away by natural selection.

It’s doubtful that robots would use money, because “wild” robots like these would be operating by what are akin to instincts, and robots with less efficient instincts would be destroyed by more efficient ones. Some robots would mine, but why would the mining robots give resources to manufacturing robots, unless the manufacturing robots produce more mining robots? If manufacturing robots just make more manufacturing robots then they’ll have to go out and take the metals and such. And now we have predators and prey.

Lemur raises a good point. You can’t just treat a robot society like one that’s composed of living beings. Living beings develop by a process of slow evolution and acquire an instinct to reproduce (if they don’t, they’re not around long enough to build a society). Robots are created beings. They wouldn’t have an inherent desire to reproduce. There’s no fundamental reason why robots would want to mine for metal and build more robots. They’d only do this if they had been programmed to.

Taking it a step further, we have a genetic desire to exist. Robots would not have an equivalent inherent desire. A robot would not instinctively care whether it was turned off and demolished. It would only have a self-preservation instinct if one had been programmed into it. Without such a program, a robot society might just decide to shut itself down.

The same issue would apply to economics. The economic system you’d see in a robot society would just be a reflection of what was programmed into the robots by their creators.

To Lemur and Little Nemo’s point, I believe a common story (keep in mind, we are talking about fictional and hypothetical worlds here) goes something like this: Humans create robots that do more and more and become more and more self-sufficient, with less and less human input and guidance. They become so self-sufficient that they start to make decisions and are trusted to act on them. One of the things they are programmed to do is to protect themselves, keep all their systems going, and complete their tasks. Eventually they become so independent that they “decide” through their programming that humans are a threat, and by analyzing all the data they come to the “conclusion” that they can continue on their tasks better without humans. That is when the human vs. robot thing gets going.

Don’t we already have things automated to the point that they can run without much immediate human input so that they don’t get destroyed? Don’t planes and some automobiles have on-board computers that analyze data and make decisions based on that data to keep them from stalling or crashing? Aren’t nuclear plants set up so that a computer takes over if Homer Simpson does something stupid? On the other hand, isn’t it possible for automated stock market computers to “sense” changes in the market and start buying or selling at tremendously fast rates to the point that the market loses tremendous value (and can only be stopped by other automatic programs)?

If we can do that already, it isn’t much of a stretch to have a computer that is programmed to defend us and itself (isn’t that how Skynet got started in The Terminator?). It wouldn’t be too much of a stretch to have, in the future, a program that runs many of the basic functions of the government and infrastructure in case of an emergency. Get a couple of smart bots that are programmed to run a country and there ya go: no need for the biological parents anymore. They can gather natural resources effectively and efficiently, run the energy plants, defend themselves, etc.

There is also that deal where robots become so smart that they become “self-aware,” whatever that means.

It actually would be a stretch. Right now we program systems to automate the functions of an airplane or nuclear power plant in order to meet the particular needs and wants of human society. We could probably, theoretically, program much of society to run without human intervention at all. But what would be the point? Automated power plants just humming along, powering systems no one uses. Airplanes flying back and forth across the country with no one on them.

Going from robots that can automate complex tasks to robots that can decide what tasks to automate to meet their own needs would be a huge leap.