Broadly speaking, is the shift to cloud computing net carbon positive or negative?

A few years ago, my company moved its client-facing web applications from a small local host (we were one of four tenants) to a cloud service operated by one of the Big Three. My management is now asking whether we can argue that this contributes to our carbon-reduction goals.

I’m doing some reading online, and … it turns out this is not a simple question to answer.

A giant data center is obviously a massive consumer of electricity — not just the servers themselves, but especially the cooling systems that keep them from melting down. Overall, data center power usage seems to be commonly estimated at up to three percent of the global total. Individually, compared to an average house, a data center is estimated to consume anywhere from ten to fifty times more power on a square-meter basis. However, it stands to reason that at least some of the expansion in this space is balanced by the elimination of the old-fashioned self-hosted corporate facility — it is called a migration, after all — so there has to be some degree of reduction incorporated into the net picture.

It also makes sense, conceptually, that a few large facilities could be more efficient consumers of electricity compared to multiple small or independent data centers. In addition to centralized generation and delivery, there’s also the notion that these large facilities could use their market leverage to demand more sustainable energy sourcing, thereby putting pressure on providers to modernize their grids.

In principle, these are sound arguments. The hard part is finding reliable evidence to support them.

There are a lot of corporate blog posts in the technology space asserting that migrating from a dedicated facility to a cloud provider is an unqualified net positive in terms of energy consumption, relying on the angles above. I’m disinclined to trust them, however. Not only are these articles inherently self-serving from a business standpoint, they use squishy, noncommittal language that sounds good but doesn’t actually say anything if you read carefully. For example, in describing how cloud centers can use more sustainable sources of power, they consistently say this is an “opportunity” with the “potential” to effect this change. None that I’ve found is willing to say that any of this has actually happened to any meaningful degree. Example. Example. Example.

Consultants catering to the corporate market are similarly vague. This page enthusiastically argues for the environmental benefits of the cloud revolution, but their position is couched evasively: “For the subset of initiatives in which the cloud can play a significant role, we calculate that each use of the cloud to power key technologies can reduce the cost of implementing a decarbonization initiative by 2 to 10 percent. On aggregate, we estimate that the total benefit of using the cloud to accelerate decarbonization could be up to 1.5 GtCO2e per year by 2050.” See that? “Can be.” “Could be up to.” Nothing about what has actually been achieved. (And of course, this is McKinsey, reigning royalty in the domain of weaselly hand-waving.)

There are, of course, contrary arguments. This page goes into great detail on the topic, noting that most of Google’s claims for progressing toward carbon neutrality are actually achieved by the purchase of carbon credits, a practice which is fraught at best. The page also includes this chart, which seems to show that while the old-fashioned data centers are indeed dying out (consistent with the “pro” arguments above), cloud centers are expanding at a faster rate, and the real picture of electricity usage is an overall increase.

However, while the source is named, there’s no link, and there’s no detailed information reproduced in the text about the methodology behind the chart. So although the graphical argument is clear, I’m hesitant to take it at face value.

The same article also spends a lot of time analyzing the differences in approach between Microsoft, Google, and Amazon, suggesting that Google’s claimed benefits are at least partially illusory (per the above) while Microsoft has made a more meaningful commitment to clean and sustainable energy sourcing in its data center network. But then it gets into some pretty tangled weeds, and it’s hard to parse a clear conclusion. This is another consulting firm, focused on selling their expertise in energy efficiency, so it’s unsurprising they wouldn’t give away the goods; they want to hook your interest (“uh oh, we’re not doing enough!”) so you’ll be willing to retain their services.

Finally, it feels to me like the available discussion, incomplete and unreliable as it is, is already obsolete, because none of it accounts for the explosion in interest in the new so-called “AI” technologies. The processing hardware required for the emerging language-modeling systems is much more energy-intensive than the previous generation of cloud computing, so instinctively I suspect that whatever gains might arguably have been achieved per the preceding paragraphs are rapidly being wiped out. But this is just my gut feeling.

So. What’s the real picture here? I’d like to keep this in FQ if possible, because I’m seeking concrete information and hard citations to the extent these are available. Can I give my management a happy reply in good conscience? Or should I just send them the McKinsey article and then hide under my desk in guilt?

One other point to add to the mix: Electricity consumption is not the same as carbon production. Since cloud data centers can be located anywhere, and the primary cost to run them is electricity, they’re generally located where electricity is cheapest. And the cheapest electricity, when you’re in a location that can use it, is usually hydro, and so cloud data centers are usually close to, and primarily powered by, hydro generators.

There’s also some nonzero amount of energy and other resources used in making computers. A single company’s computing cluster has to be sized for its peak demand, but if demand is uneven, it will often sit idle. A cloud provider, though, can average out over many customers, all of whom have different high-demand times, and some of whom are willing to take time whenever it’s available, so it can do the same total amount of computing with less hardware.
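To put a number on that sizing point, here’s a minimal sketch under made-up assumptions (the tenant names and demand figures are invented, purely for illustration), comparing hardware provisioned for each tenant’s own peak against a shared pool provisioned for the combined peak:

```python
# Illustrative only: three hypothetical tenants with peaks at different hours.
# Values are made-up "servers needed" figures for eight sample hours.
tenant_demand = {
    "retailer":   [2, 2, 3, 8, 9, 4, 2, 1],   # busy midday
    "batch_jobs": [7, 8, 6, 1, 1, 1, 2, 6],   # busy overnight
    "office_app": [1, 1, 2, 5, 6, 6, 3, 1],   # busy business hours
}

# Self-hosted: each tenant buys enough hardware for its own peak.
sum_of_peaks = sum(max(profile) for profile in tenant_demand.values())

# Shared cloud pool: provision for the peak of the combined load instead.
combined = [sum(hour) for hour in zip(*tenant_demand.values())]
peak_of_sum = max(combined)

print(f"Separate clusters: {sum_of_peaks} servers")  # 9 + 8 + 6 = 23
print(f"Shared pool:       {peak_of_sum} servers")   # largest hourly total = 16
```

The specific numbers don’t matter; the point is that the same total work fits on less hardware when the peaks don’t coincide, which also trims the embodied footprint of manufacturing.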

I would also think there’s an “economy of scale” situation at work. A room with a half dozen servers, or a closet with one server, still needs air conditioning. When you consider the size of the room and its surface area versus the amount of heat the computers put out, the thermal leakage has to be much higher the smaller the data center. And usually, that server closet is a closet or small room where insulation between it and the rest of the building is lacking, because it was not thought of when the place was built. A purpose-built giant data center, OTOH, is likely insulated all around, especially in a warmer locale. That’s not even getting into the efficiency of building-scale coolers vs. a small air-conditioning unit.

Also, unless it’s a very busy smaller enterprise, the servers are likely sized far larger than needed. In my experience a few years ago, servers were bought and kept as long as possible for economic reasons, so they were sized quite large. Servers also have power supplies sized to handle the maximum the server can be expanded to, even if the number of cards and discs is nowhere close to that. Most smaller enterprises I saw get “enclouded” were not using anywhere near the full potential of their servers, and could benefit from a cloud where the load from several enterprises shared the same server. On top of that, high-end servers in cloud data centers would have top-end disk farms, ensuring faster database disc response times. Faster computing is more efficient: fewer cycles wasted on the server waiting for answers.

So my rough guess: unless you’re in the neighbourhood of a dozen servers or more running well over 75% during busy times, cloud computing is more efficient. Then you get into the issue of how electricity is made, and cloud centers being located closer to cleaner energy (if only for the propaganda value).

Economy of Scale ↔ Jevons Paradox

This probably leaves economics and drifts into psychology. It’s probably impossible to answer, short of assuming that the two roughly balance out and it’s a wash.

Yes and no. My example of a giant computing facility vs. a small closet with air conditioning and actual insulation is probably a real economy-of-scale effect, if only for the square-cube law. OTOH, there are situations like increased computing power leading to visual interfaces and ubiquitous graphical user interfaces, meaning far more computing is required for far more everyday situations. (But then, an iPhone running all day probably uses less than a vacuum-tube text CRT terminal even in standby mode.)

Jevons paradox was originally applied to coal, but failed to consider the totality: that coal power replaced far less efficient human and horse power. (But he had a point, since a lot more consumer products now come from even farther away, which simply would not have happened by horsecart.)

But the OP is comparing apples and apples - computing done locally vs. computing done at a remote data center.

It’s not for the propaganda value-- If that were it, you’d see server farms running on wind and solar, too, not just hydro. Like most corporate decisions, it’s because it’s cheapest, and if they can get some good PR from it, too, that’s just gravy.

Data center operators guarantee resiliency (as part of SLA in contracts) so, while they may prefer hydro and to be near hydro (I don’t know if that’s true - but assuming that it is), they will generally try to have multiple sources of power coming in with (hopefully) unrelated dependency chains. If one goes down, the others will be brought in to replace it.

I assume, too, that big data centers all have (diesel) generator backup, because battery backup can only last so long. That part of the carbon footprint is determined by how often and for how long they need the backup.

I suppose hydro is an obvious low-carbon source, and since it consumes no fuel, a lower-cost one. That’s the experience in Canada, vs. some flatter and drier US states.

In addition to AC: UPS, networking, power control. There is a lot of opportunity for economies of scale.

Some data centers run on DC to consolidate and optimize the AC-to-DC conversion, but I think that is still somewhat rare.

Chiming in late, but having been involved in a few data centres, some perspective.

I’ll echo the above. There are some economies of scale in big data centres. Typically, if you use, say, AWS, you rent time on virtualised machines. This affords the provider the ability to maximise utilisation of servers, something a local data centre can’t do. The ability to elastically expand is another win; otherwise a local data centre would tend to be over-provisioned.

But these are marginal gains. The compute per watt is going to be very similar.
Aircon is a lesser but important aspect. A large facility can make use of large-scale heat pump systems, and these can get a COP of up to 6:1. But even a basic heat pump on a small computer room isn’t going to be much worse than 3:1. Data centres based in cold climates can do much better, all the way to free cooling.
Since essentially all of the input power ends up as heat, the energy for cooling is roughly the input power divided by the COP: about 33% of input power locally, ranging from 17% of input down to 0% for a big facility.
Again, not to be sneezed at, but still in the margins. Assume a saving of around 17% of input power.
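To make the arithmetic behind those percentages explicit, here’s a tiny sketch using the COP figures assumed above (they’re the assumptions from this post, not measured values):

```python
# Cooling electricity as a fraction of IT load, assuming essentially all
# IT input power ends up as heat that the cooling plant must remove.
def cooling_fraction(cop: float) -> float:
    """Cooling energy per unit of IT energy for a given coefficient of performance."""
    return 1.0 / cop

for label, cop in [("small computer room, basic heat pump", 3.0),
                   ("large facility, big heat-pump plant", 6.0)]:
    print(f"{label}: cooling ~{cooling_fraction(cop):.0%} of IT power")

# Free cooling in a cold climate pushes the figure toward 0%.
```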

As noted above, everything is dominated by the source of the power. If you are already set up in an area with hydro, nuclear, or dominated by other renewables, it is unlikely you can reduce your footprint going to a big data centre.
If you are in an area burning coal for most of its power, you can make a difference by using a big data centre even if the power draw is the same.
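As a back-of-envelope illustration of why the grid mix swamps everything else, here’s a sketch with invented numbers (the 50,000 kWh load and the intensity figures are rough placeholders, not citations):

```python
# Rough, illustrative grid carbon intensities in kg CO2e per kWh
# (ballpark figures only; look up your own region's published numbers).
GRID_INTENSITY = {
    "hydro-heavy grid": 0.02,
    "coal-heavy grid":  0.90,
}

annual_load_kwh = 50_000  # hypothetical server-room draw, roughly 5.7 kW average

for grid, kg_per_kwh in GRID_INTENSITY.items():
    tonnes = annual_load_kwh * kg_per_kwh / 1000
    print(f"{grid}: ~{tonnes:.1f} t CO2e per year for the same load")
```

Against a difference like that, a 17% cooling saving really is in the margins.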

So, yeah, it isn’t clear cut. You could make your carbon footprint worse by outsourcing.

If I was writing up a report, I would concentrate on just the power generation sources. They will dominate all the other stuff.

You could add up the power draw of a local compute room and its aircon, and with some knowledge of your compute loads, work out the base load with essentially no work being done, and assume that there is no base load when using a data centre (which isn’t totally true, but the calculation is still going to be a reasonable one).
If you know the rough profile of compute needs in a local compute room, you could come up with a reasonable guess for the extra energy used just to keep it running, and assume that is carbon footprint that could reasonably be avoided. It will be in the margins relative to the power-generation question, but it would usefully address the obvious questions people might have.
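A minimal sketch of that estimate, with placeholder numbers you’d swap for your own measurements (the idle power, hours, and grid intensity below are all assumptions):

```python
# Hypothetical local compute room, illustrating the avoidable-base-load estimate.
idle_it_power_kw    = 3.0    # servers drawing power while doing essentially no work
cooling_cop         = 3.0    # basic heat pump on the small room
idle_hours_per_year = 5000   # evenings, weekends, quiet periods

idle_total_kw = idle_it_power_kw * (1 + 1 / cooling_cop)  # IT load plus its cooling
avoidable_kwh = idle_total_kw * idle_hours_per_year

grid_intensity = 0.4  # kg CO2e per kWh; substitute your region's figure
print(f"Avoidable idle energy: ~{avoidable_kwh:,.0f} kWh/year")
print(f"Avoidable emissions:   ~{avoidable_kwh * grid_intensity / 1000:.1f} t CO2e/year")
```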

The other question is the carbon footprint of manufacture and disposal of the machines. It is likely in the noise as well. The difference between local and data-centre machines is probably minor, but if you’re over-provisioning locally, it will be a factor. The environmental footprint of chip fabrication is pretty dire.

And to be really mean.
You might get to reduce staff. So you stop supporting the entire carbon footprint of an employee. I don’t think most people would accept that as a factor. But it likely dominates everything - if only you could disincorporate the surplus employees.

So, in summary, the report might be:
Power generation source dominates;
effects of local over-provisioning matter.

Thanks for all the comments so far.

The “economy of scale” principle is what I was getting at with “centralized generation and delivery” in the OP. I am gathering from everyone’s contributions that the potential efficiencies make sense in principle, but I also infer from the dearth of specific citations that it’s difficult or impossible to say that these efficiencies have actually manifested in practice.

Well, except for the trees that get cut down to build yet another giant data warehouse out in the sticks. :smiley:

I appreciate the discussion. It’s unfortunate that my gut-instinct perception of a highly nebulous situation seems to be accurate, but at least I wasn’t totally off base.

I suppose another factor is that a large cloud center like AWS could essentially turn off host servers to minimize power draw when load is low, reboot them as demand grows, and simply shuffle your (virtual) servers onto the active hosts to ensure high usage at all times. A lot of the businesses I saw had local servers up and running, drawing power but essentially unused, during the evening and nighttime. A cloud data center would minimize that idle power. (Not to mention CPU usage often below 50% even during the day.)

I find it ironic that the PC came along to liberate us from the central mainframe and dumb terminal systems, only to find that cloud computing, virtual desktops, and even cloud MS Office apps have returned us to that model.

So perhaps another calculation would be to see or guess the CPU% usage over a typical day or week, then assume that a cloud center like AWS can eliminate much of the idle.
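Something like this, as a rough sketch with a made-up utilisation profile and a guessed idle-power figure (you’d pull the real numbers from your monitoring):

```python
# Made-up hourly CPU utilisation (%) for one local server over a day.
hourly_util = [5, 4, 4, 4, 5, 6, 10, 25, 40, 45, 50, 48,
               45, 42, 40, 38, 30, 20, 12, 8, 6, 5, 5, 4]

avg_util = sum(hourly_util) / len(hourly_util)

# Crude assumption: an idle server still draws about half its peak power,
# while a well-packed cloud host runs close to fully utilised.
idle_power_fraction = 0.5
local_energy = sum(idle_power_fraction + (1 - idle_power_fraction) * u / 100
                   for u in hourly_util)           # relative units
cloud_energy = sum(u / 100 for u in hourly_util)   # same work on packed hosts

print(f"Average utilisation: {avg_util:.0f}%")                                    # ~21%
print(f"Rough energy ratio (local / cloud): {local_energy / cloud_energy:.1f}x")  # ~2.9x
```

The ratio is only as good as the idle-power assumption, but it shows why consolidating lightly loaded servers is where most of the gain would come from.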

The thing with Jevons Paradox is that as you lower the overhead, usage increases. Even if the principle were working full-bore, if the value of computation is great enough, then having the greater access means that consumption tends to rise to match availability.

If that’s happening. And, personally, I’d guess that the answer is yes, it is. GenAI, for example, ain’t cheap and requires well beyond the resources that a startup would ever want to manage, and yet there’s a horde of startup AI companies. Likewise with crypto.

Is that really true though, in the current era of interconnected power grids? I mean, if someone sets up a data center in Boulder City, NV right by the Hoover Dam, are they going to get cheaper electricity than if they’re set up somewhere else in Nevada serviced by the same electrical utility?

Here in Texas, the crypto mining is set up all over, even though most of the wind generation is in the Panhandle and west Texas.

It can be. But with a huge dose of it depends. Depends upon your grid’s retail model, and what the power generating companies are prepared to enter into. Certainly here I can buy my power from a reseller that guarantees what the source of the power is. Clearly what is happening is that they buy power from a suitable generating company wholesale, and retail that power allocation to me. The dancing pixies don’t care who the joules come from, or where they go. But there is proper accounting backing up the commerce.

OTOH, you may be beholden to one power company, with whatever generating capacity they have and one size fits all retail market.

Where I am, there is a massively interconnected grid spanning thousands of kilometres in each direction. Many power generation companies, many commercial, many state-owned, and a grid that is separately owned and run (at least for a lot of the system). Then a retail market where you can buy from traditional generating companies, or from what are essentially retail brokers. There are a few that will sell you very green power, but there is a cost penalty, especially when the wind doesn’t blow and the sun isn’t shining. I am still waiting for a company to sprout up that advertises cheap dirty power, guaranteeing it will be from burning the worst brown coal. I’m sure there would be a market, at least with a couple of my more conservative friends.

If nothing else, there are inefficiencies and other costs in transmitting power over long distances, so it’s still best to put as much of the power consumption close to the good sources as possible. To be fair, there are also costs in transmitting data over long distances, but those are generally going to be much lower than the costs in transmitting power, as long as you’re not doing something silly like putting your display drivers in the cloud (which gets proposed occasionally and even implemented at a small scale, but never takes off).

I suppose that would depend on where you are.

Here in Texas, in a truly “stranger than fiction” turn of events, we’re the national leader in total renewable power generation, not because anyone in the state government is particularly environmentally friendly, but rather because wind and solar are extremely cheap in certain very windy and/or sunny parts of the state, lucrative for landowners, and good for wholesalers and consumers. Sort of the right thing for all the wrong reasons.

I’d think the only real draw for something like that would be for people to put a placard up to show off their MAGA bona-fides that they get 100% lignite energy or something like that, but I suspect that cheap would win out in the long run.

[Moderating]
And that’s enough of that topic (deliberately dirty energy) in this thread.

As I understand it, it’s not productive to send power much beyond 1,000 miles; too much transmission loss. So Quebec, Ontario, and Manitoba can sell their surplus hydro power to northern states (maybe from BC too?) but not down to, say, the Carolinas. Of course, displacement works too, where power from Quebec means that, let’s say, New York could sell its own power to Virginia. But if both NY and VA are using the same coal and natural gas, where’s the savings for VA in not building its own plants? (IIRC they can also get hydro from Tennessee…)

But data transmission is extremely cheap and efficient by contrast, so a data center across the continent is not too inefficient.

This turned up recently, somewhat on the same trajectory: OpenAI signing up for fusion power to power the next big thing in AI processing. I will refrain from offering an opinion.