Is there a Carnot engine equivalent for computing?

So for engines (gasoline, jet planes, gas turbines, coal power plants, and so on) there is the simple Carnot efficiency, which sets the upper limit on efficiency.

For solar cells, there is the Shockley–Queisser limit.

Since engines and computers both generate entropy, they should both have limits on their efficiency (?)

Is there any such limit for computing, either classical or quantum?

I am not sure how the limit would be defined, but it will involve entropy in one way or another.

The flip-flop? The half-adder? The AND gate?

There is the concept of reversible computing, which doesn’t fall prey to Landauer’s principle, so there is no mandatory minimum increase in entropy implied by the computation itself. That is loosely analogous to the way the Carnot cycle’s ideal efficiency comes from its reversibility.

That’s odd, I didn’t even notice that it erased my commentary. I didn’t mean to just dump a link and run.

I had mentioned reversible computing in that…

Anyway, reversible computing is not useful computing, at least I don’t see how it can be.

Reversible computing is no less useful than ordinary computing. You can turn any computation into a reversible computation just by adding more outputs containing the information required to reverse the calculation (e.g. you can replace a non-reversible XOR gate with a reversible controlled-NOT (CNOT) gate, which has two outputs, only one of which you care about).
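To make that concrete, here is a minimal Python sketch of the same idea (the function names are just illustrative): the plain XOR gate throws away a bit of information, while the CNOT keeps an extra output so the inputs can always be recovered.

```python
def xor_gate(a: int, b: int) -> int:
    """Irreversible: from the single output you cannot recover (a, b)."""
    return a ^ b


def cnot_gate(control: int, target: int) -> tuple[int, int]:
    """Reversible: the second output is the XOR you wanted; the first is the
    extra bit of information that makes the mapping invertible."""
    return control, control ^ target


if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            c, t = cnot_gate(a, b)
            assert t == xor_gate(a, b)          # same useful answer
            assert cnot_gate(c, t) == (a, b)    # applying CNOT again undoes it
    print("CNOT reproduces XOR and is its own inverse")
```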

(In particular, quantum computing must use reversible computations because the rules of quantum mechanics are reversible.)

In fact there is no lower limit to the amount of energy required for a computation; the computation just goes slower and slower. One implication of this is that since the amount of free energy in the universe can never actually hit zero, computation, and therefore life in some form, will always be possible. It will just be very slow (by our standards, but not by theirs).

In fact I have speculated that there was so much free energy in the first second after the Big Bang that entire ecosystems could have risen and fallen in that time, and we would look like entropy death to them.

Thank you for the replies. I am trying to establish something that a layman like me can understand.

Pick any GPU in commercial production today as a baseline: what would the most thermodynamically efficient comparable system look like in terms of efficiency? Would it be 50% better or 200% better?

For example: “The A13 SoC has a new GPU that keeps the 4-core design introduced with A12. But is now 20% faster and a whopping 40% more efficient when delivering the same performance as A12’s GPU.” from Apple iPhone 11 review: Performance and benchmarks

So suppose we had the thermodynamically best GPU; let’s call it the T_GPU. How much more efficient, percentage-wise, would it be than the A12 GPU? (The A13 is 40% more efficient.)

It’s hard to find precise numbers, but you might be interested in Koomey’s Law, which is essentially Moore’s Law for computational efficiency. A simplistic extrapolation predicts that conventional chips will reach the Landauer limit by 2048. However, since Koomey’s Law says efficiency doubles only every 1-2 years, still being decades of doublings away means we’re a long way from the Landauer limit right now.
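For a rough sense of scale, here is a back-of-the-envelope Python sketch. All the hardware figures in it are assumptions made up for illustration (roughly 10 TFLOPS at 300 W, and on the order of 100 bits erased per floating-point operation), not measurements of any real GPU, so treat the output as an order-of-magnitude guess rather than a hard number.

```python
import math

# Landauer bound: minimum energy to erase one bit at temperature T.
k_B = 1.380649e-23                            # Boltzmann constant, J/K
T = 300.0                                     # room temperature, K
landauer_J_per_bit = k_B * T * math.log(2)    # ~2.9e-21 J per erased bit

# Assumed present-day GPU (illustrative round numbers, not a real product).
gpu_flops = 1e13                 # ~10 TFLOPS
gpu_watts = 300.0                # board power
bits_erased_per_flop = 100       # rough guess for one floating-point op

gpu_J_per_flop = gpu_watts / gpu_flops
landauer_J_per_flop = bits_erased_per_flop * landauer_J_per_bit

gap = gpu_J_per_flop / landauer_J_per_flop
print(f"Landauer bound:   {landauer_J_per_bit:.2e} J per erased bit")
print(f"Assumed GPU:      {gpu_J_per_flop:.2e} J per floating-point op")
print(f"Gap to the limit: roughly 10^{math.log10(gap):.0f}x")

# Koomey-style extrapolation: how many doublings would close that gap?
doubling_years = 1.5
doublings = math.log2(gap)
print(f"Doublings needed: ~{doublings:.0f}, i.e. ~{doublings * doubling_years:.0f} "
      f"years at one doubling every {doubling_years} years")
```

With these particular assumptions the gap comes out somewhere around eight orders of magnitude; change the assumed bits-per-operation or the hardware numbers and it shifts by a few orders of magnitude either way, which is why any percentage comparison ends up with a huge number of zeros.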

I think you may be able to see, from the replies, that the comparison you want to do is not going to be useful to the layman, mainly because the percentages are going to be such huge numbers. So, to make up an example, you are going to be able to say that the theoretical limit is 2,000,000,000% of GPU A’s efficiency and 3,500,000,000% of GPU B’s efficiency (and in truth, there are going to be so many zeros that you’re going to be forced to use scientific notation, which will just confuse most people).

Even if you can educate the lay population on scientific notation, you’re going to be faced with the plain fact that when it comes to comparing the compute efficiency of GPUs, the answer is always going to start with “It depends…”

Power efficiency of any particular computer is going to depend on the tasks being run. GPUs can be very inefficient at a lot of things CPUs are efficient at (which is why your home computer isn’t just GPU processing).

There are benchmark suites, such as LINPACK, which measures floating-point performance and is the standard applied to ranking “the fastest computer in the world”. These give a rough measurement, but benchmarks don’t fully match real-world use, so actual performance in use can be drastically better or worse. There are also benchmark suites aimed at GPU performance, and they suffer from the same problem as LINPACK. GPU architectures make some families better or worse at certain tasks than other GPU families, so a benchmark may favor one over the other, but when contemplating which GPU to use, the answer will still be “It depends, what are you going to use it for?”
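To show that “it depends” with made-up numbers: here is a tiny Python sketch in which the same two hypothetical GPUs rank differently in operations per joule depending on which workload you benchmark. The figures are invented purely for illustration.

```python
# Made-up benchmark results for two hypothetical GPUs: (ops per second, watts).
benchmarks = {
    "dense_linear_algebra": {"gpu_a": (30e12, 300), "gpu_b": (25e12, 200)},
    "graph_traversal":      {"gpu_a": (2e12, 250),  "gpu_b": (1e12, 220)},
}

for workload, results in benchmarks.items():
    print(workload)
    for gpu, (ops_per_s, watts) in results.items():
        # Efficiency = work done per unit of energy.
        print(f"  {gpu}: {ops_per_s / watts:.2e} ops per joule")
```

With these numbers gpu_b is the more efficient chip on the dense linear algebra workload, while gpu_a wins on the graph workload, so neither one is “the efficient GPU” in any absolute sense.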

And finally, to put a last nail in the coffin, GPU “efficiency” will have a big dependence on the overall system it is embedded in. The OS, interconnect, storage, etc. will further affect what users experience as far as power efficiency goes.

Huge numbers are fine. An order of magnitude expressed in exponent notation is fine too. Do you have such a number?

Is this correct?

Computation is based on physical systems and pretty much anything can represent some bits and a computation. Some of those systems could be very fast and very low energy, but they may be too difficult for us to manufacture today.

Basically speed and energy variables/limits for computations are different for different physical systems.

When you say comparable, you mean that externally/functionally it can perform the same set of computations, but internally it can be constructed in a completely different manner?

Is it required to use silicon?

If silicon, are we allowed to spend enough money to make the chip 100x larger (to optimize various computations)?

A key factor is that the market would never support a company going down a valid tangent for efficiency, due to the resulting cost.

I don’t know how the most thermodynamically efficient computer will be made or what it will be made of.

I am asking if there is a limit to the efficiency achievable, thermodynamically, by any commercially available GPU. If there is such a limit, where does the current efficiency stand, percentage-wise, relative to that limit?

Looks like you are saying that such a limit does not exist, or exists only once you pin down the manufacturing and performance methods. Am I understanding this correctly?

Correct.

For example: memristors can be used to create circuits that avoid the energy required to move data between memory and logic circuits. That’s a significant savings, but the research is still in progress.

The problem with “commercially available” is that you are just talking about what is available today. Every new iterative advancement changes the way the entire problem is being solved and typically increases efficiency. Having said that, if you restrict it to silicon and the current style of transistors, then it’s probably easier to show a physical limit.

This is a theorem based on purely thermodynamic considerations. Also, these computations are required to never erase anything (i.e. to be reversible). Since both speed and efficiency are still increasing, we are clearly nowhere near the theoretical limits.

Agreed and understood. How far we are from the theoretical limits is what I’m trying to understand. Is the theoretical limit, order-of-magnitude-wise, 10 or 100 or 10^10 times the current efficiency?

Ok, I was wondering if that is what you meant.

I can’t provide the answer, but I can tell you it’s not based on our current style of computing (e.g. silicon transistors as they look today).