How much energy to read/write a hard drive?

To write a bit to a hard drive, the read-write head has to be supplied with one or more bursts of electrical current in order to change the state of magnetization on the surface of the hard drive’s platter. It takes some voltage to drive that current, so voltage × current × time gives the energy needed to write that bit.
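As a rough sketch of that arithmetic, here's a back-of-envelope calculation in Python. All three input values are assumptions I made up for illustration; real write-head drive parameters are proprietary and vary by drive generation.

```python
# Hypothetical numbers, purely for illustration -- real write-head
# parameters are proprietary and vary by drive generation.
write_voltage = 0.5     # volts across the write head (assumed)
write_current = 0.04    # amps of head-coil current (assumed)
pulse_duration = 1e-9   # seconds per bit, i.e. a ~1 Gbit/s channel rate (assumed)

energy_per_bit = write_voltage * write_current * pulse_duration
print(f"Energy per written bit: {energy_per_bit:.1e} J")  # 2.0e-11 J with these guesses
```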

To read a bit from the hard drive, the magnetic fields on the moving platter induce currents in the read-write head. Since electrical work is being done there, reading a bit must exert some decelerative torque on the platter, with makeup energy delivered to the spindle motor from the computer’s power supply in order to keep the platter at a constant RPM.
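To put a number on that decelerative torque, here's a minimal sketch using P = τω. Both inputs are assumptions: the read power is a pure guess, and 7200 RPM is just a typical desktop spindle speed.

```python
import math

read_power = 1e-3  # watts of electrical power extracted from the platter (assumed)
rpm = 7200         # typical desktop HDD spindle speed

omega = rpm * 2 * math.pi / 60  # angular velocity, rad/s (~754 rad/s)
torque = read_power / omega     # decelerative torque, from P = tau * omega
print(f"Retarding torque: {torque:.1e} N*m")  # ~1.3e-06 N*m with these guesses
```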

So how much energy is consumed in these activities? Disregarding the movement of the read-write head, windage losses on the platter, and friction losses in the bearings: if I wanted to flip every bit on a 10-terabyte hard drive, how much energy would that take? Conversely, if I wanted to read every bit on a 10-terabyte hard drive, how much energy would that take? For the latter, disregard whatever energy is required to convert the faint signals from the read-write head into full-strength signals at the drive’s output connector. In both cases (read and write), I’m just trying to get a grasp on how much energy is expended in manipulating or reading the magnetic wrinkles on the drive’s platter.

According to this:

In commercial SSDs and HDDs, the energy consumption for one bit is ~1 nJ on average. At the data-center level, the effective average energy consumption per bit is ~0.2 mJ. Moving to higher levels, the dominant energy consumption is no longer from the storage itself, but from infrastructure and I/O. SOURCE

There is a lot more detail at the link.
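Taking the quoted figures at face value, the 10-terabyte question works out to roughly 80 kJ at the device level. A quick sanity check in Python:

```python
bits = 10e12 * 8                    # 10 TB = 1e13 bytes = 8e13 bits
energy_per_bit_device = 1e-9        # ~1 nJ per bit at the device level (quoted)
energy_per_bit_datacenter = 0.2e-3  # ~0.2 mJ per bit at the data-center level (quoted)

print(f"Device level:      {bits * energy_per_bit_device:.1e} J")      # 8.0e+04 J (~80 kJ)
print(f"Data-center level: {bits * energy_per_bit_datacenter:.1e} J")  # 1.6e+10 J (~16 GJ)
```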