To expand on what was said above:
Let’s imagine a disk where the innermost track has a 1" dia., and the outermost track is 5" dia. The other ring has 5x the magnetic material, and could potentially record 5x the data. Therefore, if the cost/complexity of the control electronics permit, we should ideally put 5x the number of constant-size sectors in the outermost ring.
Reading multiple sectors on the same track is faster than switching tracks, because the disk is spinning at a constant high speed (7200-10,000 RPM for today’s inexpensive IDE drives, or roughly 6-8.3 msec per revolution, i.e. to read the entire track, ignoring logical skewing, which I’m not even sure today’s drives do anymore). Switching tracks, however, requires moving the comparatively large physical arm, and then aligning it exactly with the new track through feedback. It just so happens that the time to switch to an adjacent track [track-to-track seek time] is typically comparable to the time required to read all the sectors on one track. Lord help you if you have to jump 10 tracks because your file is fragmented.
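(The rotation figures are just 60,000 msec divided by the RPM; here’s the trivial arithmetic, with an assumed ballpark for the adjacent-track seek since real figures vary drive to drive:)

```python
# Rotation time is simply 60,000 msec / RPM; the seek figure below is an
# assumed ballpark, not a spec-sheet number.
def ms_per_revolution(rpm: int) -> float:
    return 60_000 / rpm          # one full rotation = one full (unskewed) track read

print(ms_per_revolution(7_200))      # ~8.3 msec
print(ms_per_revolution(10_000))     # 6.0 msec

assumed_track_to_track_seek_ms = 8.0  # same order of magnitude as one revolution
```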
As a result, the inside tracks are “slower” (less data per revolution, more switching to other tracks). Since every [standard] computer has at least one boot drive, it makes sense to put the boot/OS information (which you only read once) in these slower sectors, and use faster [more data per revolution] outer sectors for normal operations. Which would you prefer: a computer that took an extra 100 msec to boot, or a computer that might take an extra 100 msec when you’re in the middle of using it?
The difference can be substantial. Reading one outermost track (again ignoring logical skewing) on a modern drive takes less than half the time of reading several innermost tracks to get the same amount of data (MUCH less than half, actually, because I’ve left out other major delays in track-to-track reading, like re-synching yourself with the new track so you know which sector you’re reading; after all, the platter is still spinning while you’re moving the arm). We don’t notice the difference directly, because a) it still only adds up to a fraction of a second; b) drives are really slow compared to the rest of the computer, so any drive access is like molasses; and c) our operating systems, IDE drives, and applications often make extensive use of caching (“semi-autonomously reading ahead just in case we need the data”) rather than “reading on specific demand only”. In the aggregate, though, it’s quite noticeable, which is why defragmenting a disk (or reinstalling a key application) can cause a speed-up even if “nothing is wrong”.
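Here’s a toy model of that comparison in Python. Every constant is an assumed ballpark, not a measurement of any real drive, but it shows why pulling the same data off several inner tracks costs several revolutions plus a seek-and-resync penalty per hop:

```python
from math import ceil

# All numbers below are assumed ballparks for illustration only.
REV_MS = 8.3                  # one revolution at 7200 RPM
TRACK_SEEK_MS = 1.0           # adjacent-track seek (assumed)
RESYNC_MS = REV_MS / 2        # average half-revolution wait to re-find our position
SECTORS_OUTER = 1000          # sectors on an outermost track (assumed)
SECTORS_INNER = 200           # sectors on an innermost track (assumed, ~5x fewer)

def read_time_ms(total_sectors: int, sectors_per_track: int) -> float:
    tracks_needed = ceil(total_sectors / sectors_per_track)
    transfer = (total_sectors / sectors_per_track) * REV_MS    # pure read time
    switching = (tracks_needed - 1) * (TRACK_SEEK_MS + RESYNC_MS)
    return transfer + switching

want = 1000   # sectors we want, which happens to fit on one outer track
print(read_time_ms(want, SECTORS_OUTER))   # ~8.3 msec: one revolution, no seeks
print(read_time_ms(want, SECTORS_INNER))   # ~62 msec: five revolutions plus four hops
```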
There used to be “disk optimizing” utilities that let you choose where to put certain files (inner, middle or outer) but I haven’t seen those in years. They may not be considered “worth the time/effort” with today’s much faster, much bigger drives – or perhaps few users would make better choices than the few basic rules embedded in the OS.