I’ll try to address your question in more detail, RaftPeople. Let’s first consider reads; then writes.
In a typical Unix OS circa 1980-1990, when a process requests a read of block N, it blocks (sleeps) until that read completes. Since the inter-sector delay is not generally long enough to allow a round-trip through the kernel and application and back to the disk with an N+1 request based on N, it sounds like you are imagining some interleave, e.g. 1, x, 2, x, 3, x, 4, x as mentioned earlier. In fact, however, such interleaving is undesirable: you run at only half speed in the sequential case. Instead, applications read in large chunks, e.g. {N, N+1, N+2, N+3} all in one request; this is supplemented by disk-controller pre-fetching and, optionally, kernel-driven speculative reads. In any case you either meet the rotational deadline or you don’t (you ‘slip a rev’); the cost of slippage is the same in either case (very slightly better for Ben, in fact).
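To see why interleaving halves sequential throughput, here is a toy model (my own illustration, not anything from Ben's design): it counts time in sector-times and ignores seek time, track boundaries, and controller overhead.

```python
# Toy model: time (in sector-times) to read a run of consecutive logical
# blocks when they are laid out with physical interleave factor k.
# Assumptions (mine, for illustration): uniform rotation, no seeks,
# no track crossings, one sector per logical block.
def read_time_sectors(num_blocks: int, interleave: int) -> int:
    # Each successive logical block sits `interleave` physical sectors
    # beyond the previous one, so the head spends `interleave` sector-times
    # per block before the next one rotates under it.
    return num_blocks * interleave

print(read_time_sectors(100, 1))  # contiguous layout
print(read_time_sectors(100, 2))  # the 1, x, 2, x, ... layout: twice as long
```

With interleave 2 the head passes over a dummy sector between every pair of useful ones, so the same sequential read takes twice as many rotations.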
But this is largely beside the point: when the disk head has been on-track for a sufficient period, the future data you speak of ({N+1, N+2, …}) will already have been pre-fetched under Ben’s N+2, N+1, N, N-1 ordering.
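The pre-fetch point can be sketched as a tiny read-ahead cache. This is a hypothetical illustration of the general idea, not the actual firmware logic; the window size and behavior are assumptions of mine.

```python
# Toy read-ahead cache: on a miss, fetch the requested block plus a
# read-ahead window in one pass over the track, so the next sequential
# request is satisfied from the buffer. Window size is illustrative.
class PrefetchCache:
    def __init__(self, window: int = 4):
        self.window = window
        self.buffer = {}

    def read(self, n: int) -> bool:
        """Return True on a prefetch hit; on a miss, fetch n and read ahead."""
        if n in self.buffer:
            return True
        # Miss: buffer n and the blocks the head will pass anyway.
        for b in range(n, n + self.window):
            self.buffer[b] = f"data@{b}"
        return False

cache = PrefetchCache()
print(cache.read(10))  # first access: a physical read that also prefetches
print(cache.read(11))  # satisfied from the read-ahead buffer
```

In which physical order the track was captured (ascending, or Ben’s descending) does not matter to the requester; the blocks are in the buffer either way.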
Random reads are the same in either case.
So much for reads; let’s talk about writes.
Again, an application written for high performance may issue its writes in large chunks, e.g. {N, N+1, N+2, …, N+7} for a 64k write. (Just an example; neither the actual block size nor the ordering granularity is essential to the method.) The disk controller will lay that write data down in the order {N+7, N+6, N+5, …, N}.
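The chunk-reversal step itself is trivial, which is part of the appeal. A minimal sketch (block numbers and chunk size are illustrative, as above):

```python
# Toy sketch: the order in which the controller lays down a chunked
# write under the descending ordering described above. The chunk here
# is {N, ..., N+7} with N = 100 purely as an example.
def service_order(chunk: list[int]) -> list[int]:
    """Return the physical write order for a chunk of logical blocks."""
    return list(reversed(chunk))

chunk = list(range(100, 108))      # {N, N+1, ..., N+7}
print(service_order(chunk))        # highest-numbered block first
```

The application still sees one 64k write complete; only the order the sectors hit the platter changes.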
But suppose that the application does issue writes in smaller granules.
In the Unix OS, writes can be asynchronous. Setting aside the possibility of a permanent disk I/O error in the future (write failures often go unnoticed until read-back anyway), the application can be rewoken as soon as the kernel (or controller) has committed to the write. Disk queues are often full of asynchronous writes under Unix, and re-ordering those requests to maximize disk performance is routine. Thus performing the writes {N}, {N+1}, {N+2}, {N+3} in the opposite order of request arrival is merely a new variation on the re-ordering that disk firmware (at whatever level) already does without Ben’s method.
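To make the queue-reordering point concrete, here is a toy asynchronous write queue serviced newest-first, i.e. in the opposite order of request arrival. This is one re-ordering policy among many (elevator/SCAN scheduling being the classic alternative), and the class below is my own illustration, not any particular kernel's interface.

```python
from collections import deque

# Toy sketch: pending asynchronous writes, serviced in reverse arrival
# order. The caller "completes" at submit time; the media write is deferred.
class WriteQueue:
    def __init__(self) -> None:
        self._pending: deque[int] = deque()

    def submit(self, block: int) -> None:
        # Asynchronous: the application is rewoken immediately;
        # the physical write happens later, in whatever order we choose.
        self._pending.append(block)

    def drain(self) -> list[int]:
        # Service newest-first: the opposite of request arrival order.
        order = []
        while self._pending:
            order.append(self._pending.pop())
        return order

q = WriteQueue()
for b in (0, 1, 2, 3):   # application issues {N}, {N+1}, {N+2}, {N+3}
    q.submit(b)
print(q.drain())         # serviced as N+3, N+2, N+1, N
```

Nothing here requires new machinery; it is the same deferred-write plumbing the kernel and firmware already rely on, with a different sort key.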