What is the fastest sorting algorithm?

I read that Quick Sort is the fastest in the average case. I am assuming that on a random set of numbers it would be the fastest on average.

Is this proven to be the fastest algorithm? In what cases would other algorithms be faster? Are all those other algorithms just curiosities, or do most of them have a use?

There are certainly some cases where other algorithms are faster than quicksort. While quicksort is n*log(n) (and with a fairly low constant factor) on average, its worst-case scenario is n^2. What’s worse, for the textbook version that always picks the first (or last) element as its pivot, that worst case hits precisely when the input list is already sorted, which is a not-uncommon situation in the real world.
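Here’s a minimal sketch of that textbook quicksort with the naive first-element pivot (the function name and list-copying style are mine, for illustration; real implementations partition in place):

```python
# Textbook quicksort with a naive first-element pivot (illustration only).
# On an already-sorted list every partition is maximally lopsided, so the
# total work degrades to n^2 and the recursion goes n levels deep.

def quicksort_naive(items):
    if len(items) <= 1:
        return items
    pivot, rest = items[0], items[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return quicksort_naive(left) + [pivot] + quicksort_naive(right)

print(quicksort_naive([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
# quicksort_naive(list(range(10_000))) hits the worst case: every call peels
# off just one element, so the recursion depth grows to n (and in Python
# would blow the default recursion limit).
```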

There are some slight variations on quicksort that address this. They still have a worst-case performance of n^2; they just hit it on other specific orderings that are much less likely to come up in the real world. And their overall efficiency on a random input is slightly lower than that of textbook quicksort, because choosing a smarter pivot adds a little overhead. But almost all implementations of “quicksort” used in the real world are one of these variants, because in the real world, the gain from rarely having to deal with that worst case is worth it.
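For example, here’s a sketch of the randomized-pivot variant (one common choice; median-of-three is another). The worst case is still n^2, but no fixed input ordering triggers it reliably:

```python
import random

# Quicksort with a randomly chosen pivot. Sorted input is no longer a
# worst case, because the pivot no longer depends on the input's order.
# The worst case still exists, but only by bad luck, not by bad data.

def quicksort_random_pivot(items):
    if len(items) <= 1:
        return items
    pivot = items[random.randrange(len(items))]
    left = [x for x in items if x < pivot]
    middle = [x for x in items if x == pivot]
    right = [x for x in items if x > pivot]
    return quicksort_random_pivot(left) + middle + quicksort_random_pivot(right)
```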

Alternatively, there are other algorithms that are n*log(n) (but with a larger constant factor) in all cases, and which work equally well on all inputs. My personal favorite is mergesort, because despite not being the most efficient, it’s one of the easiest for a mere human to understand intuitively.
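Here’s roughly what it looks like (a sketch; the names are mine): split the list in half, sort each half, then merge the two sorted halves.

```python
# Mergesort: O(n log n) comparisons on every input, sorted or not,
# at the cost of O(n) extra space for the merge step.

def mergesort(items):
    if len(items) <= 1:
        return items
    mid = len(items) // 2
    left, right = mergesort(items[:mid]), mergesort(items[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):  # take the smaller head each time
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]     # one side still has leftovers
```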

Also, all of this discussion is just about comparison sorts: you look at A and B, and you decide either that A is before B or that A is after B, with no further information. But if you do use further information, you can get significantly more efficient. For instance, if I have a stack of papers with student names on them and I see a name starting with Z, I’m going to immediately move it to the back of the stack: I know not only that Z is after E, but that it’s much after E. This only works with some known approximate distribution of names (or whatever the sorting key is), though: if I’m using an algorithm like that but happen to have a list consisting entirely of names starting with Z, I’m going to waste a lot of time doing nothing but putting them all at the bottom of the stack.
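A toy version of that stack-of-papers idea (bucket names by first letter, then deal with each small pile; the function name is mine):

```python
from collections import defaultdict

# Use knowledge of the key distribution to place items near their final
# position immediately, instead of comparing pairs. If every name starts
# with "Z", everything lands in one bucket and the shortcut buys nothing,
# which is exactly the failure mode described above.

def sort_by_first_letter(names):
    buckets = defaultdict(list)
    for name in names:
        buckets[name[0].upper()].append(name)
    result = []
    for letter in sorted(buckets):              # A, B, C, ... in order
        result.extend(sorted(buckets[letter]))  # each pile is small on average
    return result

print(sort_by_first_letter(["Smith", "Adams", "Zhao", "Baker"]))
# ['Adams', 'Baker', 'Smith', 'Zhao']
```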

Right: other algorithms are not just curiosities, and they may be faster than quicksort for some purposes. For example, consider sorting in parallel; then, depending on your architecture, some form of PSRS sample sort might be efficient. Or consider sorting a list of integers: there are special algorithms adapted for that; or sorting strings, or sorting data that is not randomly ordered, or sorting really huge amounts of data…
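One of those integer-specific algorithms, as a sketch: least-significant-digit radix sort, which never compares two keys at all (this version assumes non-negative integers).

```python
# LSD radix sort: distribute by the last digit, then the next digit, and
# so on. Runs in O(n * d) for d-digit keys instead of O(n log n), because
# it reads digits instead of doing comparisons.

def radix_sort(nums, base=10):
    if not nums:
        return nums
    exp = 1
    while max(nums) // exp > 0:
        buckets = [[] for _ in range(base)]
        for n in nums:
            buckets[(n // exp) % base].append(n)  # stable within each bucket
        nums = [n for bucket in buckets for n in bucket]
        exp *= base
    return nums

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```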

Quicksort is, or at least was, about the best sorting algorithm for one particularly important set of computer limitations: a very large data set, a very small memory, fast swapping, and a single processor. Wikipedia says that it is “a space-optimized version of the binary tree sort”.

The sorting algorithm Wikipedia calls the “Postman’s Sort” is a type of bucket sort, and its speed is O(n). But in the simplest theoretical version, it requires n^2 memory.
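The simplest member of that family is counting sort, which makes the time/memory trade explicit: O(n + k) time for keys in the range 0..k-1, but an auxiliary array of size k (a sketch, assuming small non-negative integer keys):

```python
# Counting sort: tally how many times each key occurs, then read the
# tallies back out in order. No comparisons at all, but the counts array
# grows with the key range k, not with the number of items n.

def counting_sort(nums, k):
    counts = [0] * k                 # memory proportional to the key range
    for n in nums:
        counts[n] += 1
    out = []
    for value, count in enumerate(counts):
        out.extend([value] * count)
    return out

print(counting_sort([4, 2, 2, 8, 3, 3, 1], k=9))  # [1, 2, 2, 3, 3, 4, 8]
```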

And that’s one of the ones that requires more than just comparisons. There’s a straightforward mathematical proof that you can’t do better than n*log(n) for any comparison sort (basically, it’s just a measure of the amount of information, in bits, in an ordered list of n items, since each comparison gives you 1 bit).
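A sketch of that counting argument in symbols: n items have n! possible orderings, and each comparison at best halves the orderings still consistent with the answers so far, so any comparison sort needs at least log2(n!) comparisons in the worst case, and

```latex
\log_2(n!) \;\ge\; \log_2\!\left(\left(\tfrac{n}{2}\right)^{n/2}\right)
           \;=\; \tfrac{n}{2}\log_2\tfrac{n}{2}
           \;=\; \Omega(n \log n)
```

since the largest n/2 factors of n! are each at least n/2.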

Oh, and as for other applications where other algorithms are better, even bad old insertion sort has its place: situations where the rate at which data comes in is a more significant limitation than the sorting itself (for instance, if you get a new data point every day and want to continually maintain a sorted list).
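In Python that whole scheme is a couple of lines with the standard-library bisect module (the variable names are mine):

```python
import bisect

# Maintaining a sorted list as data trickles in: each insertion is a
# binary search (O(log n)) plus a shift (O(n)), which is trivial when
# you only get one new data point at a time.

readings = []
for value in [7, 3, 9, 1, 4]:        # stand-in for daily data points
    bisect.insort(readings, value)   # insert while keeping the list sorted
print(readings)                      # [1, 3, 4, 7, 9]
```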

Yes, I was trying to answer the OP without getting into the fact that by “just comparisons” you actually mean a particular kind of binary operator.