Do any kind of web search, database query, etc., and the results usually get doled out 10 at a time (or 20, or 50, or whatever) to lower the load on the server. In cases where there are just one or two results remaining, I think those should be shown too. For example, if there are 52 results, the first four pages should have 10 results each and the last one should have the remaining 12. In general, if the number of remaining results is less than 30% of the results per page, just show them.
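A rough sketch of what I mean, in TypeScript (the function name and the exact 30% cutoff are just illustrative):

```typescript
// Proposed rule: if the leftover results on the final page would be fewer
// than 30% of the page size, fold them into the previous page instead of
// serving them as a separate, nearly empty page.
function pageRanges(total: number, pageSize: number, threshold = 0.3): Array<[number, number]> {
  const pages: Array<[number, number]> = [];
  for (let start = 0; start < total; start += pageSize) {
    const remainingAfter = total - (start + pageSize);
    if (remainingAfter > 0 && remainingAfter < pageSize * threshold) {
      pages.push([start, total]); // this page absorbs the stragglers
      return pages;
    }
    pages.push([start, Math.min(start + pageSize, total)]);
  }
  return pages;
}

// pageRanges(52, 10) -> [[0,10],[10,20],[20,30],[30,40],[40,52]]
```

With 52 results and pages of 10, that yields five pages instead of six, the last one carrying 12 results.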
I’ve never seen this on any system and I’m surprised no one’s tried it. Wouldn’t it reduce the number of server requests and allow the server to support more users?
If result counts follow Benford’s law, it may be more likely than random chance that there will be a few more than a round number of results. Just speculating.
Thanks to Google, I’m well-acquainted with “Results 21-27 of 30; the remaining results are being hidden from you because they’re just duplicates, but if you want to see the duplicates there are 14 of them, which also doesn’t add up to 30”
Then you have to include a check for those cases in every search, so your calculation of the load on the servers has to take that into consideration. You probably also have a database server and a webserver (or many of each), so which of those is taking the additional load? And how often do people find what they’re looking for on that penultimate page and complete their search without ever loading the final page? Often enough to make loading those two additional records a waste?
In the end, though, I think this is more about laziness and avoiding complexity. That is, if this doesn’t already exist. You probably wouldn’t notice it unless you make a habit of paying close attention to the number of results on the last page and not, as I’m assuming, just noticing it when it’s an annoyingly low number.
I have seen it. One implementation I recall details about simply received the additional rows, hid them on the page, and exposed them when you clicked to get more. And then there are applications where more rows of data than the page size are sent from the server initially and the client does the pagination, with further data requested or pushed at whatever pace is practical. Given the low resource cost of producing the page compared to finding the data in the first place, and the variety of ways to handle that data on the server side, by the time the accumulation of extra server requests could make a difference, you were already far past the point where your server was overloaded.
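A bare-bones sketch of that client-side variant (all names hypothetical; `fetchBatch` stands in for whatever endpoint actually serves the rows). The server sends a larger batch once, and the client slices it locally, only hitting the network when the buffer runs out:

```typescript
// Hypothetical client-side pager: fetchBatch() is assumed to return up to
// `batchSize` rows starting at `offset`; only its general shape matters here.
class ClientPager<T> {
  private buffer: T[] = [];
  private offset = 0;

  constructor(
    private fetchBatch: (offset: number, batchSize: number) => Promise<T[]>,
    private pageSize: number,
    private batchSize: number, // e.g. 5x pageSize, fetched in one request
  ) {}

  // Returns the next page, fetching a new batch only when the buffer is empty.
  async nextPage(): Promise<T[]> {
    if (this.buffer.length === 0) {
      this.buffer = await this.fetchBatch(this.offset, this.batchSize);
      this.offset += this.buffer.length;
    }
    return this.buffer.splice(0, this.pageSize);
  }
}
```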
Isn’t it a heck of a lot easier to program “do x many at a time until the end” rather than “do x many at a time, but if only a few would be left over, make the last page x + y”? Especially since, as AHunter3 indicated, the last page might not have the y you expect from the total given on the first page.
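For what it’s worth, here is roughly what the two per-request calculations look like (hypothetical names, assuming the 30% threshold from the original post). The plain version needs nothing but the page number; the merged version also needs an up-front total, which, per the Google example above, may not even be reliable:

```typescript
// Plain pagination: the same LIMIT/OFFSET for every page until rows run out.
function plainQuery(page: number, pageSize: number) {
  return { offset: page * pageSize, limit: pageSize };
}

// Merge-aware pagination: must know `total` to decide whether this page
// should absorb a short final page (assumed 30% threshold, as proposed above).
function mergedQuery(page: number, pageSize: number, total: number) {
  const offset = page * pageSize;
  const leftoverAfter = total - (offset + pageSize);
  const absorb = leftoverAfter > 0 && leftoverAfter < pageSize * 0.3;
  return { offset, limit: absorb ? pageSize + leftoverAfter : pageSize };
}

// mergedQuery(4, 10, 52) -> { offset: 40, limit: 12 }
```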