On Fri, Aug 26, 2022 at 9:06 PM Benjamin Coutu <ben.co...@zeyos.com> wrote:
>
> Another idea could be to run a binary insertion sort and use a much higher 
> threshold. This could significantly cut down on comparisons (especially with 
> presorted runs, which are quite common in real workloads).

Comparisons that must go to the full tuple are expensive enough that
this idea might have merit in some cases, but that would be a research
project.
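
For illustration, here is a minimal sketch of the idea on a plain int
array (the function name and use of ints are mine; a real version would
operate on SortTuples and call the full tuple comparator, which is where
any savings would have to come from):

#include <string.h>             /* for memmove */

/* Hypothetical sketch, not from any patch: binary insertion sort.
 * Finds the insertion point with binary search (~log2(i) comparisons
 * per element) instead of scanning; the number of element moves is the
 * same as in plain insertion sort. */
static void
binary_insertion_sort(int *a, int n)
{
    for (int i = 1; i < n; i++)
    {
        int     key = a[i];
        int     lo = 0;
        int     hi = i;

        /* binary search for the insertion point; stable because equal
         * keys send the search right */
        while (lo < hi)
        {
            int     mid = lo + (hi - lo) / 2;

            if (key < a[mid])
                hi = mid;
            else
                lo = mid + 1;
        }

        /* shift the tail right and drop the key in */
        memmove(&a[lo + 1], &a[lo], (i - lo) * sizeof(int));
        a[lo] = key;
    }
}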

> If full binary search turned out to be an issue regarding cache locality, we 
> could do it in smaller chunks,

The main issue with binary search is poor branch prediction. Also, if
large chunks are bad for cache locality, isn't that a strike against
using a "much higher threshold"?
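
Purely as an illustration of the branch prediction point (not something
proposed in this thread), a common mitigation is a "branchless" search
that replaces the taken/not-taken branch with a data-dependent
conditional move; the names here are made up:

/* Hypothetical sketch: branchless lower-bound search.  The compiler can
 * usually turn the ternary into a cmov, so there is no unpredictable
 * branch in the loop body.  Whether this pays off with expensive
 * full-tuple comparators is exactly the open question. */
static int
lower_bound(const int *a, int n, int key)
{
    const int *base = a;

    if (n == 0)
        return 0;

    while (n > 1)
    {
        int     half = n / 2;

        /* conditional move instead of a hard-to-predict branch */
        base += (base[half - 1] < key) ? half : 0;
        n -= half;
    }
    return (int) (base - a) + (base[0] < key);
}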

> With fewer comparisons we should start keeping track of swaps and use that as 
> an efficient way to determine presortedness. We could change the insertion 
> sort threshold to a swap threshold and do insertion sort until we hit the 
> swap threshold. I suspect that would make the current presorted check 
> obsolete (as binary insertion sort with no swaps, or even a few, should be 
> faster than the current presorted check).

The thread you linked to discusses partial insertion sort as a
replacement for the pre-sorted check, along with benchmark results and
graphs IIRC. I think it's possibly worth doing, but it needs more
investigation to make sure the (few) regressions I saw were either
just noise or can be ameliorated. As I said in the dual pivot thread,
this would be great for dual pivot since we could reuse partial
insertion sort for choosing the pivots, reducing binary size.
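
For concreteness, here is a hypothetical sketch of the swap-budget idea
(the names and the budget constant are made up, and it counts element
moves rather than swaps, since insertion sort shifts elements rather
than swapping them):

#include <stdbool.h>

#define MAX_MOVES 8             /* illustrative budget, not tuned */

/* Hypothetical sketch: insertion sort that gives up once the move
 * budget is exhausted.  Returns true if the array ended up fully
 * sorted, so the caller can treat the input as presorted; on false,
 * the array is still a valid (partially sorted) permutation and the
 * caller falls through to quicksort. */
static bool
partial_insertion_sort(int *a, int n)
{
    int     moves = 0;

    for (int i = 1; i < n; i++)
    {
        int     key = a[i];
        int     j = i;

        while (j > 0 && a[j - 1] > key)
        {
            a[j] = a[j - 1];
            j--;
            if (++moves > MAX_MOVES)
            {
                a[j] = key;     /* keep the array a valid permutation */
                return false;   /* budget exhausted: not presorted */
            }
        }
        a[j] = key;
    }
    return true;                /* sorted within the budget */
}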

-- 
John Naylor
EDB: http://www.enterprisedb.com

