Okay,
I have some converted code that uses this syntax.
For 20 million rows it was taking 15-20 minutes (versus 3 minutes) on
live data.
See here: https://explain.depesz.com/s/VQFJ [There are two optimizations:
removing the ORDER BY NULL, and just using a sequence]
(The above is from live data.)
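For illustration, the two optimizations mentioned above might look like this. The original query isn't shown in the thread, so the table name, the window function, and the sequence name here are assumptions:

```sql
-- Assumed original: a window function ordered by a constant NULL.
SELECT row_number() OVER (ORDER BY NULL) AS rn, t.*
FROM big_table t;

-- Optimization 1: drop the pointless ORDER BY NULL entirely.
SELECT row_number() OVER () AS rn, t.*
FROM big_table t;

-- Optimization 2: skip the window function and use a sequence instead.
CREATE TEMP SEQUENCE rn_seq;
SELECT nextval('rn_seq') AS rn, t.*
FROM big_table t;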
Kirk Wolak writes:
> I have some converted code that uses this syntax.
Seems kinda dumb, but ...
> The solution is to remove the ORDER BY NULL. [since that is not
> sortable, should it be ignored?]
> This does NOT SHOW UP with 1 million rows.
I don't see it at all.  Comparing your two test queries on released
branches, I see maybe 2x penalty for the ORDER BY NULL, not 30x.
(In HEAD there's only about 13% penalty.)  I wonder what PG version
you are testing.
On Mon, 20 Feb 2023 at 10:18, Tom Lane wrote:
> I don't see it at all. Comparing your two test queries on released
> branches, I see maybe 2x penalty for the ORDER BY NULL, not 30x.
> (In HEAD there's only about 13% penalty.) I wonder what PG version
> you are testing.
I suspect ed1a88dda would be what made this faster in master. We'll
check for peer rows to check "NULL IS NOT DISTINCT FROM NULL" prior to
that change with the ORDER BY NULL query.
David Rowley writes:
> I suspect ed1a88dda would be what made this faster in master. We'll
> check for peer rows to check "NULL IS NOT DISTINCT FROM NULL" prior to
> that change with the ORDER BY NULL query.
Mmm, yeah, probably so: "order by null rows between unbounded preceding
and current row"
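A minimal reproducer along those lines, using the window spec quoted above (the row source is an assumption; the original test queries aren't shown):

```sql
-- Fast: no ORDER BY in the window, so no per-row peer checks are needed.
SELECT count(*) OVER (ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM generate_series(1, 1000000);

-- Slow before ed1a88dda: the constant ORDER BY makes the executor test
-- each row for "peer" status, i.e. NULL IS NOT DISTINCT FROM NULL,
-- even though the result is identical to the query above.
SELECT count(*) OVER (ORDER BY NULL
                      ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW)
FROM generate_series(1, 1000000);
```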
On Mon, 20 Feb 2023 at 00:26, David Rowley wrote:
> On Mon, 20 Feb 2023 at 10:18, Tom Lane wrote:
> > I don't see it at all. Comparing your two test queries on released
> > branches, I see maybe 2x penalty for the ORDER BY NULL, not 30x.
> > (In HEAD there's only about 13% penalty.)  I wonder what PG version
> > you are testing.
>
> >> Can you run amcheck's bt_index_check() routine against some of the
> >> indexes you've shown? There is perhaps some chance that index
> >> corruption exists and causes VACUUM to take a very long time to delete
> >> index pages. This is pretty much a wild guess, though.
Unfortunately I can't ...