On 10/18/24 07:54, Andy Fan wrote:
> Nikita Malakhov <huku...@gmail.com> writes:
>> The effect is easily seen in one of the standard PG tests:
"""
I think that things might work out better if we redefined the startup
cost as "estimated cost to retrieve the first tuple", rather than its
current very-squishy definition as "cost to initialize the scan".
"""
> The above statement makes me confused. If we take the startup cost as
> the cost to retrieve the first tuple, we can do the below quick hack,
A promising way to go. Of course, even in that case an IndexScan
usually gives way to a SeqScan (because of the additional heap fetches),
and only an IndexOnlyScan may overcome it. Moreover, the first-tuple
cost of a SeqScan is a tricky issue: who knows how many tuples it will
filter out before the first tuple is produced?
> Looks like we still have some other stuff to do, but we have seen that
> the desired plan has a cost closer to the estimated best plan than before.
But our patch is about different stuff: by adding one more Append
strategy (and, as I realised recently, a promising MergeAppend one too),
we give an upper fraction-friendly node a chance to decide which plan is
better. Right now, AFAIK, only the LIMIT node can profit from that
(maybe IncrementalSort too, if we include MergeAppend), but it may open
the door to improving other nodes as well.
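Just to show the kind of query I mean (a sketch with a hypothetical
range-partitioned table pt; the names and numbers are mine, not from the
patch or its tests): a fraction-friendly LIMIT sits on top of Append (or
MergeAppend for the ordered case), and each child would ideally be
planned for its fractional cost instead of a full scan:

CREATE TABLE pt (id int, v int) PARTITION BY RANGE (id);
CREATE TABLE pt_1 PARTITION OF pt FOR VALUES FROM (0) TO (500000);
CREATE TABLE pt_2 PARTITION OF pt FOR VALUES FROM (500000) TO (1000000);
INSERT INTO pt SELECT i, i % 100 FROM generate_series(0, 999999) i;
CREATE INDEX ON pt (v);
VACUUM ANALYZE pt;

-- Append under LIMIT: only a handful of rows is needed from each child
EXPLAIN SELECT * FROM pt WHERE v = 42 LIMIT 10;

-- MergeAppend case: ORDER BY ... LIMIT could profit from fractional child paths
EXPLAIN SELECT * FROM pt ORDER BY v LIMIT 10;

Whether each child under the (Merge)Append ends up with a cheap
fractional path or a full scan is exactly the decision we want to let
the upper node influence.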
--
regards, Andrei Lepikhov