On Mon, 2021-12-06 at 19:22 +0100, Pavel Stehule wrote:

> On Mon, 6 Dec 2021 at 18:21, Francisco Olarte <fola...@peoplecall.com> wrote:
> > On Mon, 6 Dec 2021 at 18:03, Alan Hodgson <ahodg...@lists.simkin.ca> wrote:
> > > # explain SELECT "shipment_import_records".* FROM 
> > > "shipment_import_records" WHERE 
> > > "shipment_import_records"."shipment_import_id" = 5090609 ORDER BY 
> > > "shipment_import_records"."id" ASC LIMIT 1;
> > >                                                          QUERY PLAN
> > > -----------------------------------------------------------------------------------------------------------------------------
> > > Limit  (cost=0.44..873.35 rows=1 width=243)
> > >    ->  Index Scan using shipment_import_records_pkey on 
> > > shipment_import_records  (cost=0.44..5122227.70 rows=5868 width=243)
> > >          Filter: (shipment_import_id = 5090609)
> > > .. which takes minutes.
> > >
> > > Just wondering if there's a knob I can turn to make these more likely to 
> > > work without constantly implementing workarounds?
> > 
> > You may try a composite index.
>
> +1 These issues can be solved by composite indexes. A low LIMIT distorts 
> the cost estimates, and when the data are not really random, the index scan 
> can take too long.
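
For the query in question, that would be an index on the filter column plus
the sort column, something like this (the index name is just an example):

    CREATE INDEX shipment_import_records_import_id_id_idx
        ON shipment_import_records (shipment_import_id, id);

With such an index, the planner can return the first matching row in "id"
order directly, instead of walking the primary key index and filtering.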

An ugly alternative is to use "ORDER BY id + 0", which prevents PostgreSQL
from using the primary key index to satisfy the ORDER BY.
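
Applied to the query from the original post, that would look roughly like
this:

    SELECT "shipment_import_records".*
      FROM "shipment_import_records"
     WHERE "shipment_import_records"."shipment_import_id" = 5090609
     ORDER BY "shipment_import_records"."id" + 0 ASC
     LIMIT 1;

Since "id + 0" is an expression, the primary key index no longer matches the
ORDER BY, and the planner has to pick a different plan, typically fetching
the matching rows first and sorting them afterwards.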

Yours,
Laurenz Albe
-- 
Cybertec | https://www.cybertec-postgresql.com
