Hi Tom, Hi Rob

Thanks for this clear and complete explanation. My question was unclear since I 
didn't even consider whether the results could be identical; it was only about 
the plans. I had misunderstood what J. Lewis had written: he probably meant that 
some RDBMS always do a BETWEEN SYMMETRIC. Our application currently has dual 
compatibility with Oracle and PostgreSQL (it will be PostgreSQL-only after 
2021), and I hope we will never have to support an RDBMS that does not respect 
the SQL standard on this point.

>But if you have a workload where it's really worth spending
>planner cycles looking for self-contradictory queries, you can
>turn it on.

It was theoretical, but this is a DSS tool and some queries can be built 
dynamically by the end users. Perhaps it really happens in some cases, since I 
don't know whether we always check the criteria so as not to obtain 
self-contradictory queries. Since it's not OLTP, our execution times always 
matter much more than our planning times anyway.
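
If we ever decide it's worth it, what I have in mind is only a session-level 
toggle for the DSS connections that run dynamically built queries, along these 
lines (a sketch; the table and column names are made up):

-- enable full constraint exclusion for this session only, so that
-- dynamically built self-contradictory predicates are caught at plan time
SET constraint_exclusion = on;
-- hypothetical example of a predicate an end user could end up building
EXPLAIN SELECT * FROM stays WHERE hospital_id BETWEEN 20 AND 10;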

There are other areas where it would be more worthwhile to spend time for 
better performance. Debian 10/PostgreSQL 11 is our initial PostgreSQL platform. 
It's very efficient, but I have to prepare Debian 11/PostgreSQL 13. My first 
goal was to avoid performance regressions, but I now want to take full 
advantage of PostgreSQL's planning capabilities. We currently have to 
completely deactivate merge joins for one workload and nested loops for 
another. It works, but it's not optimal.
My current goal is to activate (almost) everything with Debian 11/PostgreSQL 13 
and everything with Debian 12/PostgreSQL 14+.
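
For what it's worth, the current workaround is nothing more than per-session 
planner toggles of this kind (a sketch, not our exact code):

-- workload where merge joins are the problem
SET enable_mergejoin = off;
-- other workload, where nested loops are the problem
SET enable_nestloop = off;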
I will try to increase default_statistics_target; it could be worth the price. 
I will also try to activate enable_partitionwise_aggregate and 
enable_partitionwise_join, since we use partitioning by list of hospitals and 
subpartitioning by range of times. Replacing our slow Oracle "union all" views 
with PostgreSQL partitioned tables to deal with groups of hospitals still has 
to be completed.
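
To make the intent concrete, here is a simplified sketch of the target layout 
and of the settings I plan to benchmark (table, column and partition names are 
made up, and the default_statistics_target value is only a starting point):

-- partitioning by list of hospitals...
CREATE TABLE stays (
    hospital_id int          NOT NULL,
    stay_start  timestamptz  NOT NULL,
    details     text
) PARTITION BY LIST (hospital_id);

-- ...with subpartitioning by range of times
CREATE TABLE stays_h1 PARTITION OF stays
    FOR VALUES IN (1)
    PARTITION BY RANGE (stay_start);

CREATE TABLE stays_h1_2021 PARTITION OF stays_h1
    FOR VALUES FROM ('2021-01-01') TO ('2022-01-01');

-- settings to test at session level first, then in postgresql.conf
SET enable_partitionwise_join = on;
SET enable_partitionwise_aggregate = on;
SET default_statistics_target = 500;  -- takes effect for subsequent ANALYZE runs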

Best regards,

Phil


________________________________
From: Tom Lane <t...@sss.pgh.pa.us>
Sent: Friday, January 15, 2021 03:12
To: Phil Florent <philflor...@hotmail.com>
Cc: pgsql-gene...@postgresql.org <pgsql-gene...@postgresql.org>
Subject: Re: Strange (and good) side effect of partitioning ?

I wrote:
> There's no specific mechanism in Postgres that would cause "X between 20
> and 10" to be reduced to constant-false

Wait, I take that back.  There is a mechanism that can conclude that
"X >= 20" and "X <= 10" are contradictory, but it's not applied by
default.  Observe:

regression=# set constraint_exclusion = default;
SET
regression=# explain select * from tenk1 where unique1 between 20 and 10;
                                 QUERY PLAN
-----------------------------------------------------------------------------
 Index Scan using tenk1_unique1 on tenk1  (cost=0.29..8.30 rows=1 width=244)
   Index Cond: ((unique1 >= 20) AND (unique1 <= 10))
(2 rows)

regression=# set constraint_exclusion = on;
SET
regression=# explain select * from tenk1 where unique1 between 20 and 10;
                QUERY PLAN
------------------------------------------
 Result  (cost=0.00..0.00 rows=0 width=0)
   One-Time Filter: false
(2 rows)

The default value of constraint_exclusion is "partition", which means
(you guessed it) that it's applied only to potential partitioning
constraints.  This is a heuristic based on the typical payoff of
excluding whole partitions versus skipping an empty index scan.
But if you have a workload where it's really worth spending
planner cycles looking for self-contradictory queries, you can
turn it on.

                        regards, tom lane
