> I see an estimate for 1000 rows in your EXPLAIN output too, so you're
> experiencing the same, although in your case the estimate of 1000 might be
> more accurate. The misestimation was causing significant performance
> problems for me.
> My solution was to wrap generate_series() in a custom function
But in terms of solving my problem, this seems to do the trick.
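For the archives, here is a minimal sketch of the kind of wrapper being
described (the function name, signature, and ROWS figure are placeholders I
picked for illustration; plpgsql just ensures the body isn't inlined, so the
ROWS estimate is what the planner actually sees):

-- Illustrative only: same output as generate_series(), but the ROWS
-- clause gives the planner a row-count estimate of our choosing instead
-- of the default estimate for a set-returning function.
CREATE FUNCTION bounded_series(start_ts timestamptz,
                               end_ts   timestamptz,
                               step     interval)
RETURNS SETOF timestamptz
LANGUAGE plpgsql
ROWS 12  -- adjust toward the real number of generated rows
AS $$
BEGIN
    RETURN QUERY SELECT generate_series(start_ts, end_ts, step);
END;
$$;

The query then calls bounded_series(...) wherever it previously called
generate_series(...).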
Thanks again!
On Wed, Nov 8, 2023 at 5:45 PM David G. Johnston wrote:
> On Wed, Nov 8, 2023 at 6:26 PM Lincoln Swaine-Moore <lswainemo...@gmail.com>
> wrote:
>
>> SELECT
>>
>
in various ways to match data's index
- starting with data as the left table and periods on the right (I'd have
to do some post-processing here, but if it was fast it would be worth it)
The only thing I can think of left to do is to run 12 separate queries--one
for each month, which is a time period that blows up the nested loop just
enough that the query can finish--and then concatenate them after the fact.
But it seems like that's just using some knowledge about the structure of
the tables that I should be able to communicate to the planner, and I'd
really love to keep it all in the database!
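For concreteness, the brute-force version would look roughly like the
following, whether the concatenation happens client-side or as one UNION ALL
(the table name, timestamp column, and dates are simplified placeholders,
and I've left out the join to the generated periods):

-- Illustrative only: one query per calendar month, stitched together.
(SELECT * FROM data
 WHERE tmstmp >= '2023-01-01' AND tmstmp < '2023-02-01')
UNION ALL
(SELECT * FROM data
 WHERE tmstmp >= '2023-02-01' AND tmstmp < '2023-03-01')
-- ... ten more one-month slices ...
UNION ALL
(SELECT * FROM data
 WHERE tmstmp >= '2023-12-01' AND tmstmp < '2024-01-01');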
Thanks for any and all help and suggestions.
--
Lincoln Swaine-Moore
8 at 3:12 PM David Rowley wrote:
> >
> > On 9 November 2018 at 08:35, Lincoln Swaine-Moore wrote:
> > > My primary question is: why is this approach only possible (for data too
> > > large for memory) when using NOT EXISTS, and not when using NOT IN?
I would expect that to improve performance of the NOT IN operation, since a
single pass through that data should reveal if there are any NULL values, at
which point that information could be used to short-circuit. So I am a bit
baffled.
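For anyone following along, the two shapes I'm comparing are roughly these
(a and b are placeholder tables here, with b.a_id allowed to be NULL):

-- NOT IN: if b.a_id can be NULL, "a.id NOT IN (...)" yields NULL rather
-- than true for non-matching rows, so the planner can't rewrite this as an
-- anti-join; it uses a subplan instead, and the hashed subplan variant only
-- applies when the subquery result is expected to fit in work_mem.
SELECT a.*
FROM   a
WHERE  a.id NOT IN (SELECT b.a_id FROM b);

-- NOT EXISTS: NULLs in b.a_id simply never match, so this form can be
-- planned as an anti-join (e.g. a hash anti-join, which can spill to disk).
SELECT a.*
FROM   a
WHERE  NOT EXISTS (SELECT 1 FROM b WHERE b.a_id = a.id);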
Thanks very much for your help!
Best,
Lincoln
--
Lincoln Swaine-Moore
01 read=40 written=8
        ->  Bitmap Index Scan on a_parent_id_idx4  (cost=0.00..130.39 rows=8273 width=0) (actual time=54.690..54.690 rows=5220 loops=1)
              Index Cond: (a_partition4.parent_id = ANY ('{42242,50388,21916,13987,28708,4136,24617,31
34226,24506,40987,27162
> )
> ORDER BY "a"."tmstmp" DESC
> LIMIT 20;
>
> Maybe that limit 20 is not pushed to partitions in your version?
> Regards
> PAscal
>
> --
> Sent from: http://www.postgresql-archive.org/PostgreSQL-performance-f2050081.html
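If the LIMIT really isn't being pushed down to the partitions, one
workaround I've been toying with (untested, and the partition names and
parent_id list below are just illustrative) is to apply the ORDER BY/LIMIT
inside each partition by hand and then merge the results:

-- Illustrative only: top 20 per partition, then top 20 overall, so each
-- partition scan can stop early if it has a usable index on tmstmp.
SELECT *
FROM (
      (SELECT * FROM a_partition1
       WHERE parent_id = ANY ('{42242,50388}')
       ORDER BY tmstmp DESC LIMIT 20)
      UNION ALL
      (SELECT * FROM a_partition2
       WHERE parent_id = ANY ('{42242,50388}')
       ORDER BY tmstmp DESC LIMIT 20)
      -- ... remaining partitions ...
     ) AS per_partition
ORDER BY tmstmp DESC
LIMIT 20;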
--
Lincoln Swaine-Moore
Hi all,
I'm having a good bit of trouble making production-ready a query I wouldn't
have thought would be too challenging.
Below are the relevant portions of my table, which is partitioned on values
of part_key from 1-5. The number of rows is on the order of 10^7, and the
number of unique "parent