...the logs to search for
cancellations. Do you happen to know what that would look like if I'm
grep-ing for it? And do you have any other guesses about possible
explanations?
Thanks again for your help.
Lincoln
On Sun, Feb 23, 2025 at 6:09 PM Jeff Janes wrote:
> On Sun, Feb 23, 2025 at
make a job analyze these
tables on a regular cadence, though I guess that feels like I’d be stepping
on autovacuum’s toes.
Would be happy to read over similar reports if you could steer me toward
them.
Thanks again for your help.
Lincoln Swaine-Moore
On Sat, Feb 22, 2025 at 00:37 Andrei Lepikhov
this has something to do with the fact that these tables don't
accumulate dead tuples since they're basically append-only?
But I still think INSERTs should trigger autovacuum/analyze eventually (at
least when the table grows by 10%, because of
autovacuum_analyze_scale_factor),
and I'm confused why that doesn't seem to have happened. Seems like this is
probably hurting my queries' performance elsewhere.
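For concreteness, is the following the right sort of thing to be checking? (Just a sketch; 'my_table' below is a placeholder for the real table names.)

    -- how much has changed since the last analyze, and when autoanalyze last ran
    SELECT relname,
           n_live_tup,
           n_mod_since_analyze,
           last_analyze,
           last_autoanalyze,
           autoanalyze_count
    FROM pg_stat_user_tables
    WHERE relname = 'my_table';

    -- and, if autoanalyze really is being skipped, a per-table override
    ALTER TABLE my_table
      SET (autovacuum_analyze_scale_factor = 0.02,
           autovacuum_analyze_threshold = 10000);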
So, my two big questions are:
- Is there a better way to write my query to hint away from the awful
nested loop join? (The bluntest workaround I'm aware of is sketched just
below these questions.)
- Can anyone think of why autovacuum is declining to vacuum/analyze these
tables?
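As mentioned above, here is the sketch of turning nested loops off for just the one statement (not a real fix, just to show the kind of hint I mean):

    BEGIN;
    -- discourage nested loop joins for this transaction only; resets at COMMIT/ROLLBACK
    SET LOCAL enable_nestloop = off;
    -- ... the problematic query goes here ...
    COMMIT;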
Thanks for reading, and for any help with this!
--
Lincoln Swaine-Moore
> I see an estimate for 1000 rows in your EXPLAIN output too, so you're
> experiencing the same issue, although in your case the estimate of 1000
> might be more accurate. The misestimation was causing significant
> performance problems for me.
> My solution was to wrap generate_series() in a custom functi...
...but in terms of
solving my problem, this seems to do the trick.
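For anyone who finds this thread later, the kind of wrapper being described looks roughly like the sketch below (the function name and the ROWS figure are illustrative only, not exactly what was used):

    -- a plain SQL wrapper around generate_series() with an explicit row estimate,
    -- so the planner doesn't fall back to its default guess of 1000 rows
    CREATE FUNCTION ts_series(start_ts timestamptz, end_ts timestamptz, step interval)
    RETURNS SETOF timestamptz
    LANGUAGE sql
    STABLE
    ROWS 365   -- the estimate the planner will use for this set-returning function
    AS $$
        SELECT generate_series(start_ts, end_ts, step);
    $$;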
Thanks again!
On Wed, Nov 8, 2023 at 5:45 PM David G. Johnston
wrote:
> On Wed, Nov 8, 2023 at 6:26 PM Lincoln Swaine-Moore <lswainemo...@gmail.com> wrote:
>
>> SELECT
>>
>
in various ways to match
data's index
- starting with data as the left table and periods on the right (I'd have
to do some post-processing here, but if it was fast it would be worth it)
The only thing I can think of left to do is to run 12 separate queries--one
for each month, which is a time period that blows up the nested loop just
enough that the query can finish--and then concatenate them after the fact.
But it seems like that's just using some knowledge about the structure of
the tables that I should be able to communicate to the planner, and I'd
really love to keep it all in the database!
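For what it's worth, below is a rough sketch of the kind of per-month split I mean, kept inside a single statement. The table name "data" and its "tmstmp" column stand in for the real query, which would go inside the LATERAL subquery, and I'm not sure whether this shape would actually avoid the bad plan:

    -- iterate over month boundaries and run the expensive part once per month,
    -- so no single execution covers a range wide enough to blow up the nested loop
    SELECT d.*
    FROM generate_series(timestamptz '2023-01-01',
                         timestamptz '2023-12-01',
                         interval '1 month') AS months(month_start)
    CROSS JOIN LATERAL (
        SELECT *
        FROM data
        WHERE data.tmstmp >= months.month_start
          AND data.tmstmp <  months.month_start + interval '1 month'
    ) AS d;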
Thanks for any and all help and suggestions.
--
Lincoln Swaine-Moore
8 at 3:12 PM David Rowley
> wrote:
> >
> > On 9 November 2018 at 08:35, Lincoln Swaine-Moore
> > wrote:
> > > My primary question is: why is this approach only possible (for data too
> > > large for memory) when using NOT EXISTS, and not when using NOT IN?
...I would expect that to improve
performance of the NOT IN operation, since a single pass through that data
should reveal if there are any NULL values, at which point that information
could be used to short-circuit. So I am a bit baffled.
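For concreteness, the two shapes I'm comparing look roughly like this (table and column names are stand-ins for the real ones):

    -- NOT IN: if any b.id is NULL, the predicate can only ever be false or NULL,
    -- so the planner can't turn it into an anti-join; when the inner result is
    -- too large for work_mem it falls back to a much slower per-row subplan
    SELECT a.*
    FROM a
    WHERE a.id NOT IN (SELECT b.id FROM b);

    -- NOT EXISTS: NULLs can't change the result, so a hash anti-join stays
    -- available even when the inner side doesn't fit in memory
    SELECT a.*
    FROM a
    WHERE NOT EXISTS (SELECT 1 FROM b WHERE b.id = a.id);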
Thanks very much for your help!
Best,
Lincoln
--
Lincoln Swaine-Moore
...01 read=40 written=8
        ->  Bitmap Index Scan on a_parent_id_idx4  (cost=0.00..130.39 rows=8273 width=0) (actual time=54.690..54.690 rows=5220 loops=1)
              Index Cond: (a_partition4.parent_id = ANY ('{42242,50388,21916,13987,28708,4136,24617,31... 34226,24506,40987,27162...
> )
> ORDER BY "a"."tmstmp" DESC
> LIMIT 20;
>
> Maybe the LIMIT 20 is not pushed down to the partitions in your version?
> Regards
> PAscal
--
Lincoln Swaine-Moore
Hi all,
I'm having a good bit of trouble getting a query production-ready, even
though I wouldn't have thought it would be too challenging.
Below are the relevant portions of my table, which is partitioned on values
of part_key from 1-5. The number of rows is on the order of 10^7, and the
number of unique "parent...