I set toast.autovacuum_vacuum_scale_factor to 0 and
toast.autovacuum_vacuum_threshold to 1, which should be enough to force
a vacuum after the nightly deletes. Now I have also changed the cost limit and the
cost delay; my question is whether there is anything else I need to do. My
maintenance_work_mem is abo
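(For reference, per-table TOAST autovacuum settings like these are applied as storage parameters; the table name and numeric values below are only placeholders:)

ALTER TABLE app_sessions SET (
    toast.autovacuum_vacuum_scale_factor = 0,
    toast.autovacuum_vacuum_threshold = 1,
    toast.autovacuum_vacuum_cost_limit = 2000,   -- illustrative value
    toast.autovacuum_vacuum_cost_delay = 10      -- milliseconds on PostgreSQL 11
);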
If there is a high number of updates during normal daytime processing, then
yes, you need to ensure autovacuum is handling this table as needed. If the
nightly delete is the only major source of bloat on this table, then
perhaps running a manual vacuum keeps things tidy after the big delete.
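If you do go that route, a minimal sketch (the table name is a placeholder) could be:

VACUUM (VERBOSE, ANALYZE) app_sessions;  -- marks dead tuples reusable and refreshes statistics right after the purge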
Granted,
Hi,
On Fri, Jan 18, 2019 at 02:12:23PM +, Luis Carril wrote:
> we noticed that in the presence of a schema with many partitions the
> jitting overhead penalizes the total query execution time so much that the
> planner should have decided not to jit at all. For example without jitting we
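(If the cost thresholds are indeed what triggers this, JIT can be tuned or switched off per session as a workaround; a sketch:)

SET jit = off;                    -- disable JIT compilation for this session
-- or keep it enabled only for genuinely expensive plans:
SET jit_above_cost = 1000000;     -- default is 100000; the value here is illustrative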
On Sun, 27 Jan 2019 at 06:29, legrand legrand
wrote:
>
> Hi,
>
> There are many tools:
> - the (core) extension pg_stat_statements will give you information about SQL
> executions,
I've had enormous success using pg_stat_statements and gathering the
data over time in Prometheus. That let me build a das
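(For anyone wanting to reproduce this, a minimal sketch of pulling the top statements once the extension is loaded via shared_preload_libraries:)

CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

SELECT query, calls, total_time, rows
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;   -- on PostgreSQL 13+ the column is total_exec_time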
No, I don't run vacuum manually afterwards, because autovacuum should
run. This process happens every night. Yes, bloating is an issue, because
the table grows and takes a lot of space on disk. Regarding the autovacuum,
I think that it sleeps too much (17h) during its work, don't you
think s
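(If the sleeping really does come from cost-based vacuum throttling, one option, sketched here with illustrative values, is to reduce the delay or raise the cost limit:)

ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 2;    -- milliseconds; default is 20 on PostgreSQL 11
ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000; -- default -1 means fall back to vacuum_cost_limit (200)
SELECT pg_reload_conf();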
Thanks, that context is very enlightening. Do you manually vacuum after
doing the big purge of old session data? Is bloat causing issues for you?
Why is it a concern that autovacuum's behavior varies?
*Michael Lewis*
On Thu, Feb 14, 2019 at 12:41 PM Mariel Cherkassky <
mariel.cherkas...@gmail.co
Maybe explaining the table's purpose will make it clearer. The original
table contains rows for sessions in my app. Every session saves some raw
data for itself, which is stored in the toasted table. We clean old sessions
(3+ days) every night. During the day sessions are created, so the size of
the t
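(To make the pattern concrete, a sketch of what such a nightly cleanup typically looks like; the table and column names are placeholders:)

DELETE FROM app_sessions
WHERE created_at < now() - interval '3 days';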
Yeah, the planner doesn't know that call_created_date can be limited on
both tables unless you specify it in the where condition, as Laurenz
said on another thread.
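Concretely, the workaround is to repeat the pruning predicate on both sides of the join; a sketch (call_report1 is a guessed name, call_report2 and call_created_date appear elsewhere in the thread):

EXPLAIN ANALYZE
SELECT *
FROM call_report1 a
JOIN call_report2 b ON a.call_id = b.call_id
WHERE a.call_created_date >= DATE '2019-02-01'
  AND b.call_created_date >= DATE '2019-02-01';  -- duplicating the condition lets both tables prune partitions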
*Michael Lewis*
On Thu, Feb 14, 2019 at 7:35 AM suganthi Sekar
wrote:
> Both tables are partitioned by the same column call_created_da
It is curious to me that the tuples remaining count varies so wildly. Is
this expected?
*Michael Lewis*
On Thu, Feb 14, 2019 at 9:09 AM Mariel Cherkassky <
mariel.cherkas...@gmail.com> wrote:
> I checked the logs for when autovacuum vacuumed my big toasted table
> during the week, and I wanted
I checked the logs for when autovacuum vacuumed my big toasted table
during the week, and I wanted to confirm with you what I think:
postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum
of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
postgresql-Fri.log- pages: 2253 r
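(A quick way to cross-check this outside the logs, as a sketch:)

SELECT relname, n_dead_tup, last_autovacuum, autovacuum_count
FROM pg_stat_all_tables
WHERE relname = 'pg_toast_1958391';   -- the TOAST table from the log line above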
Both tables are partitioned by the same column, call_created_date
From: Michael Lewis
Sent: 14 February 2019 19:35:48
To: suganthi Sekar
Cc: Justin Pryzby; pgsql-performance@lists.postgresql.org
Subject: Re: constraint exclusion with ineq condition (Re: server hardware
tuning.
Hi,
Ok, thanks.
Regards,
Suganthi Sekar
From: Laurenz Albe
Sent: 14 February 2019 18:07:49
To: suganthi Sekar; pgsql-performance@lists.postgresql.org
Subject: Re: partition pruning
suganthi Sekar wrote:
> I am using PostgreSQL 11, I have 2 partitioned tables
On Thu, Feb 14, 2019 at 01:37:49PM +0100, Laurenz Albe wrote:
> There is no condition on the table "call_report2" in your query,
> so it is not surprising that all partitions are scanned, right?
Some people find it surprising, since the join condition is a.call_id = b.call_id.
suganthi Sekar wrote:
> > explain analyze
>
What are these two tables partitioned by?
On Thu, Feb 14, 2019, 5:03 AM suganthi Sekar wrote:
> Hi,
>
> Thanks, I know it works if we explicitly give the condition in the where clause.
>
> I thought this issue was fixed in PostgreSQL 11 with the parameter below?
>
> enable_partitionwise_join to 'on';
>
> what is th
suganthi Sekar wrote:
> I am using PostgreSQL 11. I have 2 partitioned tables; when I join both
> tables in a query,
> one table goes to the exact partition, but the other table scans all partitions.
>
> Please clarify this.
>
> I have enabled the below parameter in the configuration file
> Note : a
Hi,
Thanks, I know it works if we explicitly give the condition in the where clause.
I thought this issue was fixed in PostgreSQL 11 with the parameter below?
enable_partitionwise_join to 'on';
What is the use of enable_partitionwise_join set to 'on'?
Thanks for your response.
Regards
Suganthi Sekar
Hi,
You mean the below parameter needs to be set on? It is already on.
alter system set constraint_exclusion to 'on';
Regards,
Suganthi Sekar
From: Justin Pryzby
Sent: 14 February 2019 15:35:33
To: suganthi Sekar
Cc: pgsql-performance@lists.postgresql.org
On Thu, Feb 14, 2019 at 10:38:36AM +, suganthi Sekar wrote:
> You mean the below parameter needs to be set on? It is already on.
> alter system set constraint_exclusion to 'on';
No, I said:
> You can work around it by specifying the same condition on
> b.call_created_date:
> > AND b.call_c
On Thu, Feb 14, 2019 at 09:38:52AM +, suganthi Sekar wrote:
> I am using PostgreSQL 11. I have 2 partitioned tables; when I join both
> tables in a query,
> one table goes to the exact partition, but the other table scans all partitions.
>
> Please clarify this.
>
> Example :
>
> explain anal
Hi Team,
I am using PostgreSQL 11. I have 2 partitioned tables; when I join both tables
in a query,
one table goes to the exact partition, but the other table scans all partitions.
Please clarify this.
I have enabled the below parameter in the configuration file
Note : alter system set enable_parti
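(For reference, a self-contained sketch of the kind of setup being described; all names except call_report2 and call_created_date, which appear elsewhere in the thread, are illustrative:)

CREATE TABLE call_report1 (call_id bigint, call_created_date date)
    PARTITION BY RANGE (call_created_date);
CREATE TABLE call_report1_2019_02 PARTITION OF call_report1
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');

CREATE TABLE call_report2 (call_id bigint, call_created_date date)
    PARTITION BY RANGE (call_created_date);
CREATE TABLE call_report2_2019_02 PARTITION OF call_report2
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');

-- With a date condition on only one side of the join, only that side is
-- pruned; the planner does not push the condition to the other table
-- through the equality on call_id.
EXPLAIN
SELECT *
FROM call_report1 a
JOIN call_report2 b ON a.call_id = b.call_id
WHERE a.call_created_date = DATE '2019-02-14';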