Look also at pg_stat_all_tables.n_dead_tup for tables which are candidates
for vacuuming.
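For example, the candidates can be listed with a query along these lines (a sketch; the LIMIT of 20 is arbitrary):

```sql
-- List tables with the most dead tuples, i.e. the best
-- candidates for (auto)vacuuming.
SELECT schemaname, relname, n_dead_tup, n_live_tup,
       last_vacuum, last_autovacuum
FROM pg_stat_all_tables
WHERE n_dead_tup > 0
ORDER BY n_dead_tup DESC
LIMIT 20;
```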
On 4/10/19 11:49 PM, Perumal Raj wrote:
Thanks Kevin for the inputs.
In my case there are 500+ tables, and the biggest chunk is 30 GB (table only)
plus its indexes.
So I created 6 batches and executed them in parallel. All my scripts
completed in 2 hours and my DB size came down from 500 GB to 300 GB.
Yes, I do see a CPU spike, but I did the whole activity …
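A sketch of how such batches can be generated (assumes the public schema; splitting the output into six files and running each through psql in parallel is left to the shell):

```sql
-- Emit one VACUUM FULL statement per table, largest tables first,
-- so batches can be balanced by size before running them in parallel.
SELECT format('VACUUM FULL %I.%I;', schemaname, tablename)
FROM pg_tables
WHERE schemaname = 'public'
ORDER BY pg_total_relation_size(
           format('%I.%I', schemaname, tablename)::regclass) DESC;
```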
On 2019-04-03 19:42:03 +0400, rihad wrote:
> > And future updates can reuse it, too (an update is very similar to an
> > insert+delete).
>
> Hm, then it's strange that our DB takes 6 times as much space compared to a
> freshly restored one (only the public schema is considered).
This is indeed strange if you …
From: Perumal Raj
So to conclude the requirement here: the only way to get parallelism is
multiple scripts, and there is no need to run REINDEX separately.
Question: do we need to consider table dependencies while preparing the
scripts, in order to avoid table locks during VACUUM FULL?
We have a small bash script …
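VACUUM FULL takes an ACCESS EXCLUSIVE lock on each table it rewrites, so rather than modelling dependencies up front, one option is to watch for blocked sessions while a batch runs. A sketch (works on 9.2, where pg_stat_activity gained the pid and query columns):

```sql
-- Show lock requests that are currently waiting, together with the
-- relation involved and the query text of the waiting session.
SELECT l.pid, l.mode, c.relname, a.query
FROM pg_locks l
JOIN pg_class c ON c.oid = l.relation
JOIN pg_stat_activity a ON a.pid = l.pid
WHERE NOT l.granted;
```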
On 4/3/19 3:45 PM, Perumal Raj wrote:
Hi Stephen,
Thanks for the response.
Version: 9.2
We never ran VACUUM FULL in the past; all we are doing is a manual vacuum
(weekly).
Based on the observation (test run), we were able to reclaim 150 GB out
of 500 GB.
We are heading to a planned downtime soon, so thinking of running FULL …
> Ideally VACUUM FULL should not require a giant lock on the table.
It is a massively expensive operation, regardless. Not sure if it is something
you want to run in production outside a maintenance window.
I would argue that frequent VACUUM FULL is an antipattern. This will become a
matter of …
Hi All,
Thanks for all your valuable inputs.
Here is some more data:
Though we have 150 GB of free space spread across 500 tables, every
alternate day the DB grows at a 1 GB rate.
Also, we have a manual vacuum job scheduled to run on a weekly basis, so it
seems the space is not being reused all the time?
…
And future updates can reuse it, too (an update is very similar to an
insert+delete).
Hm, then it's strange that our DB takes 6 times as much space compared to a
freshly restored one (only the public schema is considered).
Not if autovacuum has a chance to run between updates.
Ours is run regularly, …
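Whether autovacuum really gets a chance on the hot tables can be checked from the statistics views; a sketch:

```sql
-- Tables autovacuum has never touched (last_autovacuum IS NULL)
-- sort first; a large n_dead_tup on such a table suggests the
-- per-table autovacuum thresholds need tuning.
SELECT relname, last_autovacuum, autovacuum_count, n_dead_tup
FROM pg_stat_user_tables
ORDER BY last_autovacuum NULLS FIRST;
```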
On 2019-04-03 18:49:02 +0400, rihad wrote:
> On 04/03/2019 06:40 PM, Michael Lewis wrote:
> > "Sometimes a table's usage pattern involves much more updates than
> > inserts, which gradually uses more and more unused space that is never
> > used again by postgres, and plain autovacuuming doesn't return it to the OS."
On 04/03/2019 06:40 PM, Michael Lewis wrote:
"Sometimes a table's usage pattern involves much more updates than
inserts, which gradually uses more and more unused space that is never
used again by postgres, and plain autovacuuming doesn't return it to the
OS."
Can you expound on that? I thought that was exactly what autovacuum did for
old versions …
On 2019-04-03 13:12:56 +0400, rihad wrote:
> Ideally VACUUM FULL should not require a giant lock on the table.
[...]
> Since rewriting a table is a completely internal operation from
> clients' POV, hopefully one day we will see a concurrent version of
> vacuum full.
There are (at least) pg_repack …
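pg_repack rebuilds a table concurrently, holding an exclusive lock only briefly at the start and end. A minimal sketch, assuming the extension packages are installed for your server version ('mytable' and 'mydb' are placeholders):

```sql
-- The server-side extension must exist in the target database:
CREATE EXTENSION pg_repack;
-- The rebuild itself is driven by the client program, e.g.:
--   pg_repack --table mytable mydb
```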
Does *every* table have *so much* free space that it's impractical to just
let the files get refilled by normal usage?
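One way to answer that per table is the pgstattuple extension, which reports exact free space (note it scans the whole table; 'some_table' is a placeholder):

```sql
CREATE EXTENSION pgstattuple;
-- free_space / free_percent show how much of the file could be
-- refilled by normal inserts and updates without VACUUM FULL.
SELECT table_len, dead_tuple_len, free_space, free_percent
FROM pgstattuple('some_table');
```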
Ideally VACUUM FULL should not require a giant lock on the table.
Sometimes a table's usage pattern involves much more updates than
inserts, which gradually uses more and more unused space that is never
used again by postgres, and plain autovacuuming doesn't return it to the OS.
Perumal Raj wrote:
> We are planning to reclaim unused space from 9.2 Version postgres Cluster,
>
> Method : VACUUM FULL
> DB Size : 500 GB
> Expected space to reclaim 150 GB
> work_mem : 250 MB
> maintenance_work_mem : 20 GB
>
> Question :
>
> 1. The vacuumdb --jobs (parallel) option is not available for …
On 4/3/19 12:50 AM, Perumal Raj wrote:
Hi All,
We are planning to reclaim unused space from 9.2 Version postgres Cluster,
Method : VACUUM FULL
Does *every* table have *so much* free space that it's impractical to just
let the files get refilled by normal usage?
DB Size : 500 GB
Expected space to reclaim 150 GB …