Hi Alban,

The need for the vacuum full is because there was a problem with the daily
scheduled vacuum analyze and autovacuum regarding max_fsm_pages. Because it
was underestimated, the vacuum process was not able to flag pages for
reuse.
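
For reference, this is roughly what the relevant settings look like in
postgresql.conf on 8.3 and earlier (the values below are purely
illustrative; the right max_fsm_pages comes from the "page slots needed"
hint that a database-wide VACUUM VERBOSE prints, and changing it requires a
server restart; from 8.4 onwards the free space map is managed
automatically and these settings no longer exist):

    # postgresql.conf (8.3 and earlier -- removed in 8.4)
    max_fsm_pages     = 16000000  # illustrative; must cover all pages with free space
    max_fsm_relations = 2000      # illustrative; must cover all tables and indexes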

I've cancelled the vacuum full and will think about another approach. Maybe
a CLUSTER can do the job. I'll start a CLUSTER and see if I can check the
progress by watching the size of the new table's relfilenode. The rewritten
table will probably take less than 102 GB.
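
In case it is useful to anyone following the thread, the plan sketched
above would look roughly like this (the index name posicoes_controles_pkey
is only a guess; CLUSTER ... USING is the 8.3+ syntax, older releases use
CLUSTER indexname ON tablename):

    -- note the current relfilenode so the old files can be told apart
    -- from the new ones that CLUSTER writes
    SELECT relfilenode FROM pg_class WHERE relname = 'posicoes_controles';

    -- rewrite the table in index order; this takes an exclusive lock on
    -- the table and needs enough free disk for a complete copy of it
    CLUSTER posicoes_controles USING posicoes_controles_pkey;

While it runs, progress can be estimated from the shell by watching the new
relation files grow in the database directory (something like
ls -l base/<dboid>/<new filenode>*), since the rewritten copy only replaces
the old relfilenode in pg_class when CLUSTER commits.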

Thank you!

2010/9/5 Alban Hertroys <dal...@solfertje.student.utwente.nl>

> On 5 Sep 2010, at 12:13, Carlos Henrique Reimer wrote:
>
> > Hi,
> >
> > I need to shrink a table with 102 GB and approximately 380,000,000 rows.
>
> What exactly are you trying to accomplish? You may be saving some space
> temporarily by running vacuum full and reindex, but the database size will
> probably grow back to its original size quite quickly once it sees some use
> (inserts/deletes).
>
> A table that size usually gets partitioned into smaller tables. How to
> partition your tables depends on how your data is organised and behaves,
> that's a different discussion. You can find documentation here:
> http://www.postgresql.org/docs/8.4/interactive/ddl-partitioning.html
>
> > A vacuum full has been running for 13 hours and the only messages I get
> are:
> >
> > INFO:  vacuuming "public.posicoes_controles"
> > INFO:  "posicoes_controles": found 43960 removable, 394481459
> nonremovable row versions in 13308976 pages
> > DETAIL:  0 dead row versions cannot be removed yet.
> > Nonremovable row versions range from 193 to 217 bytes long.
> > There were 136382074 unused item pointers.
> > Total free space (including removable row versions) is 27663157952 bytes.
> > 2884123 pages are or will become empty, including 0 at the end of the
> table.
> > 4167252 pages containing 27597464344 free bytes are potential move
> destinations.
> > CPU 100.81s/101.45u sec elapsed 3347.95 sec.
> > The relfilenode points to object 38611 and I see that there are 102
> 38611.n files in the data directory representing this table. Of these 102
> files, only 53 have been updated since the vacuum full process started.
> >
> > I need to answer these questions:
> >
> > a) Is there a way to get a general idea of whether the process is near
> the end? Is there a way to identify the progress of the work?
>
> Well, you already noticed it's at the 53rd file of 102 files. It's about
> half-way then I guess.
>
> > b) How much space will be reclaimed by the time vacuum full finishes?
>
> According to the above, up to 27 GB (probably less), not counting the
> index bloat you're generating by running vacuum full.
>
> Alban Hertroys
>
> --
> Screwing up is an excellent way to attach something to the ceiling.
>
>


-- 
Reimer
47-3347-1724 47-9183-0547 msn: carlos.rei...@opendb.com.br
