Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread karsten vennemann
> Note that cluster on a randomly ordered large table can be prohibitively slow, and it might be better to schedule a short downtime to do the following (pseudo code): alter table tablename rename to old_tablename; create table tablename like old_tablename; insert into tablename select *
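
A minimal sketch of that table-swap approach, assuming a database named mydb and a short maintenance window; indexes, constraints, and anything else referencing the table (views, foreign keys, sequences) would have to be recreated by hand afterwards:

    # rebuild the bloated table by copying only the live rows into a fresh table
    psql -d mydb <<'SQL'
    BEGIN;
    ALTER TABLE tablename RENAME TO old_tablename;
    CREATE TABLE tablename (LIKE old_tablename INCLUDING DEFAULTS);
    INSERT INTO tablename SELECT * FROM old_tablename;  -- only live tuples are copied
    COMMIT;
    SQL
    # after verifying the new table, reclaim the disk space
    psql -d mydb -c 'DROP TABLE old_tablename;'

The old table keeps its disk space until the final DROP, so roughly one extra copy of the live rows must fit on disk in the meantime.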

Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread Scott Marlowe
On Wed, Feb 17, 2010 at 3:44 PM, karsten vennemann wrote: >>> vacuum should clean out the dead tuples, then cluster on any large tables that are bloated will sort them out without needing too much temporary space. > Yes ok, I am running a vacuum full on a large table (150 GB) and
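
For the CLUSTER route discussed here, a hedged sketch; the index name tablename_pkey is a placeholder, and the command needs roughly one extra copy of the live data free on disk while it rewrites the table:

    # CLUSTER rewrites the table in index order and drops dead tuples;
    # on 8.3 it also rebuilds the indexes, which VACUUM FULL tends to bloat
    psql -d mydb -c 'CLUSTER tablename USING tablename_pkey;'
    psql -d mydb -c 'ANALYZE tablename;'

Both CLUSTER and VACUUM FULL hold an exclusive lock on the table for the duration of the rewrite.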

Re: [GENERAL] dump of 700 GB database

2010-02-17 Thread karsten vennemann
[quoting the reply of February 09, 2010 23:30] Hello 2010/2/10 karsten vennemann: I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. Wh

Re: [GENERAL] dump of 700 GB database

2010-02-09 Thread John R Pierce
karsten vennemann wrote: I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. What is the proper procedure to succeed with this - last time the dump stopped at 3.8 GB size I guess. Should I combine the -Fc option of pg_dum
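
If the earlier dump stopped at about 3.8 GB because of a per-file size limit on the target filesystem (rather than a pg_dump failure), one hedged workaround, with mydb, mydb_clean, and the 1000 MB chunk size as assumptions, is to pipe the compressed custom-format dump through split:

    # -Fc writes a compressed custom-format archive; split keeps every
    # output file below the filesystem's per-file size limit
    pg_dump -Fc mydb | split -b 1000m - mydb.dump.part_

    # to restore, re-join the pieces and feed them to pg_restore
    cat mydb.dump.part_* > mydb.dump
    createdb mydb_clean
    pg_restore -d mydb_clean mydb.dump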

Re: [GENERAL] dump of 700 GB database

2010-02-09 Thread Pavel Stehule
Hello 2010/2/10 karsten vennemann: > I have to write a 700 GB large database to a dump to clean out a lot of dead records on an Ubuntu server with postgres 8.3.8. What is the proper procedure to succeed with this - last time the dump stopped at 3.8 GB size I guess. Should I combine the -Fc
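
Another way to clean out the dead records without ever writing a dump file (so no per-file size limit can bite), sketched here with placeholder database names:

    # stream a plain-text dump straight into a freshly created database;
    # the restored copy contains only live rows
    createdb mydb_clean
    pg_dump mydb | psql -d mydb_clean --set ON_ERROR_STOP=on

Once the copy is verified, the old database can be dropped and the clean one renamed into its place.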