> -----Original Message-----
> From: Tom Lane [mailto:t...@sss.pgh.pa.us]
> Sent: Friday, October 22, 2010 14:31
> To: karsten vennemann
> Cc: pgsql-general@postgresql.org
> Subject: Re: [GENERAL] auto fill serial id field with default
> value in copy operation
>
>
' using delimiters ',' with null as '';
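For reference, a minimal sketch of the kind of COPY in question (table, column and file names here are made up): list only the non-serial columns in the COPY column list, and the serial id column is filled from its default sequence.

-- hypothetical table with a serial primary key
create table observations (
    id    serial primary key,
    name  text,
    value numeric
);

-- name the non-serial columns explicitly; id gets its value from the sequence default
copy observations (name, value) from '/tmp/observations.csv'
    using delimiters ',' with null as '';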
Karsten Vennemann
Terra GIS LTD
Seattle, WA 98112
USA
www.terragis.net
> Note that cluster on a randomly ordered large table can be
> prohibitively slow, and it might be better to schedule a
> short downtime to do the following (pseudo code)
> alter table tablename rename to old_tablename;
> create table tablename like old_tablename;
> insert into tablename select *
ected in an IRC session with some gurus some
weeks ago.
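Spelled out, that pseudo code would look roughly like this (table and column names are placeholders, and indexes and constraints still have to be recreated afterwards):

-- during the downtime window
alter table tablename rename to old_tablename;
create table tablename (like old_tablename including defaults);
-- writing the rows in the desired order gives an effect similar to CLUSTER
insert into tablename select * from old_tablename order by some_indexed_column;
-- recreate indexes and constraints on tablename, then drop old_tablename
-- once the new table has been verified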
Main question now is: why is my dump/restore not working, and what am I doing wrong?
Thanks
Karsten
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Pavel Stehule
Sent: Tuesday,
I have to write a 700 GB large database to a dump to clean out a lot of
dead records, on an Ubuntu server with postgres 8.3.8. What is the proper
procedure to succeed with this - last time the dump stopped at 3.8 GB size, I
guess. Should I combine the -Fc option of pg_dump and the split command?
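One way to combine the two, roughly (database name and chunk size are only placeholders):

# custom-format dump (already compressed), cut into 1 GB pieces so no single
# file runs into a file-size limit
pg_dump -Fc mydb | split -b 1G - mydb.dump.part_

# restore by stitching the pieces back together and feeding them to pg_restore
cat mydb.dump.part_* | pg_restore -d newdb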