Le 2012-09-13 à 16:51, David Salisbury a écrit :

> 
> It looks to me like you're misusing git..
> 
> You should only git init once, and always use that directory.
> Then pg_dump, which should create one file per database
> with the file name you've specified.
> Not sure of the flags but I'd recommend plain text format.
> 
> I'm also unsure what you mean by network traffic, as you don't
> mention a remote repository, but there are nice visual tools
> for you to see the changes to files between your committed
> objects.  git init will more than likely lose all changes
> to files.

I was just running a test: exploring a way to transfer large amounts of data 
for backup purposes with a tool that is especially suited to deltas. I know 
about rsync, but this was a thought experiment. I was only surprised by 
pg_dump's restriction that the directory format must create a new directory 
every time; I was looking for a rationale.
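For the record, the experiment looked roughly like this (paths and the 
database name "mydb" are hypothetical; this is a sketch, not a recommended 
backup strategy):

```shell
# One-time setup: a directory that holds both the repo and the dump.
mkdir -p /srv/pg-backups && cd /srv/pg-backups
git init                      # run once; re-running only reinitializes

# Each backup cycle: dump in plain-text format so git can diff it,
# then commit; git's packfiles store deltas between successive dumps.
pg_dump --format=plain --file=mydb.sql mydb
git add mydb.sql
git commit -m "backup $(date -I)"
```

Pushing that repository to a remote would then transfer only the deltas, 
which was the point of the test.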

Also, git init is a safe operation: within an existing repository, git init 
reports that it reinitialized the repository, but does not lose files. I 
haven't tried it with local changes or a dirty index.
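That claim is easy to check in a throwaway repository (file names here are 
illustrative):

```shell
# Create a repo, commit a file, then re-run git init.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
echo "hello" > data.txt
git add data.txt
git -c user.email=test@example.com -c user.name=test commit -qm "first"

# Re-running init prints "Reinitialized existing Git repository ..."
# and leaves the working tree and history intact.
git init
test -f data.txt && echo "file survived"
git log --oneline | grep -q first && echo "history survived"
```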

Finally, when NOT using the plain-text format, pg_restore can restore more 
than one table at a time using the --jobs flag. On a multi-core, 
multi-spindle machine, this can cut restore time tremendously.
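A minimal sketch of that workflow, assuming a database named "mydb" and a 
target "mydb_copy" (both hypothetical) — parallel restore requires the 
custom or directory archive format, not plain text:

```shell
# Dump in custom format (directory format, -Fd, also works).
pg_dump --format=custom --file=mydb.dump mydb

# Restore with 4 parallel worker processes.
pg_restore --jobs=4 --dbname=mydb_copy mydb.dump
```

A reasonable starting point for --jobs is the number of cores or spindles, 
since the workers are limited by both CPU and I/O.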

Bye,
François

-- 
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
