On Tue, Dec 3, 2013 at 12:14 PM, Bruce Momjian wrote:
> On Wed, Apr 24, 2013 at 03:33:42PM -0400, Andrew Dunstan wrote:
> >
> > On 04/23/2013 07:53 PM, Timothy Garnett wrote:
> ...
> > >Attached are two diffs off of the REL9_2_4 tag that I've been
> > >
> If you need something like this short term, we actually found a way to do it
> ourselves for a migration we performed back in October. The secret is xargs
> with the -P option:
>
> xargs -I{} -P 8 -a table-list.txt \
> bash -c "pg_dump -Fc -t {} my_db | pg_restore -h remote -d my_db"
>
> Fill table-list.txt with the tables you want to copy, one per line.
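> One way to generate table-list.txt (a sketch, assuming psql access to the
> source and that no table names need extra quoting for -t; listing the
> largest tables first just helps the parallel workers balance out):
>
>   psql -At my_db -c "
>     -- one schema-qualified table per line, biggest first
>     SELECT schemaname || '.' || tablename
>     FROM pg_tables
>     WHERE schemaname NOT IN ('pg_catalog', 'information_schema')
>     ORDER BY pg_total_relation_size(
>       quote_ident(schemaname) || '.' || quote_ident(tablename)) DESC
>   " > table-list.txt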
On Wed, Apr 24, 2013 at 5:47 PM, Joachim Wieland wrote:
> On Wed, Apr 24, 2013 at 4:05 PM, Stefan Kaltenbrunner <ste...@kaltenbrunner.cc> wrote:
>
>> > What might make sense is something like pg_dump_restore, which would have
>> > no intermediate storage at all and would just pump the data from one
>> > database straight into the other.
As the OP, I'll just note that my organization would definitely find use
for a parallel migrator tool, as long as it supported doing a selection of
tables (i.e. -t / -T) in addition to the whole database, and as long as it
supported, or we were able to patch in, an option to cluster as part of the
migration (the
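Absent such an option, the closest manual equivalent is to cluster on the
target after each table lands, e.g. (a sketch; my_table and my_table_pkey
are hypothetical names):

  # rewrite the restored table in index order on the target
  psql -h remote -d my_db -c 'CLUSTER my_table USING my_table_pkey;'

That rewrites the whole table a second time, though, which is exactly the
extra pass that dumping in cluster order would avoid.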
Hi All,
Currently the -j option to pg_restore, which allows the restore to be
parallelized, can only be used if the input file is a regular file and not,
for example, a pipe. However, this is a pretty common occurrence for us
(usually in the form of pg_dump | pg_restore to copy an individual database).
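For anyone bumping into the same limitation: the parallel path does work once
the dump touches disk first, e.g. with the directory format; it's only the
pipe form that is rejected. A sketch (host and database names are
placeholders; pg_dump -j needs a 9.3-era pg_dump):

  # works: directory-format dump on disk, then parallel restore
  pg_dump -Fd -j 8 -f /tmp/my_db.dir my_db
  pg_restore -j 8 -h remote -d my_db /tmp/my_db.dir

  # rejected today: pg_restore -j can't read from a non-seekable pipe
  pg_dump -Fc my_db | pg_restore -j 8 -h remote -d my_db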
From d282cb7925 Mon Sep 17 00:00:00 2001
From: Timothy Garnett
Date: Fri, 10 Feb 2012 16:21:32 -0500
Subject: [PATCH] Support for pg_dump to dump tables in cluster order if a
clustered index is defined on the table, a little hacked in
with how the data is passed around and how the order is
pulled out
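The cluster order presumably keys off pg_index.indisclustered; for a single
table the effect is equivalent to something like the following (a sketch;
my_table and some_indexed_col are hypothetical names):

  # which index, if any, the table was last CLUSTERed on
  psql my_db -c "SELECT indexrelid::regclass FROM pg_index
                 WHERE indrelid = 'my_table'::regclass AND indisclustered"

  # dump the rows in that index's order instead of raw heap order
  psql my_db -c "COPY (SELECT * FROM my_table ORDER BY some_indexed_col) TO STDOUT"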