Brian,
Those are very interesting ideas. Thanks. I've been playing around with
pg_dump. Modifying it to selectively dump/restore tables and columns is
pretty easy. But as you say, changing the data content within the data
buffers to reflect varying column values, changed column types, and new
columns looks a lot trickier.
pg_dump by default dumps to STDOUT, which you should use in a pipeline to perform any modifications. To me this seems pretty tricky, but it should be doable. Modifying pg_dump itself really strikes me as the wrong way to go about it. Pipelines operate in memory and should be very fast, depending on how you write the filter stage.
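Just to make the idea concrete, here is a rough sketch of such a filter in C, assuming a plain-text dump (pg_dump --format=plain). The specific rewrite it does (substituting a default for NULLs in the second column of every COPY block) and the copyfilter name are only placeholders for whatever remapping the new schema actually needs:

/*
 * Rough sketch of a COPY-stream filter for the pipeline above, assuming a
 * plain-text dump (pg_dump --format=plain).  Everything passes through
 * unchanged except the data rows between a "COPY ..." header and the "\."
 * terminator; as an illustration it substitutes a default value for NULLs
 * (\N) in the second column of every table.  A real filter would do
 * whatever remapping the new schema needs.
 *
 * Hypothetical usage:  pg_dump olddb | ./copyfilter | psql newdb
 */
#include <stdio.h>
#include <string.h>

int
main(void)
{
    char    line[65536];
    int     in_copy = 0;

    while (fgets(line, sizeof(line), stdin) != NULL)
    {
        if (!in_copy)
        {
            /* Data sections start with "COPY <table> ... FROM stdin;" */
            if (strncmp(line, "COPY ", 5) == 0)
                in_copy = 1;
            fputs(line, stdout);
        }
        else if (strcmp(line, "\\.\n") == 0)
        {
            /* "\." marks the end of the data for this table */
            in_copy = 0;
            fputs(line, stdout);
        }
        else
        {
            /* Data row: rewrite the second tab-separated field if NULL */
            char   *p = line;
            int     field = 0;
            size_t  len = strlen(line);

            if (len > 0 && line[len - 1] == '\n')
                line[len - 1] = '\0';

            for (;;)
            {
                char   *tab = strchr(p, '\t');

                if (tab)
                    *tab = '\0';
                if (field == 1 && strcmp(p, "\\N") == 0)
                    fputs("0", stdout);   /* illustrative default value */
                else
                    fputs(p, stdout);
                if (tab == NULL)
                    break;
                putchar('\t');
                p = tab + 1;
                field++;
            }
            putchar('\n');
        }
    }
    return 0;
}

Since the rows never touch disk, the whole transformation runs at whatever rate the dump and restore can sustain.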
> > Is it possible to compile-link together frontend pg_dump code with
> > backend code from copy.c?
>
> No. Why do you think you need to modify pg_dump at all?
>
pg_dump and pg_restore provide important advantages for upgrading a
customer's database on site:
They are fast. I want to minimize downtime.
"[EMAIL PROTECTED]" <[EMAIL PROTECTED]> writes:
> Is it possible to compile-link together frontend pg_dump code with
> backend code from copy.c?
No. Why do you think you need to modify pg_dump at all?
regards, tom lane
After further reading, I'm wondering if I should instead try to use
libpq calls like PQgetCopyData, PQputCopyData, and PQputCopyEnd.
Would they be a viable alternative to provide both the speed and
flexibility I'm looking for?
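To make sure I understand the flow, here is the kind of thing I have in mind, assuming both servers speak protocol 3.0 (which PQgetCopyData/PQputCopyData require). The connection strings, the old_table/new_table names, and the pass-through transform are just placeholders:

/*
 * Minimal sketch: stream one table from a source server's COPY OUT into a
 * destination server's COPY IN, with a hook in the middle for rewriting
 * each row to fit the new schema.  Compile and link against libpq (-lpq).
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn     *src = PQconnectdb("dbname=olddb");   /* placeholder conninfo */
    PGconn     *dst = PQconnectdb("dbname=newdb");   /* placeholder conninfo */
    PGresult   *res;
    char       *row;
    int         len;

    if (PQstatus(src) != CONNECTION_OK || PQstatus(dst) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed\n");
        return 1;
    }

    /* Put the source into COPY OUT mode and the destination into COPY IN */
    res = PQexec(src, "COPY old_table TO STDOUT");
    if (PQresultStatus(res) != PGRES_COPY_OUT)
    {
        fprintf(stderr, "COPY TO failed: %s", PQerrorMessage(src));
        return 1;
    }
    PQclear(res);

    res = PQexec(dst, "COPY new_table FROM STDIN");
    if (PQresultStatus(res) != PGRES_COPY_IN)
    {
        fprintf(stderr, "COPY FROM failed: %s", PQerrorMessage(dst));
        return 1;
    }
    PQclear(res);

    /* Stream rows: each buffer is one text-format data row */
    while ((len = PQgetCopyData(src, &row, 0)) > 0)
    {
        /* ... rewrite the row buffer here to match the new schema ... */
        int     ok = PQputCopyData(dst, row, len);

        PQfreemem(row);
        if (ok != 1)
        {
            fprintf(stderr, "PQputCopyData failed: %s", PQerrorMessage(dst));
            return 1;
        }
    }
    if (len == -2)
        fprintf(stderr, "PQgetCopyData failed: %s", PQerrorMessage(src));

    /* Finish both copies and collect the command results */
    PQputCopyEnd(dst, NULL);
    while ((res = PQgetResult(dst)) != NULL)
        PQclear(res);
    while ((res = PQgetResult(src)) != NULL)
        PQclear(res);

    PQfinish(src);
    PQfinish(dst);
    return 0;
}

The rewrite step in the middle of the loop is where the schema differences would be handled, one row buffer at a time.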
-Lynn
My first ever newsgroup PostgreSQL question... I want to move data
between some very large databases (100+ GB) of different schema at our
customer sites. I cannot expect there to be much working partition
space, so the databases cannot exist simultaneously. I am also
restricted to hours, not days, of downtime.