Bruce McAlister wrote:
Hi All,
Is it at all possible to "roll forward" a database with archive logs
when it has been recovered using a dump?
Assuming I have the archive_command and archive_timeout parameters set
on our "live" system, then I follow these steps:
[1] pg_dump -d database > /backup/database.dump,
[2] initdb new instance on recovery machine,
[3] createdb database, then psql -d database -f ./database.dump,
[4] shutdown new recovered db,
[5] create recovery.conf,
[6] copy WALs from the time of the backup until the time of recovery to a temp dir,
[7] start postgresql
No. WAL records refer to physical disk blocks, not table rows, so they
can only be replayed against a file-level backup of the original
installation.
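For the record, rolling forward does work from a file-level base backup:
restore the backed-up data directory, drop a recovery.conf into it, and
start the postmaster. A minimal sketch, assuming your archive_command
copies segments into /backup/wal_archive (that path is an assumption):

    # recovery.conf, placed in the restored data directory before startup
    restore_command = 'cp /backup/wal_archive/%f %p'
    # optional: stop replay at a known-good moment instead of end of WAL
    # recovery_target_time = '2007-06-01 12:00:00'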
In my mind I think I will run into problems somewhere along the way, but
I don't know enough about PostgreSQL's internals to see whether there
are additional steps I need to follow.
In our environment it takes approx 2 hours to perform a PIT backup of
our live system:
[1] select pg_start_backup('label')
[2] cpio & compress the database directory (excluding WALs)
[3] select pg_stop_backup()
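Scripted, those three steps might look roughly like this (a sketch only:
the database name, data directory, and archive destination are
assumptions, and pg_xlog is the WAL directory being skipped):

    psql -d database -c "select pg_start_backup('nightly')"
    cd /var/lib/pgsql && \
      find data -path 'data/pg_xlog*' -prune -o -print | \
      cpio -o | gzip > /backup/base.cpio.gz
    psql -d database -c "select pg_stop_backup()"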
However, if we perform a plain dump (pg_dump/pg_dumpall) we can dump the
whole lot in 15 minutes. For us this is more efficient.
It sounds like there's something strange with your setup if pg_dump can
read your data faster than a file-level copy. Do you have *lots* of
indexes, or perhaps a lot of dead rows? pg_dump reads only the table
data and emits indexes as CREATE INDEX statements, whereas cpio has to
read the index files and any dead-row bloat as well. What's the
bottleneck with cpio+compress: CPU, disk, or network?
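A quick way to tell those apart is to time the copy with and without the
compression step and watch the machine while it runs (paths are
assumptions):

    # raw read speed, no compression
    time sh -c "find /var/lib/pgsql/data -depth -print | cpio -o > /dev/null"
    # same again through the compressor, to expose the CPU cost
    time sh -c "find /var/lib/pgsql/data -depth -print | cpio -o | gzip > /dev/null"
    # meanwhile, watch CPU vs disk wait
    vmstat 5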
The question is: how can we roll forward from the time of our pg_dump to
our most recent WAL (in case of failure - touch wood)?
Can't be done, I'm afraid. The restored database has a completely
different physical layout on disk, so the WAL from the old installation
no longer applies to it.
--
Richard Huxton
Archonet Ltd