On 20/06/2013 03:25, Tatsuo Ishii wrote:
>> On Wed, Jun 19, 2013 at 8:40 PM, Tatsuo Ishii <is...@postgresql.org> wrote:
>>>> On Wed, Jun 19, 2013 at 6:20 PM, Stephen Frost <sfr...@snowman.net> wrote:
>>>>> * Claudio Freire (klaussfre...@gmail.com) wrote:
[...]
>>
>> The only bottleneck here, is WAL archiving. This assumes you can
>> afford WAL archiving at least to a local filesystem, and that the WAL
>> compressor is able to cope with WAL bandwidth. But I have no reason to
>> think you'd be able to cope with dirty-map updates anyway if you were
>> saturating the WAL compressor, as the compressor is more efficient on
>> amortized cost per transaction than the dirty-map approach.
>
> Thank you for detailed explanation. I will think more about this.
Just for the record, I have been mulling over this idea for a number of
months. I even talked about it with Dimitri Fontaine some weeks ago over
some beers :)

The idea came from a customer during a training session who explained to
me the difference between differential and incremental backups in Oracle.

My approach would have been to create a standalone tool (say
pg_walaggregate) which takes a bunch of WAL files from the archives and
merges them into a single big file, keeping only the very last version of
each page after aggregating all of their changes. The resulting file,
aggregating all the changes from the given WAL files, would be the
"differential backup".

A differential backup built from the WAL files between W1 and Wn would
allow recovering to the time of Wn much faster than replaying all the
WALs between W1 and Wn, and it would save a lot of space.

I was hoping to find some time to dig into this idea, but since the
subject came up here, here are my 2¢!

Cheers,
--
Jehan-Guillaume (ioguix) de Rorthais

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
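[Editor's note: the aggregation pass described above can be sketched as
follows. This is a hypothetical illustration, not part of PostgreSQL or
any real pg_walaggregate tool: real WAL records are mostly page deltas
rather than full-page images, so an actual implementation would have to
replay each delta onto the prior page image. The sketch assumes, for
simplicity, that every record carries a full page image keyed by a
(tablespace, relation, block) tuple.]

```python
def aggregate_wal(records):
    """Aggregate a stream of WAL records into a "differential backup".

    `records` is an iterable of (lsn, page_id, page_image) tuples in
    LSN order, where page_id is a hypothetical (tablespace, relfilenode,
    block) tuple. Only the newest image of each touched page is kept,
    so the result is one entry per page, no matter how many times the
    page was modified between W1 and Wn.
    """
    latest = {}  # page_id -> (lsn, page_image)
    for lsn, page_id, image in records:
        # Records arrive in LSN order, so later entries overwrite earlier
        # ones; this is the "keep only the very last version" step.
        latest[page_id] = (lsn, image)
    return {pid: img for pid, (_lsn, img) in latest.items()}

# Three records touching two pages: only the newest version of page
# (1663, 16384, 0) survives aggregation.
records = [
    (100, (1663, 16384, 0), b"v1"),
    (101, (1663, 16385, 0), b"a1"),
    (102, (1663, 16384, 0), b"v2"),
]
```

Recovering to the time of Wn then means restoring the base backup and
applying each aggregated page once, instead of replaying every
intermediate change.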