On Mon, 2008-09-29 at 11:24 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > On Mon, 2008-09-29 at 10:13 -0400, Tom Lane wrote:
> >> ... If we crash and restart, we'll have to get to the end
> >> of this file before we start letting backends in; which might be further
> >> than we actually got before the crash, but not too much further because
> >> we already know the whole WAL file is available.
>
> > Don't want to make it per file though. Big systems can whizz through WAL
> > files very quickly, so we either make it a big number e.g. 255 files per
> > xlogid, or we make it settable (and recorded in pg_control).
>
> I think you are missing the point I made above. If you set the
> okay-to-resume point N files ahead, and then the master stops generating
> files so quickly, you've got a problem --- it might be a long time until
> the slave starts letting backends in after a crash/restart.
>
> Fetching a new WAL segment from the archive is expensive enough that an
> additional write/fsync per cycle doesn't seem that big a problem to me.
> There's almost certainly a few fsync-equivalents going on in the
> filesystem to create and delete the retrieved segment files.
Didn't miss yer point, just didn't agree. :-)

I'll put it at one (1) and then wait for any negative perf reports.
No need to worry about things like that until later.

-- 
 Simon Riggs           www.2ndQuadrant.com
 PostgreSQL Training, Services and Support
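[Editor's note: a minimal, self-contained sketch of the per-segment bookkeeping being discussed, not PostgreSQL source. The helper name record_resume_point() and the file resume_point.tmp are hypothetical; the point is only to show the cost shape Tom describes: one extra write plus fsync each time a restored WAL segment has been fully replayed.]

```c
/*
 * Sketch only: after each WAL segment is restored from the archive and
 * replayed, record the end of that segment as the "okay to resume" point
 * and fsync it.  A crashed standby would then refuse to let backends in
 * until it has replayed at least that far again.  All names here are
 * illustrative, not actual backend functions.
 */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

/* Hypothetical: persist the LSN of the last fully-replayed segment. */
static void
record_resume_point(const char *path, unsigned long long segment_end_lsn)
{
    char buf[64];
    int  len = snprintf(buf, sizeof(buf), "%llu\n", segment_end_lsn);

    int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd < 0)
    {
        perror("open");
        exit(1);
    }
    if (write(fd, buf, len) != len)
    {
        perror("write");
        exit(1);
    }
    if (fsync(fd) != 0)     /* the one extra fsync per cycle Tom mentions */
    {
        perror("fsync");
        exit(1);
    }
    close(fd);
}

int
main(void)
{
    /* Pretend we restore and replay three 16MB segments from the archive. */
    unsigned long long seg_size = 16ULL * 1024 * 1024;
    unsigned long long lsn = 0;

    for (int i = 0; i < 3; i++)
    {
        lsn += seg_size;                         /* segment fully replayed */
        record_resume_point("resume_point.tmp", lsn);
        printf("safe resume point now %llu\n", lsn);
    }
    return 0;
}
```

Updating the point once per segment (rather than once per N segments) is what keeps the post-crash catch-up window small even when the master slows down, at the cost of that extra write/fsync each cycle.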