On Sun, Feb 28, 2010 at 5:28 AM, Greg Smith <g...@2ndquadrant.com> wrote:
> The idea of the workaround is that if you have a single long-running query
> to execute, and you want to make sure it doesn't get canceled because of a
> vacuum cleanup, you just have it connect back to the master to keep an open
> snapshot the whole time. That's basically the same idea that
> vacuum_defer_cleanup_age implements, except you don't have to calculate a
> value--you just hold open the snapshot to do it.
This sounds like it would require a separate connection back to the master for each client on the replica, which would be a pretty big burden on the master.

Also, I'm not sure this actually works. When your client makes this additional connection to the master, it's connecting at some transaction in the future from the slave's point of view. The master could already have vacuumed away some record which the snapshot the client gets on the slave will still have in view. Even if you defer taking the snapshot on the slave until after connecting to the master, it could still be "in the past" compared to the xmin held on the master.

I think to make this work you would have to connect to the master, establish a snapshot there, fetch pg_current_xlog_location(), then poll the slave and wait until it reaches that same position -- and only then perform your query, taking care to establish a fresh snapshot for it, such as by starting a new transaction on the slave (a rough sketch of that sequence is in the P.S. below).

That's a lot of effort to go to. Still, it's a handy practical trick even if it isn't 100% guaranteed to work. But I don't think it provides the basis for something we can bake in.

--
greg
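
P.S. For what it's worth, here's roughly what that whole dance could look
like from the client side. This is only a sketch, not something I'm
proposing we ship: the connection strings, the query, the 0.1s poll
interval, and the use of psycopg2 are all made up for illustration, and it
leans on SERIALIZABLE isolation on the master connection to make sure that
connection really does hold a snapshot open for the whole duration. The
function names are the current ones (pg_current_xlog_location() on the
master, pg_last_xlog_replay_location() on the standby).

import time
import psycopg2

MASTER_DSN = "host=master dbname=app"    # hypothetical
STANDBY_DSN = "host=standby dbname=app"  # hypothetical

def lsn_to_int(lsn):
    # xlog locations come back as text like '0/3000020'
    hi, lo = lsn.split("/")
    return (int(hi, 16) << 32) | int(lo, 16)

# 1. Connect back to the master and pin a snapshot there, so vacuum
#    can't clean up rows the standby query may still need to see.
master = psycopg2.connect(MASTER_DSN)
master.autocommit = True  # manage the transaction by hand
mcur = master.cursor()
mcur.execute("BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE")
mcur.execute("SELECT 1")  # actually takes the snapshot
mcur.execute("SELECT pg_current_xlog_location()")
target = lsn_to_int(mcur.fetchone()[0])

# 2. Wait for the slave to replay up to the master's position, so the
#    snapshot taken there isn't "in the past" relative to the snapshot
#    pinned on the master.
standby = psycopg2.connect(STANDBY_DSN)
standby.autocommit = True
scur = standby.cursor()
while True:
    scur.execute("SELECT pg_last_xlog_replay_location()")
    loc = scur.fetchone()[0]
    if loc is not None and lsn_to_int(loc) >= target:
        break
    time.sleep(0.1)

# 3. Only now run the long query; in autocommit mode it gets a fresh
#    snapshot of its own, taken after the slave has caught up.
scur.execute("SELECT count(*) FROM big_table")  # stand-in query
print(scur.fetchone()[0])

# 4. Release the pinned snapshot on the master once the query is done.
mcur.execute("ROLLBACK")
master.close()
standby.close()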