On Mon, Mar 1, 2010 at 5:32 PM, Josh Berkus <j...@agliodbs.com> wrote:
> On 2/28/10 7:12 PM, Robert Haas wrote:
>>> However, I'd still like to hear from someone with the requisite
>>> technical knowledge whether capturing and retrying the current query in
>>> a query cancel is even possible.
>>
>> I'm not sure who you want to hear from here, but I think that's a dead end.
>
> "dead end" as in "too hard to implement"? Or for some other reason?
I think it's probably too hard to implement for the extremely limited set of circumstances in which it can work. See the other responses for some of the problems. There are others, too. Suppose that the plan for some particular query is to read a table with a hundred million records, sort it, and then do whatever with the results. After reading the first 99 million records, the transaction is cancelled and we have to start over. Maybe someone will say, fine, no problem - but it's certainly going to be user-visible. Especially if we retry more than once.

I think we should focus our efforts initially on reducing the frequency of spurious cancels. What we're essentially trying to do here is refute the proposition "the WAL record I just replayed might change the result of this query". It's possibly equivalent to the halting problem (and certainly impossibly hard) to refute this proposition in every case where it is in fact false, but it sounds like what we have in place right now doesn't come close to doing as well as can be done. (There's a rough sketch of the kind of test I have in mind in the P.S. below.)

I just read through the current documentation and it doesn't really seem to explain very much about how HS decides which queries to kill. Can someone try to flesh that out a bit? It also uses the term "buffer cleanup lock", which doesn't seem to be used anywhere else in the documentation (though it does appear in the source tree, including README.HOT).

...Robert
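P.S. To make "refuting the proposition" a bit more concrete, here is a very simplified sketch of the sort of test I mean. This is NOT the code we actually have - the real conflict resolution in the tree is considerably more involved (per-database checks, lock conflicts, timeouts), and the function and parameter names below are invented for illustration:

    #include "postgres.h"
    #include "access/transam.h"

    /*
     * Sketch only (hypothetical function, not what the server actually
     * calls): replaying a cleanup record that removes row versions up to
     * latest_removed_xid can only change the results of a standby query
     * whose snapshot might still need one of those versions, i.e. whose
     * xmin is no newer than latest_removed_xid.  If this returns false
     * for every running backend, we can replay the record without
     * cancelling anything.
     */
    static bool
    cleanup_conflicts_with_snapshot(TransactionId latest_removed_xid,
                                    TransactionId backend_xmin)
    {
        return TransactionIdPrecedesOrEquals(backend_xmin,
                                             latest_removed_xid);
    }

The better we get at proving that test false (or at avoiding cleanup records that trip it in the first place), the fewer spurious cancels users will see.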