On Tue, May 22, 2012 at 4:08 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
> Merlin Moncure <mmonc...@gmail.com> writes:
>> Basically, $subject says it all.  It's pretty easy to reproduce:
>> delete all the records from a large table and execute any sequentially
>> scanning query before autovacuum comes around and cleans the table
>> up; the query will be uncancellable.  This can result in fairly
>> pathological behavior on I/O-constrained systems because the query
>> will bog itself down writing out hint bits for minutes or hours
>> with no way to cancel it and no effective I/O throttling (unlike
>> vacuum).
>
>> IMO, this should be backpatched, and is likely fixed by injecting an
>> interrupts check at a strategic location.  But where?  I was thinking
>> in heapgetpage(), but there are no checks elsewhere in heapam.c, which
>> is a red flag.
>
> heapgetpage() seems like the most reasonable place to me, as there we'll
> only be making the check once per page not once per tuple.
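
For anyone who wants to reproduce it, something along these lines should
do the trick (table name and row count here are arbitrary; the table just
has to be big enough that the scan runs for a while):

-- build a large table, then delete every row, leaving dead tuples behind
CREATE TABLE t AS SELECT generate_series(1, 10000000) AS id;
DELETE FROM t;
-- before autovacuum processes the table, run any sequentially scanning
-- query and try to cancel it mid-flight (e.g. ctrl-C in psql)
SELECT count(*) FROM t;

Before the patch below, the count(*) grinds through hint-bit writes on
every dead tuple and the cancel has no effect until the scan reaches the
end of the table.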

OK, this fixes the issue:

diff --git a/src/backend/access/heap/heapam.c b/src/backend/access/heap/heapam.c
index 0d6fe3f..acef385 100644
--- a/src/backend/access/heap/heapam.c
+++ b/src/backend/access/heap/heapam.c
@@ -287,6 +287,13 @@ heapgetpage(HeapScanDesc scan, BlockNumber page)
 
    LockBuffer(buffer, BUFFER_LOCK_UNLOCK);
 
+   /*
+    * We have to check for signals here because a long series of
+    * pages containing nothing but deleted tuples can cause control
+    * to remain in the scan loop for an unbounded amount of time.
+    */
+   CHECK_FOR_INTERRUPTS();
+
    Assert(ntup <= MaxHeapTuplesPerPage);
    scan->rs_ntuples = ntup;
 }
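
Note that the check should be essentially free in the normal case:
CHECK_FOR_INTERRUPTS() just tests a global flag unless an interrupt is
actually pending.  In the sources being patched it expands to roughly
this (non-Windows variant, from src/include/miscadmin.h):

#define CHECK_FOR_INTERRUPTS() \
    do { \
        if (InterruptPending) \
            ProcessInterrupts(); \
    } while(0)

so doing it once per page, per Tom's suggestion, adds negligible overhead
to the scan.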

merlin
