On Tue, Dec 9, 2014 at 12:46 AM, Amit Kapila <amit.kapil...@gmail.com> wrote:
>> I agree with this.  For a first version, I think it's OK to start a
>> worker up for a particular sequential scan and have it help with that
>> sequential scan until the scan is completed, and then exit.  It should
>> not, as the present version of the patch does, assign a fixed block
>> range to each worker; instead, workers should allocate a block or
>> chunk of blocks to work on until no blocks remain.  That way, even if
>> every worker but one gets stuck, the rest of the scan can still
>> finish.
>
> I will check on this point and see if it is feasible to do something
> along those lines.  Currently, at the Executor initialization phase, we
> set the scan limits and then, during the Executor run phase, use
> heap_getnext to fetch the tuples accordingly.  Doing it dynamically
> means that at the ExecutorRun phase we need to reset the scan limit
> for which page/pages to scan; I still have to check whether there is
> any problem with such an idea.  Do you have any different idea in mind?
Hmm.  Well, it looks like there are basically two choices: you can
either (as you propose) deal with this above the level of the
heap_beginscan/heap_getnext API, by scanning one or a few pages at a
time and then resetting the scan to a new starting page via
heap_setscanlimits; or, alternatively, you can add a callback to
HeapScanDescData that, if non-NULL, will be invoked to get the next
block number to scan.  I'm not entirely sure which is better.  Rough,
untested sketches of both approaches follow below my sig.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
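
To make the dynamic block allocation described in the quoted text
concrete, here's a rough, completely untested sketch.  The
ParallelScanShared struct and next_chunk() are invented names, and it
assumes the atomics support in port/atomics.h; a spinlock-protected
counter would work just as well:

/* Hypothetical shared state: a counter that workers advance
 * atomically, so no fixed block range is ever assigned to a worker. */
#include "postgres.h"
#include "port/atomics.h"
#include "storage/block.h"

typedef struct ParallelScanShared
{
    BlockNumber      phs_nblocks;   /* total blocks in the relation */
    pg_atomic_uint32 phs_nextblock; /* next block to hand out */
} ParallelScanShared;

/*
 * Claim the next chunk of up to chunk_size blocks.  Returns false once
 * the relation is exhausted, so even if every other worker gets stuck,
 * the remaining one can still finish the scan.
 */
static bool
next_chunk(ParallelScanShared *shared, uint32 chunk_size,
           BlockNumber *start, BlockNumber *nblocks)
{
    uint32      first = pg_atomic_fetch_add_u32(&shared->phs_nextblock,
                                                chunk_size);

    if (first >= shared->phs_nblocks)
        return false;

    *start = (BlockNumber) first;
    *nblocks = Min(chunk_size, shared->phs_nblocks - first);
    return true;
}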
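
The first option would then look roughly like this in each worker
(again untested; scan_in_chunks() is an invented name, and error
handling, snapshot management, and executor integration are all
omitted).  Note that heap_rescan resets the scan before
heap_setscanlimits restricts it to the newly claimed chunk:

#include "access/heapam.h"

static void
scan_in_chunks(Relation rel, Snapshot snapshot,
               ParallelScanShared *shared, uint32 chunk_size)
{
    HeapScanDesc scan = heap_beginscan(rel, snapshot, 0, NULL);
    BlockNumber  start;
    BlockNumber  nblocks;

    while (next_chunk(shared, chunk_size, &start, &nblocks))
    {
        HeapTuple   tuple;

        /* Reset the scan, then restrict it to just this chunk. */
        heap_rescan(scan, NULL);
        heap_setscanlimits(scan, start, nblocks);

        while ((tuple = heap_getnext(scan, ForwardScanDirection)) != NULL)
        {
            /* ... hand the tuple back to the executor ... */
        }
    }

    heap_endscan(scan);
}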
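
For the second option, the scan descriptor would grow something like
the following (field and typedef names invented, and this is only a
fragment, not a complete patch); heapgettup() would then consult the
callback, when set, instead of unconditionally advancing to the next
block:

/* Added to HeapScanDescData in access/relscan.h: */
typedef BlockNumber (*NextBlockCB) (HeapScanDesc scan, void *cb_arg);

    NextBlockCB  rs_nextblock_cb;  /* if non-NULL, supplies blocks */
    void        *rs_nextblock_arg; /* opaque state for the callback */

/*
 * A callback compatible with that hypothetical hook, pulling one block
 * at a time from the shared allocator sketched above.  Returning
 * InvalidBlockNumber tells heapgettup() that the scan is done.
 */
static BlockNumber
parallel_next_block(HeapScanDesc scan, void *cb_arg)
{
    ParallelScanShared *shared = (ParallelScanShared *) cb_arg;
    uint32      next = pg_atomic_fetch_add_u32(&shared->phs_nextblock, 1);

    if (next >= shared->phs_nblocks)
        return InvalidBlockNumber;

    return (BlockNumber) next;
}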