On 4/21/15 4:07 PM, Peter Eisentraut wrote:
> On 4/21/15 4:45 PM, Jim Nasby wrote:
> In order for a background worker to keep up with some of the workloads that have been presented as counterexamples, you'd need multiple background workers operating in parallel and preferring to work on certain parts of a table. That would require a lot more sophisticated job management than we currently have for, say, autovacuum.
My thought was that the foreground queries would send page IDs to the bgworker via a shm_mq. If the queries have to do much waiting on IO at all, I'd expect the bgworker to be able to keep pace with a bunch of them, since it's just grabbing buffers that are already in the pool (and only those in the pool; it wouldn't make sense for it to pull a page back from the kernel, let alone from disk).
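Roughly, the consuming side could sit on the existing shm_mq API. This is just a sketch; PruneRequest, prune_mqh and prune_if_cached() are all made up for illustration, and nothing like this exists in core today:

#include "postgres.h"
#include "storage/block.h"
#include "storage/relfilenode.h"
#include "storage/shm_mq.h"

/* Invented message format: just enough to find the page again. */
typedef struct PruneRequest
{
    RelFileNode node;       /* which relation */
    BlockNumber blkno;      /* which page to consider pruning */
} PruneRequest;

/* Stand-in: check shared_buffers and prune only if the page is resident. */
static void prune_if_cached(PruneRequest *req);

static void
prune_worker_loop(shm_mq_handle *prune_mqh)
{
    for (;;)
    {
        Size        nbytes;
        void       *data;
        shm_mq_result res;

        /* nowait = false: sleep until some backend queues a page ID */
        res = shm_mq_receive(prune_mqh, &nbytes, &data, false);
        if (res == SHM_MQ_DETACHED)
            break;              /* senders gone; time to exit */
        if (res != SHM_MQ_SUCCESS || nbytes != sizeof(PruneRequest))
            continue;

        /*
         * The returned data is only valid until the next receive, so act
         * on it before looping.  Pruning must only touch buffers already
         * in the pool; pulling pages back from the kernel or disk would
         * defeat the purpose.
         */
        prune_if_cached((PruneRequest *) data);
    }
}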
We'd need to code this so that if the queue fills up, the query doesn't block: we just skip that opportunity to prune. I think that'd be fine.
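The sending side might then be as simple as this (same invented PruneRequest and prune_mqh as above); passing nowait = true to shm_mq_send() is the whole trick:

static void
queue_prune_candidate(shm_mq_handle *prune_mqh,
                      RelFileNode node, BlockNumber blkno)
{
    PruneRequest req;

    req.node = node;
    req.blkno = blkno;

    /*
     * nowait = true: if the queue is full, shm_mq_send() returns
     * SHM_MQ_WOULD_BLOCK immediately and we simply drop this pruning
     * opportunity; SHM_MQ_DETACHED (worker gone) is likewise ignored.
     */
    (void) shm_mq_send(prune_mqh, sizeof(req), &req, true);
}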
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com