On 12.12.2014 22:13, Robert Haas wrote:
> On Fri, Dec 12, 2014 at 11:50 AM, Tomas Vondra <t...@fuzzy.cz> wrote:
>> On 12.12.2014 14:19, Robert Haas wrote:
>>> On Thu, Dec 11, 2014 at 5:46 PM, Tomas Vondra <t...@fuzzy.cz> wrote:
>>>
>>>> Regarding the "sufficiently small" - considering today's hardware,
>>>> we're probably talking about gigabytes. On machines with significant
>>>> memory pressure (forcing the temporary files to disk), it might be
>>>> much lower, of course. Of course, it also depends on kernel settings
>>>> (e.g. dirty_bytes/dirty_background_bytes).
>>>
>>> Well, this is sort of one of the problems with work_mem. When we
>>> switch to a tape sort, or a tape-based materialize, we're probably far
>>> from out of memory. But trying to set work_mem to the amount of
>>> memory we have can easily result in a memory overrun if a load spike
>>> causes lots of people to do it all at the same time. So we have to
>>> set work_mem conservatively, but then the costing doesn't really come
>>> out right. We could add some more costing parameters to try to model
>>> this, but it's not obvious how to get it right.
>>
>> Ummm, I don't think that's what I proposed. What I had in mind was a
>> flag "the batches are likely to stay in page cache". Because when it is
>> likely, batching is probably faster (compared to increased load factor).
>
> How will you know whether to set the flag?
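(As an aside, just to put numbers on the overrun you describe - a rough
worst case, with every figure invented purely for illustration:

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative values only -- not taken from this thread. */
        long work_mem_kb   = 256 * 1024;  /* work_mem = 256MB per node */
        int  backends      = 100;         /* queries during a load spike */
        int  nodes_per_qry = 4;           /* sort/hash nodes per query */

        long worst_case_kb = work_mem_kb * backends * nodes_per_qry;
        printf("worst case: %ld GB\n", worst_case_kb / (1024 * 1024));
        return 0;
    }

which prints 100 GB - i.e. a setting that looks harmless per node can
sink the box when everyone hits their limit at once.)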
I don't know. I just wanted to make it clear that I'm not suggesting
messing with work_mem (increasing it or whatever). Or maybe I misread
your comments about the memory overrun - now that I read them again,
perhaps they were meant just as an example of how difficult a problem
this is?

regards
Tomas
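PS: to be a bit more concrete about what I mean by the flag, a rough
sketch below. None of these types, functions or constants exist in
PostgreSQL (the real hash join costing lives in costsize.c and works
differently), and picking cache_budget_bytes is exactly the open
question you raise:

    #include <stdbool.h>

    /* Hypothetical inputs to batched hash join costing. */
    typedef struct BatchCostInput
    {
        double inner_bytes;         /* estimated size of inner relation */
        int    nbatch;              /* number of batches */
        double cache_budget_bytes;  /* page cache we assume is free */
    } BatchCostInput;

    /* Guess whether the batch temp files stay in page cache. */
    static bool
    batches_likely_cached(const BatchCostInput *in)
    {
        /* All batches but the first are written out and read back. */
        double spilled = in->inner_bytes * (in->nbatch - 1) / in->nbatch;

        return spilled <= in->cache_budget_bytes;
    }

    /* Charge cheap (cached) or expensive (disk) I/O per spilled page. */
    static double
    batch_io_cost(const BatchCostInput *in, double pages_spilled)
    {
        double per_page = batches_likely_cached(in) ? 0.1 : 1.0;

        return 2.0 * pages_spilled * per_page;  /* one write + one read */
    }

The point is only that the planner would charge much less for batching
when the spilled data is expected to fit in cache; how to estimate that
budget reliably is the part I don't have an answer for.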