On Thu, 5 Jul 2007, Tom Lane wrote:

> This would give us a safety margin such that buffers_to_clean is not less than the largest demand observed in the last 100 iterations...and it takes quite a while for the memory of a demand spike to be forgotten completely.

If you test this strategy even on a steady load, I'd expect you'll find large spikes in allocations during the occasional period where conditions are just right to pull in a bunch of buffers. If you let that maximum linger for 100 iterations, you'll write far more buffers than you need. That's what I saw when I tried to remember too much allocation history in the version of the auto LRU tuner I worked on. For example, with 32000 buffers and pgbench running UPDATEs as fast as possible, I sometimes hit 1500 allocations in an interval, while the steady-state allocation level was closer to 500.
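To make the concern concrete, here's a rough sketch in C of the lookback-max approach as I read your proposal (all names are mine, not from any actual patch); note how a single spike pins the result for the next 100 iterations:

    /* Hypothetical sketch: size cleaning to the largest demand seen
     * in the last 100 iterations.  One 1500-allocation spike keeps
     * the return value at 1500 until it ages out of the window,
     * even if steady state is back down around 500.
     */
    #define LOOKBACK 100

    static int  alloc_history[LOOKBACK];
    static int  history_pos = 0;

    static int
    BuffersToClean(int recent_alloc)
    {
        int     max_demand = 0;
        int     i;

        alloc_history[history_pos] = recent_alloc;
        history_pos = (history_pos + 1) % LOOKBACK;

        for (i = 0; i < LOOKBACK; i++)
            if (alloc_history[i] > max_demand)
                max_demand = alloc_history[i];

        return max_demand;
    }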

I ended up settling on max(moving average of the last 16, most recent allocation), and that seemed to work pretty well without being too wasteful from excessive writes. Playing with multiples of 2, 8 samples was definitely not enough memory to smooth usefully, while 32 seemed a little sluggish on the way into a spike and wasteful on the way out.
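Roughly what that looks like in C, assuming a plain ring-buffer average (again a sketch with names I made up, not the actual tuner code):

    /* Sketch of max(16-sample moving average, most recent allocation).
     * The average forgets old spikes within ~16 intervals, while the
     * max with recent_alloc keeps us from undershooting a new burst.
     */
    #define MOVING_AVG_SAMPLES 16

    static int  alloc_window[MOVING_AVG_SAMPLES];
    static int  window_pos = 0;
    static int  window_sum = 0;

    static int
    BuffersToClean(int recent_alloc)
    {
        int     moving_avg;

        /* Replace the oldest sample, maintaining a running sum */
        window_sum -= alloc_window[window_pos];
        alloc_window[window_pos] = recent_alloc;
        window_sum += recent_alloc;
        window_pos = (window_pos + 1) % MOVING_AVG_SAMPLES;

        moving_avg = window_sum / MOVING_AVG_SAMPLES;

        return (recent_alloc > moving_avg) ? recent_alloc : moving_avg;
    }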

At the default interval, 16 iterations looks back over the previous 3.2 seconds. I have a feeling the proper tuning for this should be time-based: decide how far back you want to look, then compute the number of iterations from that.
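The conversion itself is trivial; something along these lines, where the delay parameter is bgwriter_delay in milliseconds and lookback_ms would be a new, hypothetical knob:

    /* Derive the iteration count from a time-based lookback setting.
     * With the default bgwriter_delay of 200ms and a 3200ms lookback,
     * this gives the 16 iterations discussed above.
     */
    static int
    LookbackSamples(int lookback_ms, int bgwriter_delay_ms)
    {
        int     samples = lookback_ms / bgwriter_delay_ms;

        return (samples < 1) ? 1 : samples;
    }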

--
* Greg Smith [EMAIL PROTECTED] http://www.gregsmith.com Baltimore, MD
