Jan,

> I too don't think that this approach will retain the checkpoint
> smoothing effect the current implementation has.
>
> The real problem is that the "cleaner" the buffer pool is, the longer
> the scan for dirty buffers will take, because the dirty blocks tend
> to be at the very end of the scan order. The real solution for this
> would be not to scan the whole pool, but to maintain a separate chain
> of only dirty buffers in LRU order.
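(For illustration, here's a minimal sketch of the kind of separate
dirty-buffer chain Jan describes -- all names are hypothetical, this is
not the actual bufmgr code, and locking is omitted:)

#include <stdbool.h>
#include <stddef.h>
#include <stdio.h>

/* An intrusive doubly-linked list kept in LRU order, so a background
 * writer can walk only the dirty buffers instead of scanning the
 * whole pool. */
typedef struct Buffer
{
    bool           dirty;
    struct Buffer *dirty_prev;  /* links valid only while dirty */
    struct Buffer *dirty_next;
} Buffer;

static Buffer *dirty_head;      /* least recently dirtied */
static Buffer *dirty_tail;      /* most recently dirtied  */

/* Called when a buffer is first dirtied: append at the tail, O(1),
 * so the head is always the oldest dirty page. */
static void
mark_dirty(Buffer *buf)
{
    if (buf->dirty)
        return;                 /* already on the chain */
    buf->dirty = true;
    buf->dirty_prev = dirty_tail;
    buf->dirty_next = NULL;
    if (dirty_tail)
        dirty_tail->dirty_next = buf;
    else
        dirty_head = buf;
    dirty_tail = buf;
}

/* Called after a buffer is written out: unlink it, O(1). */
static void
mark_clean(Buffer *buf)
{
    if (!buf->dirty)
        return;
    if (buf->dirty_prev)
        buf->dirty_prev->dirty_next = buf->dirty_next;
    else
        dirty_head = buf->dirty_next;
    if (buf->dirty_next)
        buf->dirty_next->dirty_prev = buf->dirty_prev;
    else
        dirty_tail = buf->dirty_prev;
    buf->dirty = false;
}

int
main(void)
{
    Buffer bufs[4] = {0};

    mark_dirty(&bufs[2]);
    mark_dirty(&bufs[0]);
    mark_dirty(&bufs[3]);
    mark_clean(&bufs[0]);

    /* Walk only the dirty chain, oldest first -- the point of the
     * exercise: no full-pool scan needed. */
    for (Buffer *b = dirty_head; b != NULL; b = b->dirty_next)
        printf("dirty buffer %ld\n", (long) (b - bufs));
    return 0;
}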
Hmmm, I've not seen the dirty-blocks-at-the-end behavior Jan describes.
For example, with people who are having trouble with checkpoint spikes
on Linux, I've taken to recommending that they call sync() (via cron)
every 5-10 seconds (thanks, Bruce, for the suggestion!). Believe it or
not, this does help smooth out the spikes and give better overall
performance in a many-small-writes situation. (A toy version of that
sync loop is in the P.S.)

Simon, one of the problems with the OSDL-DBT2 test is that it's too
steady. DBT2 generates a constant stream of small writes at a regular,
predictable rate. This does not, in fact, match any real-world
application I know. To allow DBT2 to be used for real bgwriter
benchmarking, Mark would need to change the following:

1) Randomize the timing of the commits, so that sometimes there are
only 30 writes/minute and other times there are 300. A timing pattern
that produces a "sine wave" with occasional random spikes would be
best; in my experience, OLTP applications tend to have wave-like
spikes and lulls.

2) Include a sprinkling of random or regular "large writes" which
affect several tables and thousands of rows. For example, once per
hour, change 10,000 pending orders to "shipped" and archive 10,000
"old orders" to an archive table.

(A rough sketch of a load generator along these lines is in the
P.P.S.)

However, this would require "splitting" DBT2: there's the DBT2 which
simulates the TPC-C test, and the DBT2 which will help us tune for
real-world applications. The two tests will not be the same.

--
Josh Berkus
Aglio Database Solutions
San Francisco
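P.S. Since standard cron can't fire more often than once a minute,
"every 5-10 seconds" in practice means a small looping program started
from cron or an init script. A toy C version, purely illustrative (the
5-second interval is just a starting point to tune):

#include <unistd.h>

int
main(void)
{
    const unsigned int interval = 5;  /* seconds between sync() calls */

    for (;;)
    {
        sync();          /* ask the kernel to flush all dirty buffers */
        sleep(interval);
    }
    return 0;            /* not reached */
}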
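P.P.S. And here's the sort of pacing I mean for (1) and (2) -- a
hypothetical sketch, not actual DBT2 driver code. It prints a
per-second target rate that swings between 30 and 300 writes/minute on
a sine wave, adds occasional random spikes, and flags an hourly batch
of "large writes":

#include <math.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define PI 3.14159265358979323846

/* Target write rate (writes/minute) at a given elapsed second: a sine
 * wave between 30 and 300 over a 10-minute period, with a ~1% chance
 * of a random spike on top. All constants are made up for
 * illustration and would need tuning against a real workload. */
static double
target_rate(int elapsed_sec)
{
    double base = 165.0 + 135.0 * sin(2.0 * PI * elapsed_sec / 600.0);

    if (rand() % 100 == 0)
        base += 300.0;
    return base;
}

int
main(void)
{
    srand((unsigned int) time(NULL));

    /* Emit a two-hour schedule a benchmark driver could follow. */
    for (int sec = 0; sec <= 7200; sec++)
    {
        if (sec > 0 && sec % 3600 == 0)
            printf("t=%5ds  hourly batch: mark 10,000 orders shipped, "
                   "archive 10,000 old orders\n", sec);
        printf("t=%5ds  target rate = %6.1f writes/min\n",
               sec, target_rate(sec));
    }
    return 0;
}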