On Tue, Feb 14, 2012 at 12:25 PM, Greg Smith <g...@2ndquadrant.com> wrote:
> On 02/14/2012 01:45 PM, Greg Smith wrote:
>>
>> scale=1000, db is 94% of RAM; clients=4
>> Version TPS
>> 9.0  535
>> 9.1  491 (-8.2% relative to 9.0)
>> 9.2  338 (-31.2% relative to 9.1)
>
>
> A second pass through this data noted that the maximum number of buffers
> cleaned by the background writer is <=2785 in 9.0/9.1, while it goes as
> high as 17345 in 9.2.
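
(Both of those counters are exposed in pg_stat_bgwriter, for anyone who
wants to reproduce this; sampling the view before and after each run and
taking the difference should work, roughly:)

  SELECT buffers_clean,     -- buffers written by the background writer
         maxwritten_clean   -- cleaning scans stopped at bgwriter_lru_maxpages
  FROM pg_stat_bgwriter;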

There is something strange about the data for Set 4 (9.1) at scale 1000.

The buf_alloc count varies widely from run to run in that series (by a
factor of 60 from max to min), yet the TPS hardly varies at all.

How can that be?  If a transaction needs a page that is not already in
shared_buffers, it has to allocate a buffer.  So the only thing that could
lower the allocation count would be a higher cache hit rate, right?  How
could the hit rate vary so much from run to run at the same scale?
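
(In case it helps anyone digging into this: buf_alloc is
pg_stat_bgwriter.buffers_alloc, and a rough hit rate for the test database
can be pulled from pg_stat_database, again by sampling before and after a
run.  The 'pgbench' datname below is just a guess at what the test database
is called:)

  -- buffers allocated since the stats were last reset
  SELECT buffers_alloc FROM pg_stat_bgwriter;

  -- approximate shared_buffers hit rate for one database
  SELECT blks_hit::float8 / nullif(blks_hit + blks_read, 0) AS hit_rate
  FROM pg_stat_database
  WHERE datname = 'pgbench';

(Keep in mind blks_read counts reads requested from the kernel, so pages
served from the OS cache still show up as misses here; this only measures
the shared_buffers hit rate.)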


Cheers,

Jeff
