Hi,

I think this is ready to go. I was wondering why it merely doubles the number of buffers, as described in earlier comments. That seemed like a very small increase, and considering how much hardware has grown over the last few years, it would probably fail to help some of the larger boxes.

But it turns out that's not what the patch does. The change is this:

> -  return Min(16, Max(4, NBuffers / 1024));
> +  return Min(256, Max(4, NBuffers / 512));

So it does two things: (a) it increases the maximum from 16 to 256 buffers (so 16x), and (b) it doubles how quickly we get there. Until now we added 1 buffer per 1024 shared buffers, so the maximum of 16 was reached at 128MB of shared buffers. The patch lowers the step to 512 buffers, which means the new maximum of 256 is reached at 1GB.

So this actually increases the number of commit_ts buffers 16x, not 2x. That seems reasonable, I guess. The increase is smaller for systems with less than 1GB of shared buffers, but IMO that's a tiny minority of the production systems busy enough for this patch to make a difference.
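
Just to illustrate the arithmetic, here's a rough standalone sketch of both formulas (with local stand-ins for the Min/Max macros from c.h, and assuming the default 8kB block size, so 1MB of shared_buffers is 128 buffers):

#include <stdio.h>

/* local stand-ins for the Min/Max macros from c.h */
#define Min(a,b) ((a) < (b) ? (a) : (b))
#define Max(a,b) ((a) > (b) ? (a) : (b))

/* old and new sizing formulas for the commit_ts SLRU buffers */
static int old_commit_ts_buffers(int nbuffers) { return Min(16, Max(4, nbuffers / 1024)); }
static int new_commit_ts_buffers(int nbuffers) { return Min(256, Max(4, nbuffers / 512)); }

int main(void)
{
    /* shared_buffers sizes in MB; with 8kB pages, 1MB = 128 buffers */
    int sizes_mb[] = {128, 512, 1024, 4096};

    for (int i = 0; i < 4; i++)
    {
        int nbuffers = sizes_mb[i] * 128;

        printf("shared_buffers = %4d MB: old = %3d, new = %3d\n",
               sizes_mb[i],
               old_commit_ts_buffers(nbuffers),
               new_commit_ts_buffers(nbuffers));
    }

    return 0;
}

That prints 16 vs. 32 at 128MB, 16 vs. 128 at 512MB, and 16 vs. 256 from 1GB up, which is where the full 16x shows up.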

The other question is, of course, what overhead this change might have on workloads that don't have issues with commit_ts buffers (i.e. they use commit_ts, but would be fine with just the 16 buffers). But my guess is this is negligible, based on how simple the SLRU code is and my previous experiments with SLRU.

So +1 to just get this committed, as it is.


regards

--
Tomas Vondra
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
