Hi,
Yes, with 500K shared buffers and a large number of backends we could achieve
noticeable savings with this. That is also why it will be difficult to show
the performance gains by running just pgbench/dbt2 on medium-scale machines.
One way of looking at this could be that the memory saved here could
Tom Lane wrote:
> NikhilS <[EMAIL PROTECTED]> writes:
>> What is the opinion of the list as to the best way of measuring if the
>> following implementation is ok?
>> http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
>> As mentioned in earlier mails, this will reduce the per-backend
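The per-backend figure Nikhil refers to can be sketched with a quick calculation. This is only a back-of-envelope sketch, assuming each PrivateRefCount entry is a 4-byte int32 with one entry per shared buffer (the entry size is an assumption here, not stated in the thread):

```python
# Rough sizing of the per-backend PrivateRefCount array discussed above.
# Assumption (not from the thread): 4 bytes per entry, one entry per
# shared buffer.
ENTRY_SIZE = 4  # bytes

def private_refcount_bytes(shared_buffers):
    """Memory one backend spends on its PrivateRefCount array."""
    return shared_buffers * ENTRY_SIZE

per_backend = private_refcount_bytes(500_000)  # 2,000,000 bytes (~1.9 MB)
total = per_backend * 200                      # ~381 MB across 200 backends
print(per_backend, total)
```

Under these assumptions, 500K shared buffers cost each backend roughly 2 MB of local array, which is why the savings only become interesting when the buffer count and the backend count are both large.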
---
NikhilS <[EMAIL PROTECTED]> writes:
> What is the opinion of the list as to the best way of measuring if the
> following implementation is ok?
> http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
> As mentioned in earlier mails, this will reduce the per-backend usage of
> memory by a
---
Hi,
What is the opinion of the list as to the best way of measuring if the
following implementation is ok?
http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
As mentioned in earlier mails, this will reduce the per-backend usage of
memory by an amount which will be a fraction (sin
---
Added to TODO:
* Consider decreasing the amount of memory used by PrivateRefCount
http://archives.postgresql.org/pgsql-hackers/2006-11/msg00797.php
http://archives.postgresql.org/pgsql-hackers/2007-01/msg00752.php
---
Hi,
Most likely a waste of development effort --- have you got any evidence
of a real effect here? With 200 max_connections the size of the arrays
is still less than 10% of the space occupied by the buffers themselves,
ergo there isn't going to be all that much cache-thrashing compared to
what
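Tom's "less than 10%" figure can be checked with simple arithmetic. A minimal sketch, again assuming 8 KB buffer pages and 4-byte int32 refcount entries (both assumptions, not quoted from the thread):

```python
# Check of the <10% claim: total PrivateRefCount arrays vs. the shared
# buffers themselves. Assumptions: 8 KB pages, 4-byte entries.
BLOCK_SIZE = 8192  # bytes per shared buffer page (assumed)
ENTRY_SIZE = 4     # bytes per PrivateRefCount entry (assumed)

def refcount_overhead(n_buffers, max_connections):
    """Ratio of all backends' refcount arrays to buffer-pool size."""
    buffers_bytes = n_buffers * BLOCK_SIZE
    arrays_bytes = n_buffers * ENTRY_SIZE * max_connections
    return arrays_bytes / buffers_bytes

# 200 backends: 200 * 4 / 8192 ~= 9.8%, just under the 10% bound.
print(refcount_overhead(500_000, 200))
```

Note that the ratio reduces to max_connections * ENTRY_SIZE / BLOCK_SIZE, so under these assumptions it is about 9.8% for 200 connections regardless of how large shared_buffers is.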