On Thu, Sep 25, 2014 at 10:02 AM, Merlin Moncure <mmonc...@gmail.com> wrote:
> On Thu, Sep 25, 2014 at 8:51 AM, Robert Haas <robertmh...@gmail.com> wrote:
>> 1. To see the effect of reduce-replacement-locking.patch, compare the
>> first TPS number in each line to the third, or the second to the
>> fourth. At scale factor 1000, the patch wins in all of the cases with
>> 32 or more clients and exactly half of the cases with 1, 8, or 16
>> clients. The variations at low client counts are quite small, and the
>> patch isn't expected to do much at low concurrency levels, so that's
>> probably just random variation. At scale factor 3000, the situation
>> is more complicated. With only 16 bufmappinglocks, the patch gets its
>> biggest win at 48 clients, and by 96 clients it's actually losing to
>> unpatched master. But with 128 bufmappinglocks, it wins - often
>> massively - on everything but the single-client test, which is a small
>> loss, hopefully within experimental variation.
>>
>> Comments?
>
> Why stop at 128 mapping locks? Theoretical downsides to having more
> mapping locks have been mentioned a few times, but has this ever been
> measured? I'm starting to wonder if the # of mapping locks should be
> dependent on some other value, perhaps the # of shared buffers...
Good question. My belief is that the number of buffer mapping locks
required to avoid serious contention will be roughly proportional to the
number of hardware threads. At the time the value 16 was chosen, there
were probably not more than 8-core CPUs in common use; but now we've got
a machine with 64 hardware threads and, what do you know, it wants 128
locks.

I think the long-term solution here is that we need a lock-free hash
table implementation for our buffer mapping tables, because I'm pretty
sure that just cranking the number of locks up and up is going to start
to have unpleasant side effects at some point. We may be able to buy a
few more years by just cranking it up, though.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers