I wrote:
> So it seems it's time to start thinking about how to reduce contention
> for the LockMgrLock.
> ...
> The best idea I've come up with after a bit of thought is to replace the
> shared lock table with N independent tables representing partitions of the
> lock space.
I've committed changes ...
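
That partition-the-lock-space idea can be pictured with a short standalone sketch. Nothing below is PostgreSQL source: the partition count, the two-field tag, and the hash are assumptions made for illustration, and a pthread mutex stands in for each partition's LWLock.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_LOCK_PARTITIONS 16          /* assumed partition count */

typedef struct
{
    uint32_t    dbOid;                  /* simplified stand-in for a LOCKTAG */
    uint32_t    relOid;
} LockTag;

typedef struct
{
    pthread_mutex_t guard;              /* stands in for the partition's LWLock */
    /* a per-partition hash table of lock entries would live here */
} LockPartition;

static LockPartition partitions[NUM_LOCK_PARTITIONS];

/* Route a lock tag to a partition by hashing the whole tag. */
static int
lock_tag_partition(const LockTag *tag)
{
    uint32_t    h = tag->dbOid * 2654435761u ^ tag->relOid * 40503u;

    return (int) (h % NUM_LOCK_PARTITIONS);
}

int
main(void)
{
    LockTag     tag = {1, 16384};
    int         p;

    for (int i = 0; i < NUM_LOCK_PARTITIONS; i++)
        pthread_mutex_init(&partitions[i].guard, NULL);

    p = lock_tag_partition(&tag);

    /* only this partition's guard is taken, not one global LockMgrLock */
    pthread_mutex_lock(&partitions[p].guard);
    printf("lock (%u, %u) handled in partition %d\n",
           (unsigned) tag.dbOid, (unsigned) tag.relOid, p);
    pthread_mutex_unlock(&partitions[p].guard);

    return 0;
}

The shape is the point: two backends locking unrelated relations normally take different partition guards instead of both queueing on a single LockMgrLock.
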
Simon Riggs <[EMAIL PROTECTED]> writes:
> On Thu, 2005-12-08 at 09:44 -0500, Tom Lane wrote:
>> Simon Riggs <[EMAIL PROTECTED]> writes:
>>> You're looking at the number of spins to acquire each lock?
>>
>> Number of semop waits.
>
> I wonder whether that is the thing to measure. That measure doesn't ...
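
For concreteness, the measure being debated there (acquisitions that actually had to sleep, as opposed to ones that merely spun) can be counted with something like the sketch below. This is not the instrumentation patch discussed in the thread; the array size and function names are assumptions.

#include <stdint.h>
#include <stdio.h>

#define NUM_TRACKED_LWLOCKS 64          /* assumed: enough slots for the fixed locks */

static uint64_t block_counts[NUM_TRACKED_LWLOCKS];  /* acquisitions that had to sleep */
static uint64_t spin_counts[NUM_TRACKED_LWLOCKS];   /* acquisitions that only spun */

/* Called from a lock's acquire path once the fast path has failed. */
static void
count_lock_wait(int id, int had_to_block)
{
    if (had_to_block)
        block_counts[id]++;             /* the "number of semop waits" measure */
    else
        spin_counts[id]++;
}

/* At process exit, report which locks forced the most real waits. */
static void
report_lock_waits(void)
{
    for (int id = 0; id < NUM_TRACKED_LWLOCKS; id++)
        if (block_counts[id] || spin_counts[id])
            fprintf(stderr, "lock %d: %llu blocked waits, %llu spins\n",
                    id,
                    (unsigned long long) block_counts[id],
                    (unsigned long long) spin_counts[id]);
}

int
main(void)
{
    /* toy usage: pretend lock 7 blocked twice and spun once */
    count_lock_wait(7, 1);
    count_lock_wait(7, 1);
    count_lock_wait(7, 0);
    report_lock_waits();
    return 0;
}

Only the first counter corresponds to the "number of semop waits" figure; a spin that succeeds without sleeping never reaches it.
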
On Thu, 2005-12-08 at 09:44 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > The output you gave wasn't anything I recognize in the code. Assuming
> > it's not already there, please can you share code you are using to find
> > the evidence, even if it's just privately in some form? ...

On Thu, 2005-12-08 at 10:26 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > The imbalance across partitions would be a major issue because of the
> > difficulty of selecting a well-distributed partitioning key. If you use
> > the LOCKTAG, then operations on the heaviest hit tables ...
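
The balance worry is easy to make concrete: with the LOCKTAG as the partitioning key, every request against a given table lands in the same partition, so a few hot tables can pile onto a few partition locks. The standalone example below uses made-up OIDs, a made-up hash, and only four partitions, purely for illustration.

#include <stdint.h>
#include <stdio.h>

#define NUM_PARTITIONS 4                /* assumed, kept small on purpose */

/* toy hash, not PostgreSQL's hash_any(): just something deterministic */
static int
partition_for_rel(uint32_t relOid)
{
    uint32_t    h = relOid;

    h ^= h >> 16;
    h *= 2654435761u;
    h ^= h >> 13;
    return (int) (h % NUM_PARTITIONS);
}

int
main(void)
{
    /* pretend these are the heaviest-hit tables in the workload */
    uint32_t    hot_tables[] = {16384, 16390, 16397, 16402};
    int         hits[NUM_PARTITIONS] = {0};

    for (int i = 0; i < 4; i++)
    {
        int         p = partition_for_rel(hot_tables[i]);

        printf("table %u -> partition %d\n", (unsigned) hot_tables[i], p);
        hits[p]++;
    }
    for (int p = 0; p < NUM_PARTITIONS; p++)
        printf("partition %d serves %d of the hot tables\n", p, hits[p]);
    return 0;
}

Whatever hash is used, all traffic for one relation stays in one partition; the open question in the thread is whether real workloads spread their hot tables across enough partitions for that to help.
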
Greg Stark <[EMAIL PROTECTED]> writes:
> Hm, so hypothetically an insert or update on a table with 4 indexes which have
> been split into 4 partitions would need to touch each partition?
That would be the best case, actually, that each heavily-used lock ends
up in a different partition. As Simon ...

Tom Lane <[EMAIL PROTECTED]> writes:
> That's a fair point, and reinforces my instinct that having a large
> number of partitions would be a losing game. But you are mistaken to
> think that the number of hot-spot tables is the only limit on the number
> of usable partitions. It's the number of ...

Simon Riggs <[EMAIL PROTECTED]> writes:
> The imbalance across partitions would be a major issue because of the
> difficulty of selecting a well-distributed partitioning key. If you use
> the LOCKTAG, then operations on the heaviest hit tables would go to the
> same partitions continually for LockR ...

Simon Riggs <[EMAIL PROTECTED]> writes:
> The output you gave wasn't anything I recognize in the code. Assuming
> it's not already there, please can you share code you are using to find
> the evidence, even if it's just privately in some form?
See below. Also, the message I previously mentioned sho ...

On Wed, 2005-12-07 at 22:53 -0500, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > My view would be that the LockMgrLock is not relevant for all workloads,
> > but I want even more to be able to discuss whether it is, or is not, on
> > an accepted basis before discussions begin.
...

On Wed, 2005-12-07 at 22:46 -0500, Tom Lane wrote:
> Alvaro Herrera <[EMAIL PROTECTED]> writes:
> > Is hashtable overhead all that large? Each table could be made
> > initially size-of-current-table/N entries. One problem is that
> > currently the memory freed from a hashtable is not put back into shmem
> > freespace, is it?
...

Simon Riggs <[EMAIL PROTECTED]> writes:
> My view would be that the LockMgrLock is not relevant for all workloads,
> but I want even more to be able to discuss whether it is, or is not, on
> an accepted basis before discussions begin.
Certainly. I showed the evidence that it is currently a significant ...

Alvaro Herrera <[EMAIL PROTECTED]> writes:
> Is hashtable overhead all that large? Each table could be made
> initially size-of-current-table/N entries. One problem is that
> currently the memory freed from a hashtable is not put back into shmem
> freespace, is it?
Yeah; the problem is mainly th ...
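
The sizing idea from that exchange (start each partition's table at roughly current-size/N entries) is simple to write down; the names and numbers below are invented for illustration, and what the per-partition maximum should be is exactly the balance question raised earlier in the thread.

#include <stdio.h>

#define NUM_PARTITIONS 16               /* assumed partition count */

int
main(void)
{
    /* stand-in for something like max_locks_per_transaction * backends */
    long        total_lock_entries = 64 * 100;
    long        init_per_partition = total_lock_entries / NUM_PARTITIONS;

    /*
     * The initial allocation is split across the partitions so total shared
     * memory stays about the same as with one big table.  The per-partition
     * maximum is the open question: a skewed workload could want far more
     * than total/N entries in a single partition.
     */
    printf("%d partition tables, each starting at %ld entries (of %ld total)\n",
           NUM_PARTITIONS, init_per_partition, total_lock_entries);
    return 0;
}
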
Tom Lane wrote:
Interesting proposal.
> LockReleaseAll is also interesting from a performance point of view,
> since it executes at every transaction exit. If we divide PGPROC's
> PROCLOCK list into N lists then it will be very easy for LockReleaseAll
> to take only the partition locks it actually needs ...
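
That LockReleaseAll point can be sketched standalone as well, with all names assumed and a mutex again standing in for a partition LWLock: keep one held-lock list per partition in the per-process structure, and at transaction exit take only the guards for partitions whose list is non-empty.

#include <pthread.h>
#include <stddef.h>
#include <stdio.h>

#define NUM_LOCK_PARTITIONS 16          /* assumed partition count */

typedef struct HeldLock
{
    struct HeldLock *next;
    /* lock identity would live here */
} HeldLock;

typedef struct
{
    /* one held-lock list per partition, instead of a single PROCLOCK list */
    HeldLock   *heldLocks[NUM_LOCK_PARTITIONS];
} MyProcSketch;

static pthread_mutex_t partition_guard[NUM_LOCK_PARTITIONS];    /* LWLock stand-ins */

static void
lock_release_all(MyProcSketch *proc)
{
    for (int p = 0; p < NUM_LOCK_PARTITIONS; p++)
    {
        if (proc->heldLocks[p] == NULL)
            continue;                   /* nothing held here: skip this partition */

        pthread_mutex_lock(&partition_guard[p]);
        while (proc->heldLocks[p] != NULL)
        {
            HeldLock   *l = proc->heldLocks[p];

            proc->heldLocks[p] = l->next;
            /* the entry would be removed from the partition's shared table here */
        }
        pthread_mutex_unlock(&partition_guard[p]);
    }
}

int
main(void)
{
    MyProcSketch proc = {{NULL}};
    HeldLock    one = {NULL};

    for (int p = 0; p < NUM_LOCK_PARTITIONS; p++)
        pthread_mutex_init(&partition_guard[p], NULL);

    proc.heldLocks[3] = &one;           /* the transaction held one lock, in partition 3 */
    lock_release_all(&proc);            /* only partition 3's guard is taken */
    printf("partition 3 list is now %s\n", proc.heldLocks[3] ? "non-empty" : "empty");
    return 0;
}

A transaction that touched only a handful of objects then contends on only a handful of partition guards at exit, rather than sweeping one global lock at every commit.
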
On Wed, 2005-12-07 at 16:59 -0500, Tom Lane wrote:
> I've now seen actual evidence of that in
> profiling pgbench: using a modified backend that counts LWLock-related
> wait operations,
> So it seems it's time to start thinking about how to reduce contention
> for the LockMgrLock
You're right to ...

Tom,
This would also explain some things we've seen during benchmarking here
at EnterpriseDB. I like your idea and, as I'm on my way out, will
think about it a bit tonight.
Similarly, I don't see any forward-looking reason for keeping the
separate hash tables used for the LockMethodIds. Or, ...