On Tue, May 31, 2011 at 9:22 AM, Simon Riggs wrote:
> The basis for this is that weak locks do not conflict with each other,
> whereas strong locks conflict with both strong and weak locks.
> (There's a couple of special cases which I ignore for now).
> (Using Robert's description of strong/weak l
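As a minimal sketch, ignoring the same special cases the text does, the rule
above reduces to a predicate like this (the enum and function names are
invented for illustration, not taken from any patch):

#include <stdbool.h>

/*
 * Illustrative classification only: the special cases mentioned above
 * (and exactly where ShareUpdateExclusiveLock falls) are ignored here,
 * just as they are in the quoted text.
 */
typedef enum { WEAK_LOCK, STRONG_LOCK } LockClass;

static bool
lock_classes_conflict(LockClass a, LockClass b)
{
    /* weak locks never conflict with one another */
    if (a == WEAK_LOCK && b == WEAK_LOCK)
        return false;

    /* a strong lock conflicts with both strong and weak locks */
    return true;
}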
On Wed, May 25, 2011 at 3:35 PM, Tom Lane wrote:
> Simon Riggs writes:
>> On Wed, May 25, 2011 at 1:44 PM, Robert Haas wrote:
>>> On Wed, May 25, 2011 at 8:27 AM, Simon Riggs wrote:
Design seemed relatively easy from there: put local lock table in
shared memory for all procs. We then
On Fri, May 27, 2011 at 04:55:07PM -0400, Robert Haas wrote:
> When a strong lock is taken or released, we have to increment or
> decrement strong_lock_counts[fasthashpartition]. Here's the question:
> is that atomic? In other words, suppose that strong_lock_counts[42]
> starts out at 0, and two
Robert Haas writes:
> When a strong lock is taken or released, we have to increment or
> decrement strong_lock_counts[fasthashpartition]. Here's the question:
> is that atomic? In other words, suppose that strong_lock_counts[42]
> starts out at 0, and two backends both do ++strong_lock_counts[42
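The reason the question matters: a plain ++ on a shared counter is a separate
load, add, and store, so two concurrent increments can collapse into one.
Below is a minimal sketch of one possible remedy, using a GCC fetch-and-add
builtin purely for illustration; the array size and helper names are
assumptions, and the increment could just as well be performed while holding
a lock manager partition lock instead:

#include <stdint.h>

#define STRONG_LOCK_PARTITIONS 1024      /* assumed cell count */

/* lives in shared memory, one cell per partition */
static volatile uint32_t strong_lock_counts[STRONG_LOCK_PARTITIONS];

/*
 * A plain "++strong_lock_counts[i]" compiles to a load, an add, and a
 * store, so two backends can both load 0 and both store 1, leaving the
 * count at 1 instead of 2.  A fetch-and-add primitive closes that gap.
 */
static inline void
strong_lock_count_inc(int partition)
{
    __sync_fetch_and_add(&strong_lock_counts[partition], 1);
}

static inline void
strong_lock_count_dec(int partition)
{
    __sync_fetch_and_sub(&strong_lock_counts[partition], 1);
}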
On Tue, May 24, 2011 at 10:03 AM, Noah Misch wrote:
> On Tue, May 24, 2011 at 08:53:11AM -0400, Robert Haas wrote:
>> On Tue, May 24, 2011 at 5:07 AM, Noah Misch wrote:
>> > This drops the part about only transferring fast-path entries once when a
>> > strong_lock_counts cell transitions from zer
Simon Riggs wrote:
> On Tue, May 24, 2011 at 6:37 PM, Robert Haas wrote:
>
> >> That being said, it's a slight extra cost for all fast-path lockers to
> >> benefit
> >> the strong lockers, so I'm not prepared to guess whether it will pay off.
> >
> > Yeah.  Basically this entire idea is about tr
On Wed, May 25, 2011 at 8:56 AM, Simon Riggs wrote:
> On Wed, May 25, 2011 at 1:44 PM, Robert Haas wrote:
>> On Wed, May 25, 2011 at 8:27 AM, Simon Riggs wrote:
>>> I got a bit lost with the description of a potential solution. It
>>> seemed like you were unaware that there is a local lock and a
Simon Riggs writes:
> On Wed, May 25, 2011 at 1:44 PM, Robert Haas wrote:
>> On Wed, May 25, 2011 at 8:27 AM, Simon Riggs wrote:
>>> Design seemed relatively easy from there: put local lock table in
>>> shared memory for all procs. We then have a use_strong_lock at proc
>>> and at transaction le
On Wed, May 25, 2011 at 1:44 PM, Robert Haas wrote:
> On Wed, May 25, 2011 at 8:27 AM, Simon Riggs wrote:
>> I got a bit lost with the description of a potential solution. It
>> seemed like you were unaware that there is a local lock and a shared
>> lock table, maybe just me?
>
> No, I'm not unaw
On Wed, May 25, 2011 at 8:27 AM, Simon Riggs wrote:
> I got a bit lost with the description of a potential solution. It
> seemed like you were unaware that there is a local lock and a shared
> lock table, maybe just me?
No, I'm not unaware of the local lock table. The point of this
proposal is t
On Tue, May 24, 2011 at 6:37 PM, Robert Haas wrote:
>> That being said, it's a slight extra cost for all fast-path lockers to
>> benefit
>> the strong lockers, so I'm not prepared to guess whether it will pay off.
>
> Yeah. Basically this entire idea is about trying to make life easier
> for we
On Tue, May 24, 2011 at 12:34 PM, Noah Misch wrote:
> There's a potentially-unbounded delay between when the subject backend reads
> strong_lock_counts[] and when it sets its fast-path-used flag. (I didn't mean
> "not yet seen" in the sense that some memory load would not show the latest
> value.
On Tue, May 24, 2011 at 11:52:54AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 11:38 AM, Noah Misch wrote:
> >> Another random idea for optimization: we could have a lock-free array
> >> with one entry per backend, indicating whether any fast-path locks are
> >> present.  Before acquiring
On Tue, May 24, 2011 at 11:38 AM, Noah Misch wrote:
>> Another random idea for optimization: we could have a lock-free array
>> with one entry per backend, indicating whether any fast-path locks are
>> present. Before acquiring its first fast-path lock, a backend writes
>> a 1 into that array and
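A rough sketch of that per-backend flag array follows, with invented names
and sizes. The ordering shown (publish the flag, full barrier, then check the
strong-lock counter) is one plausible way to narrow the window discussed a few
messages earlier on this page; it is not necessarily what the patch does:

#include <stdbool.h>
#include <stdint.h>

#define MAX_BACKENDS           1024      /* assumed backend limit */
#define STRONG_LOCK_PARTITIONS 1024      /* assumed cell count */

/* both arrays live in shared memory */
static volatile uint8_t  fastpath_used[MAX_BACKENDS];
static volatile uint32_t strong_lock_counts[STRONG_LOCK_PARTITIONS];

/*
 * Weak locker: advertise the flag first, then check the strong-lock
 * counter, so a strong locker that scans the flag array after bumping
 * its counter cannot miss this backend entirely.
 */
static bool
can_use_fastpath(int my_backend_id, int partition)
{
    fastpath_used[my_backend_id] = 1;
    __sync_synchronize();                /* publish the flag before the check */
    return strong_lock_counts[partition] == 0;
}

/*
 * Strong locker: after bumping its counter, it only needs to visit the
 * backends whose flag is set.
 */
static void
visit_fastpath_backends(void (*transfer_from)(int backend_id))
{
    for (int backend = 0; backend < MAX_BACKENDS; backend++)
        if (fastpath_used[backend])
            transfer_from(backend);
}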
On Tue, May 24, 2011 at 10:35:23AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 10:03 AM, Noah Misch wrote:
> > Let's see if I understand the risk better now: the new system will handle
> > lock
> > load better, but when it does hit a limit, understanding why that happened
> > will be more
On Tue, May 24, 2011 at 10:03 AM, Noah Misch wrote:
> Let's see if I understand the risk better now: the new system will handle lock
> load better, but when it does hit a limit, understanding why that happened
> will be more difficult. Good point. No silver-bullet ideas come to mind for
> avoidi
On Tue, May 24, 2011 at 08:53:11AM -0400, Robert Haas wrote:
> On Tue, May 24, 2011 at 5:07 AM, Noah Misch wrote:
> > This drops the part about only transferring fast-path entries once when a
> > strong_lock_counts cell transitions from zero to one.
>
> Right: that's because I don't think that's
On Tue, May 24, 2011 at 5:07 AM, Noah Misch wrote:
> This drops the part about only transferring fast-path entries once when a
> strong_lock_counts cell transitions from zero to one.
Right: that's because I don't think that's what we want to do. I
don't think we want to transfer all per-backend
On Mon, May 23, 2011 at 09:15:27PM -0400, Robert Haas wrote:
> On Fri, May 13, 2011 at 4:16 PM, Noah Misch wrote:
> >         if (level >= ShareUpdateExclusiveLock)
> >                 ++strong_lock_counts[my_strong_lock_count_partition]
> >                 sfence
> >                 if (strong_lock_c
On Fri, May 13, 2011 at 4:16 PM, Noah Misch wrote:
> if (level >= ShareUpdateExclusiveLock)
> ++strong_lock_counts[my_strong_lock_count_partition]
> sfence
> if (strong_lock_counts[my_strong_lock_count_partition] == 1)
> /*
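Rendered as C, the quoted fragment looks roughly like the sketch below.
Everything here (the type, the partition hash, the helper functions) is a
stand-in for illustration; the fetch-and-add's return value takes the place of
the pseudocode's separate increment, sfence, and re-read (the builtin is a
full barrier), which also sidesteps the atomicity question raised earlier in
the thread:

#include <stdint.h>

#define STRONG_LOCK_PARTITIONS 1024
#define SHARE_UPDATE_EXCLUSIVE 4         /* assumed spot in the lock-level ordering */

typedef struct LockTag { uint32_t dbid; uint32_t relid; } LockTag;   /* stand-in */

static volatile uint32_t strong_lock_counts[STRONG_LOCK_PARTITIONS];

/* placeholders for machinery not shown in the quoted fragment */
static int  lock_partition(const LockTag *tag) { return (tag->dbid ^ tag->relid) % STRONG_LOCK_PARTITIONS; }
static void transfer_fastpath_entries(int partition) { (void) partition; }
static void acquire_lock_normally(const LockTag *tag, int level) { (void) tag; (void) level; }
static void acquire_lock_fastpath(const LockTag *tag, int level) { (void) tag; (void) level; }

static void
lock_acquire_sketch(const LockTag *tag, int level)
{
    int partition = lock_partition(tag);

    if (level >= SHARE_UPDATE_EXCLUSIVE)
    {
        /* full-barrier atomic increment, subsuming the pseudocode's sfence */
        uint32_t prev = __sync_fetch_and_add(&strong_lock_counts[partition], 1);

        if (prev == 0)                   /* the cell just went 0 -> 1 */
            transfer_fastpath_entries(partition);
        acquire_lock_normally(tag, level);
    }
    else if (strong_lock_counts[partition] == 0)
        acquire_lock_fastpath(tag, level);    /* no strong locker in this partition */
    else
        acquire_lock_normally(tag, level);
}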
On Sat, May 14, 2011 at 1:33 PM, Jeff Janes wrote:
> Would that risk be substantially worse than it currently is? If a
> backend goes into the tank while holding access shared locks, it will
> still block access exclusive locks until it recovers. And those
> queued access exclusive locks will bl
On Fri, May 13, 2011 at 5:55 PM, Robert Haas wrote:
> On Fri, May 13, 2011 at 4:16 PM, Noah Misch wrote:
>> I wonder if, instead, we could signal all backends at
>> marker 1 to dump the applicable parts of their local (memory) lock tables to
>> files. Or to another shared memory region, if that
Robert Haas writes:
> On Fri, May 13, 2011 at 11:05 PM, Noah Misch wrote:
>> Incidentally, I used the term "local lock" because I assumed fast-path locks
>> would still go through the lock manager far enough to populate the local lock
>> table. But there may be no reason to do so.
> Oh, good po
On Fri, May 13, 2011 at 11:05 PM, Noah Misch wrote:
> Incidentally, I used the term "local lock" because I assumed fast-path locks
> would still go through the lock manager far enough to populate the local lock
> table. But there may be no reason to do so.
Oh, good point. I think we probably WO
On Fri, May 13, 2011 at 08:55:34PM -0400, Robert Haas wrote:
> On Fri, May 13, 2011 at 4:16 PM, Noah Misch wrote:
> > If I'm understanding correctly, your pseudocode would look roughly like
> > this:
> >
> >         if (level >= ShareUpdateExclusiveLock)
> I think ShareUpdateExclusiveLock should
On Fri, May 13, 2011 at 4:16 PM, Noah Misch wrote:
> The key is putting a rapid hard stop to all fast-path lock acquisitions and
> then reconstructing a valid global picture of the affected lock table regions.
> Your 1024-way table of strong lock counts sounds promising. (Offhand, I do
> think th
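To make "reconstructing a valid global picture" concrete, one way the transfer
step might look is sketched below, again with invented structures, sizes, and
names; per-backend synchronization during the scan is deliberately elided, and
the real patch's layout will differ:

#include <stdint.h>

#define STRONG_LOCK_PARTITIONS 1024
#define MAX_BACKENDS           1024      /* assumed backend limit */
#define FASTPATH_SLOTS         16        /* assumed per-backend slot count */

typedef struct FastPathSlot
{
    uint32_t relid;                      /* 0 means "slot unused" */
    int      mode;
} FastPathSlot;

/* per-backend fast-path entries, in shared memory */
static FastPathSlot fastpath_slots[MAX_BACKENDS][FASTPATH_SLOTS];

static int  lock_partition_for_rel(uint32_t relid) { return relid % STRONG_LOCK_PARTITIONS; }
static void insert_into_shared_lock_table(int backend, uint32_t relid, int mode)
            { (void) backend; (void) relid; (void) mode; }   /* placeholder */

/*
 * Called by a strong locker after it has bumped strong_lock_counts for
 * this partition: any weak lock that hashes here and still lives only
 * in a per-backend array is copied into the ordinary shared lock table
 * so that normal conflict checking sees it.  (The backend that read the
 * old counter value just before this scan is the "unbounded delay"
 * hazard discussed earlier in the thread.)
 */
static void
transfer_fastpath_entries(int partition)
{
    for (int backend = 0; backend < MAX_BACKENDS; backend++)
        for (int slot = 0; slot < FASTPATH_SLOTS; slot++)
        {
            FastPathSlot *fp = &fastpath_slots[backend][slot];

            if (fp->relid != 0 && lock_partition_for_rel(fp->relid) == partition)
            {
                insert_into_shared_lock_table(backend, fp->relid, fp->mode);
                fp->relid = 0;           /* the entry now lives in the main table */
            }
        }
}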
On Fri, May 13, 2011 at 09:07:34AM -0400, Robert Haas wrote:
> Actually, it's occurred to me from time to time that it would be nice
> to eliminate ACCESS SHARE (and while I'm dreaming, maybe ROW SHARE and
> ROW EXCLUSIVE) locks for tables as well. Under normal operating
> conditions (i.e. no DDL