On Jul18, 2011, at 04:36 , Robert Haas wrote:
> On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug wrote:
>>> I don't want to fiddle with your git repo, but if you attach a patch
>>> that applies to the master branch I'll give it a spin if I have time.
>>
>> Patch attached.
>>
>> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
>> spin lock instead of "locked xadd" to increment the shared counters.
On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug wrote:
>> I don't want to fiddle with your git repo, but if you attach a patch
>> that applies to the master branch I'll give it a spin if I have time.
>
> Patch attached.
>
> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
> spin lock instead of "locked xadd" to increment the shared counters.
>
On Jul13, 2011, at 22:04 , Robert Haas wrote:
> On Jul 12, 2011, at 8:10 PM, Florian Pflug wrote:
>> I wonder if clearing the waiters-present bit only upon clearing the
>> queue completely is necessary for correctness. Wouldn't it be OK
>> to clear the bit after waking up at least one locker? That
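To make the question concrete, here is a rough standalone sketch of a release path for a lock word that carries a waiters-present bit next to a wait queue. All names here (ToyLWLock, queue_pop, wake_up) are invented for illustration; this is only the general shape of the problem being discussed, not code from the patch.

    #include <stdint.h>

    struct WaitQueue;               /* some lock-free queue of waiters (assumed) */
    struct Waiter;

    extern struct Waiter *queue_pop(struct WaitQueue *q);   /* assumed API */
    extern void wake_up(struct Waiter *w);                   /* assumed API */

    #define LOCK_HELD    ((uint32_t) 1 << 0)
    #define WAITERS_BIT  ((uint32_t) 1 << 1)

    /* Invented stand-in for the lock under discussion. */
    typedef struct
    {
        volatile uint32_t state;    /* LOCK_HELD | WAITERS_BIT */
        struct WaitQueue *queue;
    } ToyLWLock;

    static void
    toy_release(ToyLWLock *lock)
    {
        /* Drop the lock bit; the old value tells us whether waiters may exist. */
        uint32_t old = __sync_fetch_and_and(&lock->state, ~LOCK_HELD);

        if (old & WAITERS_BIT)
        {
            struct Waiter *w = queue_pop(lock->queue);

            if (w != NULL)
                wake_up(w);

            /*
             * The open question from the mail above: may WAITERS_BIT be
             * cleared here, after waking at least one waiter, or only once
             * queue_pop() reports the queue empty?  Clearing it too eagerly
             * risks a later releaser skipping the queue while backends are
             * still parked on it.
             */
            if (w == NULL)          /* conservative variant: clear only when drained */
                __sync_fetch_and_and(&lock->state, ~WAITERS_BIT);
        }
    }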
On Jul 12, 2011, at 8:10 PM, Florian Pflug wrote:
> On Jul13, 2011, at 00:10 , Robert Haas wrote:
>> On Jul 12, 2011, at 8:03 AM, Florian Pflug wrote:
>>> The algorithm is quite straightforward if one assumes a lock-free
>>> implementation of a queue (more on that below).
>>
>> This is similar to the CAS-based LWLocks I played around with, though I
>> didn't use a lock-free queue.
On Jul13, 2011, at 00:10 , Robert Haas wrote:
> On Jul 12, 2011, at 8:03 AM, Florian Pflug wrote:
>> The algorithm is quite straightforward if one assumes a lock-free
>> implementation of a queue (more on that below).
>
> This is similar to the CAS-based LWLocks I played around with, though
> I didn't use a lock-free queue.
On Jul 12, 2011, at 8:03 AM, Florian Pflug wrote:
> The algorithm is quite straightforward if one assumes a lock-free
> implementation of a queue (more on that below).
This is similar to the CAS-based LWLocks I played around with, though I didn't
use a lock-free queue. I think I probably need
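For readers following along, "CAS-based LWLocks" means collapsing the lock's bookkeeping into a single state word that is updated with compare-and-swap instead of being protected by a spinlock. A minimal standalone sketch of that core idea, using invented names and GCC's __sync builtins, not the actual experimental patch:

    #include <stdint.h>
    #include <stdbool.h>

    #define EXCLUSIVE_FLAG  ((uint32_t) 1 << 31)    /* high bit: exclusive holder */

    typedef struct
    {
        volatile uint32_t state;    /* EXCLUSIVE_FLAG | count of shared holders */
    } CasLWLock;

    /* One-shot attempt at a shared acquire; a real lock would retry and
     * eventually sleep on a wait queue instead of just reporting failure. */
    static bool
    cas_try_acquire_shared(CasLWLock *lock)
    {
        uint32_t old = lock->state;

        if (old & EXCLUSIVE_FLAG)
            return false;                           /* held exclusively */

        /* Bump the shared count iff nothing changed underneath us. */
        return __sync_bool_compare_and_swap(&lock->state, old, old + 1);
    }

    /* One-shot attempt at an exclusive acquire. */
    static bool
    cas_try_acquire_exclusive(CasLWLock *lock)
    {
        return __sync_bool_compare_and_swap(&lock->state, 0, EXCLUSIVE_FLAG);
    }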
On Jul7, 2011, at 03:35 , Robert Haas wrote:
> Some poking around suggests that the problem isn't that
> spinlocks are routinely contended - it seems that we nearly always get
> the spinlock right off the bat. I'm wondering if the problem may be
> not so much that we have continuous spinlock contention
On Jul8, 2011, at 22:27 , Stefan Kaltenbrunner wrote:
> On 07/08/2011 04:21 PM, Tom Lane wrote:
>> Florian Pflug writes:
>>> Patch attached.
>>
>>> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
>>> spin lock instead of "locked xadd" to increment the shared counters.
>
On 07/08/2011 04:21 PM, Tom Lane wrote:
> Florian Pflug writes:
>> Patch attached.
>
>> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
>> spin lock instead of "locked xadd" to increment the shared counters.
>
> That's already sufficient reason to reject the patch. Not everyone
> uses gcc, let alone very recent versions
On Jul8, 2011, at 16:21 , Tom Lane wrote:
> Florian Pflug writes:
>> Patch attached.
>
>> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
>> spin lock instead of "locked xadd" to increment the shared counters.
>
> That's already sufficient reason to reject the patch.
Florian Pflug writes:
> Patch attached.
> Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
> spin lock instead of "locked xadd" to increment the shared counters.
That's already sufficient reason to reject the patch. Not everyone
uses gcc, let alone very recent versions
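For context, the GCC 4.1 requirement is about the __sync_* atomic builtins, which that release introduced; __sync_fetch_and_add compiles to a single locked xadd on x86, and without it the counter has to be guarded by a lock instead. A rough sketch of that kind of compile-time fallback (illustrative only; the CounterSlot type and the spin_acquire/spin_release stand-ins are invented, not the patch's code):

    #include <stdint.h>

    /* Invented example type: one slot of a partitioned shared counter. */
    typedef struct
    {
        volatile uint32_t count;
        volatile int      mutex;    /* only used by the fallback path below */
    } CounterSlot;

    #if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1))

    /* GCC >= 4.1: __sync_fetch_and_add becomes "lock xadd" on x86/x86-64. */
    static inline void
    counter_add(CounterSlot *slot, uint32_t delta)
    {
        __sync_fetch_and_add(&slot->count, delta);
    }

    #else

    /* Older compilers: no atomic builtin, so guard the counter with a lock
     * (these are stand-ins for spinlock primitives, not PostgreSQL's names). */
    extern void spin_acquire(volatile int *lock);
    extern void spin_release(volatile int *lock);

    static inline void
    counter_add(CounterSlot *slot, uint32_t delta)
    {
        spin_acquire(&slot->mutex);
        slot->count += delta;
        spin_release(&slot->mutex);
    }

    #endif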
On Jul7, 2011, at 18:09 , Robert Haas wrote:
> On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug wrote:
>> In effect, the resulting thing is an LWLock with a partitioned shared
>> counter. The partition one backend operates on for shared locks is
>> determined by its backend id.
>>
>> I've added the
On Thu, Jul 7, 2011 at 5:54 AM, Florian Pflug wrote:
> In effect, the resulting thing is an LWLock with a partitioned shared
> counter. The partition one backend operates on for shared locks is
> determined by its backend id.
>
> I've added the implementation to the lock benchmarking tool at
> ht
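To make the partitioned-counter idea concrete: rather than every backend bumping one shared-holder count (and bouncing its cache line between CPUs), each backend increments a slot selected by its backend id, and an exclusive acquirer sets its flag and then waits for every slot to drain. A simplified standalone sketch of the shared fast path; all names and the partition count are invented for illustration and this is not the benchmarked implementation:

    #include <stdint.h>
    #include <stdbool.h>

    #define NUM_COUNTER_PARTITIONS 16       /* illustrative; not the patch's value */

    /* Pad each slot to 64 bytes so adjacent backends' counters land on
     * different cache lines (assuming the array itself is line-aligned). */
    typedef struct
    {
        volatile uint32_t shared_count;
        char              pad[64 - sizeof(uint32_t)];
    } CounterPartition;

    typedef struct
    {
        volatile uint32_t exclusive;                     /* 0 or 1 */
        CounterPartition  part[NUM_COUNTER_PARTITIONS];
    } PartitionedLock;

    /*
     * Fast path of a shared acquire: announce ourselves in the slot picked by
     * our backend id, then check for an exclusive holder.  The __sync builtin
     * is a full barrier, so an exclusive locker (which sets "exclusive" first
     * and then waits for all partitions to drain) cannot miss us.
     */
    static bool
    shared_try_acquire(PartitionedLock *lock, int backend_id)
    {
        CounterPartition *p = &lock->part[backend_id % NUM_COUNTER_PARTITIONS];

        __sync_fetch_and_add(&p->shared_count, 1);

        if (lock->exclusive)
        {
            /* Conflict: back out and fall back to the slow (blocking) path. */
            __sync_fetch_and_sub(&p->shared_count, 1);
            return false;
        }
        return true;
    }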
On Jul7, 2011, at 03:35 , Robert Haas wrote:
> On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas wrote:
>> On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote:
>>> On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
On Thu, Jun 23, 2011 at 11:42 AM, Robert Haas wrote:
> On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote:
>> On Jun12, 2011, at 23:39 , Robert Haas wrote:
>>> So, the majority (60%) of the excess spinning appears to be due to
>>> SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
>>
On Tue, Jun 28, 2011 at 8:11 PM, Florian Pflug wrote:
>> I wrote a little script to show to reorganize this data in a
>> possibly-easier-to-understand format - ordering each column from
>> lowest to highest, and showing each algorithm as a multiple of the
>> cheapest value for that column:
>
> If
On Jun28, 2011, at 22:18 , Robert Haas wrote:
> On Tue, Jun 28, 2011 at 2:33 PM, Florian Pflug wrote:
>> [ testing of various spinlock implementations ]
>
> I set T=30 and N="1 2 4 8 16 32" and tried this out on a 32-core
> loaner from Nate Boley:
Cool, thanks!
> 100 counter increments per cycle
On Tue, Jun 28, 2011 at 5:55 PM, Florian Pflug wrote:
> On Jun28, 2011, at 23:48 , Robert Haas wrote:
>> On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure wrote:
>>> On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote:
user-32:
none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
On Jun28, 2011, at 23:48 , Robert Haas wrote:
> On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure wrote:
>> On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote:
>>> user-32:
>>> none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
>>
>> I may not be following
On Tue, Jun 28, 2011 at 5:33 PM, Merlin Moncure wrote:
> On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote:
>> user-32:
>> none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
>
> I may not be following all this correctly, but doesn't this suggest a
> huge
On Tue, Jun 28, 2011 at 3:18 PM, Robert Haas wrote:
> user-32:
> none(1.0),atomicinc(14.4),pg_lwlock_cas(22.1),cmpxchng(41.2),pg_lwlock(588.2),spin(1264.7)
I may not be following all this correctly, but doesn't this suggest a
huge potential upside for the cas based patch you posted upthread when
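(For scale, reading the figures quoted above as multiples of the no-op baseline: at 32 concurrent users the stock pg_lwlock comes in at 588.2x and atomicinc at 14.4x, so the atomic-increment variant is roughly 40 times cheaper in this micro-benchmark, 588.2 / 14.4 ≈ 41.)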
On Tue, Jun 28, 2011 at 2:33 PM, Florian Pflug wrote:
> [ testing of various spinlock implementations ]
I set T=30 and N="1 2 4 8 16 32" and tried this out on a 32-core
loaner from Nate Boley:
100 counter increments per cycle
worker 1 2 4 8 16 32
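The benchmark being described runs each worker in a loop for T seconds: take the lock variant under test, bump a shared counter 100 times, release, and count completed cycles. Below is a self-contained sketch of that inner loop, reconstructed from the description above rather than taken from the actual tool; the pthread mutex merely stands in for whichever lock variant ("spin", "pg_lwlock", "atomicinc", ...) is under test.

    #include <pthread.h>
    #include <stdint.h>

    #define INCREMENTS_PER_CYCLE 100

    static pthread_mutex_t test_lock = PTHREAD_MUTEX_INITIALIZER;
    static volatile uint64_t shared_counter;
    static volatile int keep_running = 1;   /* cleared by a timer after T seconds */

    /* One worker: acquire the lock variant under test, bump the counter 100
     * times, release, repeat; the number of completed cycles is the score. */
    static void *
    worker(void *arg)
    {
        uint64_t cycles = 0;

        (void) arg;
        while (keep_running)
        {
            pthread_mutex_lock(&test_lock);     /* swap in the variant under test */
            for (int i = 0; i < INCREMENTS_PER_CYCLE; i++)
                shared_counter++;
            pthread_mutex_unlock(&test_lock);
            cycles++;
        }
        return (void *) (uintptr_t) cycles;
    }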
On Jun23, 2011, at 23:40 , Robert Haas wrote:
>>> I tried rewriting the LWLocks using CAS. It actually seems to make
>>> things slightly worse on the tests I've done so far, perhaps because I
>>> didn't make it respect spins_per_delay. Perhaps fetch-and-add would
>>> be better, but I'm not holding my breath.
On Sat, Jun 25, 2011 at 8:26 PM, Greg Stark wrote:
> On Thu, Jun 23, 2011 at 4:42 PM, Robert Haas wrote:
>> ProcArrayLock looks like a tougher nut to crack - there's simply no
>> way, with the system we have right now, that you can take a snapshot
>> without locking the list of running processes.
On Thu, Jun 23, 2011 at 4:42 PM, Robert Haas wrote:
> ProcArrayLock looks like a tougher nut to crack - there's simply no
> way, with the system we have right now, that you can take a snapshot
> without locking the list of running processes. I'm not sure what to
> do about that, but we're probably going to have to
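To see why, consider a toy model of snapshot-taking (a standalone illustration, not PostgreSQL's GetSnapshotData, with a pthread read-write lock standing in for ProcArrayLock): the snapshot must capture the set of in-progress transaction ids as of a single instant, so the reader has to hold off concurrent additions and removals for the duration of the copy.

    #include <pthread.h>
    #include <stdint.h>
    #include <string.h>

    #define MAX_BACKENDS 64

    /* Toy "proc array": which transaction ids are currently in progress. */
    static pthread_rwlock_t proc_array_lock = PTHREAD_RWLOCK_INITIALIZER;
    static uint32_t running_xids[MAX_BACKENDS];
    static int      num_running;

    typedef struct
    {
        uint32_t xids[MAX_BACKENDS];
        int      count;
    } ToySnapshot;

    /*
     * Every snapshot takes the lock in shared mode; every transaction
     * start/end takes it exclusively.  That shared acquisition on a very hot
     * path is what makes this lock "a tougher nut to crack".
     */
    static void
    take_snapshot(ToySnapshot *snap)
    {
        pthread_rwlock_rdlock(&proc_array_lock);
        snap->count = num_running;
        memcpy(snap->xids, running_xids, num_running * sizeof(uint32_t));
        pthread_rwlock_unlock(&proc_array_lock);
    }

    /* Transaction end removes an xid from the array under the exclusive lock. */
    static void
    end_transaction(uint32_t xid)
    {
        pthread_rwlock_wrlock(&proc_array_lock);
        for (int i = 0; i < num_running; i++)
        {
            if (running_xids[i] == xid)
            {
                running_xids[i] = running_xids[--num_running];
                break;
            }
        }
        pthread_rwlock_unlock(&proc_array_lock);
    }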
On Thu, Jun 23, 2011 at 5:35 PM, Florian Pflug wrote:
>> Well, I'm sure there is some effect, but my experiments seem to
>> indicate that it's not a very important one. Again, please feel free
>> to provide contrary evidence. I think the basic issue is that - in
>> the best possible case - padding
On Jun23, 2011, at 22:15 , Robert Haas wrote:
> On Thu, Jun 23, 2011 at 2:34 PM, Florian Pflug wrote:
>> It seems hard to believe that there isn't some effect of two locks
>> sharing a cache line. There are architectures (even some of the
>> Intel architectures, I believe) where cache lines are 32 bytes
On Thu, Jun 23, 2011 at 2:34 PM, Florian Pflug wrote:
> It seems hard to believe that there isn't some effect of two locks
> sharing a cache line. There are architectures (even some of the
> Intel architectures, I believe) where cache lines are 32 bytes, though -
> so measuring this certainly isn't
On Thu, Jun 23, 2011 at 1:34 PM, Florian Pflug wrote:
> On Jun23, 2011, at 17:42 , Robert Haas wrote:
>> On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote:
>>> On Jun12, 2011, at 23:39 , Robert Haas wrote:
So, the majority (60%) of the excess spinning appears to be due to
SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
On Thu, Jun 23, 2011 at 12:19 PM, Heikki Linnakangas wrote:
> On 23.06.2011 18:42, Robert Haas wrote:
>> ProcArrayLock looks like a tougher nut to crack - there's simply no
>> way, with the system we have right now, that you can take a snapshot
>> without locking the list of running processes. I'm not sure what to
>> do about that, but we're probably going to have to
On Jun23, 2011, at 17:42 , Robert Haas wrote:
> On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote:
>> On Jun12, 2011, at 23:39 , Robert Haas wrote:
>>> So, the majority (60%) of the excess spinning appears to be due to
>>> SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
>>
>> Hm,
On 23.06.2011 18:42, Robert Haas wrote:
ProcArrayLock looks like a tougher nut to crack - there's simply no
way, with the system we have right now, that you can take a snapshot
without locking the list of running processes. I'm not sure what to
do about that, but we're probably going to have to
On Wed, Jun 22, 2011 at 5:43 PM, Florian Pflug wrote:
> On Jun12, 2011, at 23:39 , Robert Haas wrote:
>> So, the majority (60%) of the excess spinning appears to be due to
>> SInvalReadLock. A good chunk are due to ProcArrayLock (25%).
>
> Hm, sizeof(LWLock) is 24 on X86-64, making sizeof(LWLockPadded)
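This ties back to the cache-line question above: the stride of the lock array decides whether neighbouring locks share a line, and PostgreSQL pads each LWLock up to a fixed size via the LWLockPadded union for exactly that reason. A minimal standalone illustration of the same trick; the ToyLWLock layout below is a rough stand-in with invented field names, not the real struct definition.

    #include <stdint.h>
    #include <stdio.h>

    /* Stand-in for a lock struct that happens to be 24 bytes on x86-64. */
    typedef struct
    {
        volatile char mutex;       /* one-byte spinlock, as on x86 */
        char          exclusive;   /* # of exclusive holders (0 or 1) */
        int32_t       shared;      /* # of shared holders */
        void         *head;        /* wait queue head */
        void         *tail;        /* wait queue tail */
    } ToyLWLock;

    #define PADDED_SIZE 32          /* power of two >= sizeof(ToyLWLock) */

    /* Pad every lock to a fixed stride; with a 32-byte stride two adjacent
     * locks still share a 64-byte cache line, which is the effect being
     * debated in the messages above. */
    typedef union
    {
        ToyLWLock lock;
        char      pad[PADDED_SIZE];
    } ToyLWLockPadded;

    int
    main(void)
    {
        printf("sizeof(ToyLWLock) = %zu, padded stride = %zu\n",
               sizeof(ToyLWLock), sizeof(ToyLWLockPadded));
        return 0;
    }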