On Thu, Sep 01, 2016 at 01:51:34PM +0200, Peter Zijlstra wrote:
> On Thu, Sep 01, 2016 at 01:04:26PM +0200, Manfred Spraul wrote:
>
> > >So for both power and arm64, you can in fact model spin_unlock_wait()
> > >as LOCK+UNLOCK.
>
> > Is this consensus?
>
> Dunno, but it was done to fix your earlier locking scheme and both
> architectures where it matters have done so. [...]

On Thu, Sep 01, 2016 at 01:04:26PM +0200, Manfred Spraul wrote:
> >So for both power and arm64, you can in fact model spin_unlock_wait()
> >as LOCK+UNLOCK.
> Is this consensus?
Dunno, but it was done to fix your earlier locking scheme and both
architectures where it matters have done so.
So I [...]

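To pin down what "model spin_unlock_wait() as LOCK+UNLOCK" means, here is a
minimal sketch (an illustration of the semantics under discussion, not the
actual arm64 or power implementation):

#include <linux/spinlock.h>

/* Sketch: unlock_wait with the ordering of a LOCK+UNLOCK pair. */
static inline void model_spin_unlock_wait(spinlock_t *lock)
{
        spin_lock(lock);        /* waits for any current holder; ACQUIRE */
        spin_unlock(lock);      /* RELEASE */
}
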
On Thu, Sep 01, 2016 at 01:04:26PM +0200, Manfred Spraul wrote:
> If I understand it right, the rules are:
> 1. spin_unlock_wait() must behave like spin_lock();spin_unlock();
> 2. spin_is_locked() must behave like
>    spin_trylock() ? (spin_unlock(), FALSE) : TRUE
I don't think spin_is_locked is as [...]

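Rule 2 written out as code, for clarity (a hypothetical helper to pin down
the semantics, not a kernel API):

#include <linux/spinlock.h>

static inline bool model_spin_is_locked(spinlock_t *lock)
{
        if (spin_trylock(lock)) {
                /* The lock was free and we now own it: not locked. */
                spin_unlock(lock);
                return false;
        }
        return true;    /* someone else holds it */
}
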
Hi,

On 09/01/2016 10:44 AM, Peter Zijlstra wrote:
> On Wed, Aug 31, 2016 at 08:32:18PM +0200, Manfred Spraul wrote:
> > On 08/31/2016 06:40 PM, Will Deacon wrote:
> > > The litmus test then looks a bit like:
> > >
> > > CPUm:
> > > LOCK(x)
> > > smp_mb();
> > > RyAcq=0
> > >
> > > CPUn:
> > > Wy=1
> > > smp_mb();
> > > UNLOCK_WAIT(x)
> > Correct.
> > > which I think [...]

On Wed, Aug 31, 2016 at 08:32:18PM +0200, Manfred Spraul wrote:
> On 08/31/2016 06:40 PM, Will Deacon wrote:
> >The litmus test then looks a bit like:
> >
> >CPUm:
> >
> >LOCK(x)
> >smp_mb();
> >RyAcq=0
> >
> >
> >CPUn:
> >
> >Wy=1
> >smp_mb();
> >UNLOCK_WAIT(x)
> Correct.
> >
> >which I think can [...]

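The litmus test can be modelled in portable userspace C11 with a
test-and-set lock (a sketch only; the kernel's queued spinlocks and the
real spin_unlock_wait() are considerably more subtle):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x;            /* the lock: 0 = free, 1 = held */
static atomic_int y;
static int r = -1;              /* CPUm's read of y */

static void *cpu_m(void *arg)
{
        while (atomic_exchange_explicit(&x, 1, memory_order_acquire))
                ;                                               /* LOCK(x) */
        atomic_thread_fence(memory_order_seq_cst);              /* smp_mb() */
        r = atomic_load_explicit(&y, memory_order_acquire);     /* RyAcq */
        atomic_store_explicit(&x, 0, memory_order_release);     /* let CPUn finish */
        return NULL;
}

static void *cpu_n(void *arg)
{
        atomic_store_explicit(&y, 1, memory_order_relaxed);     /* Wy=1 */
        atomic_thread_fence(memory_order_seq_cst);              /* smp_mb() */
        while (atomic_load_explicit(&x, memory_order_acquire))
                ;                                       /* UNLOCK_WAIT(x) */
        return NULL;
}

int main(void)
{
        pthread_t m, n;

        pthread_create(&m, NULL, cpu_m, NULL);
        pthread_create(&n, NULL, cpu_n, NULL);
        pthread_join(m, NULL);
        pthread_join(n, NULL);
        /*
         * The outcome the barriers are meant to forbid: CPUn's
         * UNLOCK_WAIT saw the lock held (so CPUm's LOCK came first),
         * yet CPUm still read y == 0.
         */
        printf("r = %d\n", r);
        return 0;
}
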
On 08/31/2016 06:40 PM, Will Deacon wrote:
> I'm struggling with this example. We have these locks:
>
>   &sem->lock
>   &sma->sem_base[0...sma->sem_nsems].lock
>   &sma->sem_perm.lock
>
> a condition variable:
>
>   sma->complex_mode
>
> and a new barrier:
>
>   smp_mb__after_spin_lock()
>
> For simplicity, we [...]

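For reference, a condensed sketch of the ipc/sem.c fast path these objects
belong to (simplified; the types are local to ipc/sem.c, and the function
name and early-return convention here are illustrative):

/* Sketch of the per-semaphore fast path, simplified from ipc/sem.c. */
static int sem_lock_fastpath(struct sem_array *sma, int semnum)
{
        struct sem *sem = &sma->sem_base[semnum];

        if (!READ_ONCE(sma->complex_mode)) {
                spin_lock(&sem->lock);
                /*
                 * The proposed barrier: make our lock acquisition
                 * visible to a spin_unlock_wait() on another CPU
                 * before we re-check complex_mode.
                 */
                smp_mb__after_spin_lock();
                if (!READ_ONCE(sma->complex_mode))
                        return semnum;          /* fast path won */
                spin_unlock(&sem->lock);
        }
        return -1;      /* fall back to the global sma->sem_perm.lock */
}
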
On Wed, Aug 31, 2016 at 05:40:49PM +0200, Peter Zijlstra wrote:
> On Wed, Aug 31, 2016 at 06:59:07AM +0200, Manfred Spraul wrote:
>
> > The barrier must ensure that taking the spinlock (as observed by another cpu
> > with spin_unlock_wait()) and a following read are ordered.
> >
> > start condition: sma->complex_mode = false; [...]

On Wed, Aug 31, 2016 at 06:59:07AM +0200, Manfred Spraul wrote:
> The barrier must ensure that taking the spinlock (as observed by another cpu
> with spin_unlock_wait()) and a following read are ordered.
>
> start condition: sma->complex_mode = false;
>
> CPU 1:
> spin_lock(&sem->lock); [...]

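The rest of the scenario, reconstructed from the surrounding discussion
(CPU 2's column is inferred from the thread, not quoted from the mail):

start condition: sma->complex_mode == false

CPU 1:
spin_lock(&sem->lock);
smp_mb__after_spin_lock();
read sma->complex_mode

CPU 2:
sma->complex_mode = true;
smp_mb();
spin_unlock_wait(&sem->lock);

The outcome the barriers must forbid: CPU 1 reads complex_mode == false
while CPU 2's spin_unlock_wait() also observes &sem->lock as free, so
neither side notices the other.
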
On 08/29/2016 03:44 PM, Peter Zijlstra wrote:
> If you add a barrier, the Changelog had better be clear. And I'm still
> not entirely sure I get what exactly this barrier should do, nor why it
> defaults to a full smp_mb. If it does what I suspect it should do, only
> PPC and ARM64 need the barrier.
The barrier [...]

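One way to arrange what Peter suggests (a sketch, modelled on the usual
per-architecture override pattern; the name and placement are illustrative):

/* Generic fallback: no-op unless the architecture overrides it. */
#ifndef smp_mb__after_spin_lock
#define smp_mb__after_spin_lock()       do { } while (0)
#endif

/*
 * PPC and ARM64, where acquiring a spinlock is weaker than a full
 * barrier, would each define:
 *
 *      #define smp_mb__after_spin_lock()       smp_mb()
 */
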
On Mon, Aug 29, 2016 at 02:54:54PM +0200, Manfred Spraul wrote:
> Hi Peter,
>
> On 08/29/2016 12:48 PM, Peter Zijlstra wrote:
> >On Sun, Aug 28, 2016 at 01:56:13PM +0200, Manfred Spraul wrote:
> >>Right now, the spinlock machinery tries to guarantee barriers even for
> >>unorthodox locking cases, which ends up as a constant stream of updates
> >>as the architectures try to support new unorthodox ideas. [...]

Hi Peter,

On 08/29/2016 12:48 PM, Peter Zijlstra wrote:
> On Sun, Aug 28, 2016 at 01:56:13PM +0200, Manfred Spraul wrote:
> > Right now, the spinlock machinery tries to guarantee barriers even for
> > unorthodox locking cases, which ends up as a constant stream of updates
> > as the architectures try to support new unorthodox ideas. [...]

On Sun, Aug 28, 2016 at 01:56:13PM +0200, Manfred Spraul wrote:
> Right now, the spinlock machinery tries to guarantee barriers even for
> unorthodox locking cases, which ends up as a constant stream of updates
> as the architectures try to support new unorthodox ideas.
>
> The patch proposes to reverse that:
> spin_lock is ACQUIRE, spin_unlock is RELEASE. [...]

Right now, the spinlock machinery tries to guarantee barriers even for
unorthodox locking cases, which ends up as a constant stream of updates
as the architectures try to support new unorthodox ideas.
The patch proposes to reverse that:
spin_lock is ACQUIRE, spin_unlock is RELEASE.
spin_unlock_wait() [...]

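Under that contract, a caller needing more than ACQUIRE/RELEASE has to say
so explicitly; roughly (a sketch of the proposed division of responsibility,
with hypothetical variable names):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(my_lock);
static int a, b;

static int caller_needing_more(void)
{
        int r;

        spin_lock(&my_lock);    /* ACQUIRE only: no full barrier implied */
        WRITE_ONCE(a, 1);
        smp_mb();               /* caller supplies store->load ordering */
        r = READ_ONCE(b);
        spin_unlock(&my_lock);  /* RELEASE only */
        return r;
}
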