30.08.2013, 07:16, "Waiman Long":
> I am sorry that I was not clear in my previous mail. I mean a flag/macro
> for compile time checking rather than doing runtime checking.
Still, a new smp_mb_xxx() is better IMO, since it can be used in other
places too if a need arises.
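[A minimal sketch of what such a primitive could look like, modeled on the
existing smp_mb__before_spinlock(). The name and the per-arch override are
hypothetical; nothing below is in the tree:]

/* include/linux/spinlock.h -- hypothetical generic fallback */
#ifndef smp_mb__after_spin_unlock
/* Safe default: make unlock-plus-this act as a full memory barrier. */
#define smp_mb__after_spin_unlock()	smp_mb()
#endif

/*
 * An architecture whose spin_unlock already implies full ordering
 * could override this in asm/spinlock.h with a compiler barrier:
 *
 * #define smp_mb__after_spin_unlock()	barrier()
 */

A caller would then write spin_unlock(&l); smp_mb__after_spin_unlock();
and pay for a real fence only on architectures that need one.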
On 08/29/2013 01:03 PM, Alexander Fyodorov wrote:
29.08.2013, 19:25, "Waiman Long":
What I have been thinking is to set a flag in an architecture specific
header file to tell if the architecture need a memory barrier. The
generic code will then either do a smp_mb() or barrier() depending on
the presence or absence of the flag. ...
29.08.2013, 19:25, "Waiman Long":
> What I have been thinking is to set a flag in an architecture specific
> header file to tell if the architecture need a memory barrier. The
> generic code will then either do a smp_mb() or barrier() depending on
> the presence or absence of the flag. I would prefer ...
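[A sketch of the flag scheme Waiman describes; the flag and macro names are
invented for illustration:]

/* arch/<arch>/include/asm/qspinlock.h: defined only by architectures
 * whose unlock path needs a real fence */
#define ARCH_QUEUE_SPINLOCK_UNLOCK_NEEDS_MB

/* generic queue spinlock code */
#ifdef ARCH_QUEUE_SPINLOCK_UNLOCK_NEEDS_MB
# define queue_unlock_barrier()	smp_mb()	/* real memory fence     */
#else
# define queue_unlock_barrier()	barrier()	/* compiler barrier only */
#endif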
On 08/27/2013 08:09 AM, Alexander Fyodorov wrote:
I also thought that the x86 spinlock unlock path was an atomic add. It
only came to my realization recently that this is not the case. The
UNLOCK_LOCK_PREFIX will be mapped to "" except for some old 32-bit x86
processors.
Hmm, I didn't know that. ...
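[For reference, the x86 code being discussed looked roughly like this in
kernels of that era; paraphrased from arch/x86/include/asm/spinlock.h,
exact details varied by version:]

/*
 * On PPro SMP or if we are using OOSTORE, we use a locked operation
 * to unlock (PPro errata 66, 92).
 */
#if defined(CONFIG_X86_32) && \
	(defined(CONFIG_X86_OOSTORE) || defined(CONFIG_X86_PPRO_FENCE))
# define UNLOCK_LOCK_PREFIX LOCK_PREFIX	/* old parts: atomic unlock */
#else
# define UNLOCK_LOCK_PREFIX		/* empty: plain, non-atomic add */
#endif

static __always_inline void __ticket_spin_unlock(arch_spinlock_t *lock)
{
	__add(&lock->tickets.head, 1, UNLOCK_LOCK_PREFIX);
}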
On Wed, Aug 28, 2013 at 09:15:30AM -0400, Steven Rostedt wrote:
> Are you for the smp_mb__after_spin_unlock() ?
Sure, if the performance gains of having it outweigh the added
complexity of having it :-)
Nice non-answer methinks, no? Muwhaha. To me, having one more or less
'conditional' full mb ...
On Wed, 28 Aug 2013 15:05:29 +0200
Peter Zijlstra wrote:
> On Wed, Aug 28, 2013 at 08:59:57AM -0400, Steven Rostedt wrote:
> > On Wed, 28 Aug 2013 10:19:37 +0200
> > Peter Zijlstra wrote:
>
> > Spin locks only prevent leaks out of the critical section. They do
> > not prevent accesses outside from leaking into it. ...
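[Steven's point, restated as a sketch of which accesses may and may not be
reordered across the lock operations; the variables are illustrative:]

a = 1;			/* MAY leak forward into the critical section  */
spin_lock(&lock);	/* ACQUIRE: one-way barrier                    */
b = 1;			/* may NOT leak out, in either direction       */
spin_unlock(&lock);	/* RELEASE: one-way barrier                    */
c = 1;			/* MAY leak backward into the critical section */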
On Wed, Aug 28, 2013 at 08:59:57AM -0400, Steven Rostedt wrote:
> On Wed, 28 Aug 2013 10:19:37 +0200
> Peter Zijlstra wrote:
>
> > > An unlock followed by a lock needs to act like a full barrier, but there
> > > is no requirement that a lock or unlock taken separately act like a
> > > full barrier.
On Wed, 28 Aug 2013 10:19:37 +0200
Peter Zijlstra wrote:
> > An unlock followed by a lock needs to act like a full barrier, but there
> > is no requirement that a lock or unlock taken separately act like a
> > full barrier.
>
> But that is already a property of the acquisition/release barrier.
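[A litmus-style illustration of the distinction being argued; the variables
and locks are hypothetical, with everything initially zero:]

/*
 * CPU 0                            CPU 1
 * -----                            -----
 * x = 1;                           y = 1;
 * spin_unlock(&s);                 smp_mb();
 * spin_lock(&t);                   r2 = x;
 * r1 = y;
 *
 * If UNLOCK followed by LOCK acts as a full barrier, the outcome
 * r1 == 0 && r2 == 0 is forbidden.  With only RELEASE (unlock) and
 * ACQUIRE (lock) semantics, each of which constrains one direction
 * only, the store to x could in principle be reordered after the
 * load of y, so the full-barrier guarantee for the pair is a
 * separate, stronger requirement than the two halves provide alone.
 */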
On Tue, Aug 27, 2013 at 06:21:29PM -0700, Paul E. McKenney wrote:
> On Tue, Aug 27, 2013 at 03:53:10PM +0200, Peter Zijlstra wrote:
> > On Tue, Aug 27, 2013 at 09:14:36AM -0400, Steven Rostedt wrote:
> >
> > > I just had this conversation with Paul McKenney. Should there be a
> > > smp_mb_after_spin_unlock()?
On Tue, Aug 27, 2013 at 03:53:10PM +0200, Peter Zijlstra wrote:
> On Tue, Aug 27, 2013 at 09:14:36AM -0400, Steven Rostedt wrote:
>
> > I just had this conversation with Paul McKenney. Should there be a
> > smp_mb_after_spin_unlock()?
>
> Depends on the benefits I suppose :-) Oleg and Linus did recently add
> smp_mb__before_spinlock(); ...
On Tue, Aug 27, 2013 at 09:14:36AM -0400, Steven Rostedt wrote:
> I just had this conversation with Paul McKenney. Should there be a
> smp_mb_after_spin_unlock()?
Depends on the benefits I suppose :-) Oleg and Linus did recently add
smp_mb__before_spinlock();
> Although we blew it off as adding ...
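[The primitive Peter refers to had just been added at the time; its generic
fallback looked roughly like this, paraphrased from include/linux/spinlock.h
of that era:]

/*
 * Despite its name it doesn't necessarily have to be a full barrier.
 * It only has to guarantee that a STORE issued before the critical
 * section cannot be reordered with a LOAD inside it; spin_lock() is
 * already the one-way barrier that keeps the LOAD from escaping
 * upward, so smp_wmb() suffices as the default.
 */
#ifndef smp_mb__before_spinlock
#define smp_mb__before_spinlock()	smp_wmb()
#endif

Its user at the time was the scheduler's wakeup path, roughly:
smp_mb__before_spinlock(); raw_spin_lock_irqsave(&p->pi_lock, flags);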
> I also thought that the x86 spinlock unlock path was an atomic add. It
> only came to my realization recently that this is not the case. The
> UNLOCK_LOCK_PREFIX will be mapped to "" except for some old 32-bit x86
> processors.
Hmm, I didn't know that. Looking through Google found these rules for ...
On 08/22/2013 09:28 AM, Alexander Fyodorov wrote:
22.08.2013, 05:04, "Waiman Long":
On 08/21/2013 11:51 AM, Alexander Fyodorov wrote:
In this case, we should have smp_wmb() before freeing the lock. The
question is whether we need to do a full mb() instead. The x86 ticket
spinlock unlock code is just a regular add instruction except for some
old 32-bit x86 processors. ...
22.08.2013, 05:04, "Waiman Long":
> On 08/21/2013 11:51 AM, Alexander Fyodorov wrote:
> In this case, we should have smp_wmb() before freeing the lock. The
> question is whether we need to do a full mb() instead. The x86 ticket
> spinlock unlock code is just a regular add instruction except for some
> old 32-bit x86 processors. ...
On 08/21/2013 11:51 AM, Alexander Fyodorov wrote:
21.08.2013, 07:01, "Waiman Long":
On 08/20/2013 11:31 AM, Alexander Fyodorov wrote:
Isn't a race possible if another thread acquires the spinlock in the window
between setting lock->locked to 0 and issuing smp_wmb()? Writes from
the critical section from our thread might be delayed behind the
releasing store. ...
21.08.2013, 07:01, "Waiman Long":
> On 08/20/2013 11:31 AM, Alexander Fyodorov wrote:
>> Isn't a race possible if another thread acquires the spinlock in the window
>> between setting lock->locked to 0 and issuing smp_wmb()? Writes from
>> the critical section from our thread might be delayed behind the
>> releasing store. ...
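[Alexander's concern, restated as a sketch; the data field is illustrative:]

/* Unlocking CPU, with the barrier placed as in the patch: */
shared_data = 42;	/* write inside the critical section        */
lock->locked = 0;	/* lock is visibly free from here on...     */
			/* <-- window: another CPU can acquire the
			 *     lock and read a stale shared_data    */
smp_wmb();		/* ...so this barrier comes too late        */

/* Ordering the writes before the releasing store closes the window: */
shared_data = 42;
smp_wmb();		/* or smp_mb(); that is the open question   */
lock->locked = 0;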
On 08/20/2013 11:31 AM, Alexander Fyodorov wrote:
Hi Waiman,
I'm not subscribed to the lkml so I'm writing to you instead. In your patch you
put smp_wmb() at the end of spin_unlock():
+static __always_inline void queue_spin_unlock(struct qspinlock *lock)
+{
+	/*
+	 * This unlock function ...
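[The quoted hunk is cut off. Judging from Alexander's question above, which
says the lock word is cleared and only then is smp_wmb() issued, the body
was presumably close to the following; the comment text and the use of
ACCESS_ONCE() are reconstructions, not the original source:]

static __always_inline void queue_spin_unlock(struct qspinlock *lock)
{
	/*
	 * Reconstruction -- the original comment is truncated above.
	 * The ordering (clear the lock word, then issue smp_wmb())
	 * is taken from the discussion in this thread.
	 */
	ACCESS_ONCE(lock->locked) = 0;
	smp_wmb();
}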
This patch introduces a new queue spinlock implementation that can
serve as an alternative to the default ticket spinlock. Compared with
the ticket spinlock, this queue spinlock should be almost as fair as
the ticket spinlock. It has about the same speed in the single-thread
case and can be much faster in a contended situation. ...