> -	WARN_ON_ONCE(!spin_is_locked(&devinfo->lock));
> +	lockdep_assert_held(&devinfo->lock);

This change isn't equivalent. lockdep_assert_held() will continue to emit
warnings; i.e., there is no "once" functionality. Same for the other changes
below.

Regards,
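
For reference, a hedged sketch of the two assertions being contrasted in the
hunk above; the struct and function names are invented for illustration, and
only the two macros come from the quoted diff:

#include <linux/bug.h>
#include <linux/lockdep.h>
#include <linux/spinlock.h>

struct example_devinfo {
	spinlock_t lock;
};

static void example_assert_locked(struct example_devinfo *devinfo)
{
	/* Old form: only checks that *somebody* holds the lock, and the
	 * warning fires at most once per boot. */
	WARN_ON_ONCE(!spin_is_locked(&devinfo->lock));

	/* New form: checks that the *current* context holds the lock (a
	 * no-op unless lockdep is enabled) and warns on every violation,
	 * which is the missing "once" behavior noted above. */
	lockdep_assert_held(&devinfo->lock);
}
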
On 02/23/2014 06:50 PM, Paul E. McKenney wrote:
On Sun, Feb 23, 2014 at 03:35:31PM -0500, Peter Hurley wrote:
Hi Paul,
On 02/23/2014 11:37 AM, Paul E. McKenney wrote:
commit aba6b0e82c9de53eb032844f1932599f148ff68d
Author: Paul E. McKenney
Date: Sun Feb 23 08:34:24 2014 -0800

Hi James,

On 02/23/2014 03:05 PM, James Bottomley wrote:
On Sat, 2014-02-22 at 14:03 -0500, Peter Hurley wrote:

If it is necessary for a RELEASE-ACQUIRE pair to produce a full barrier, the
ACQUIRE can be followed by an smp_mb__after_unlock_lock() invocation. This
will produce a full barrier [...]

[...] clarifies what happens when
the CPU chooses to execute a later lock acquisition before a prior
lock release, in particular, why deadlock is avoided.
Reported-by: Peter Hurley
Reported-by: James Bottomley
Reported-by: Stefan Richter
Signed-off-by: Paul E. McKenney
diff
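
As a concrete, hedged illustration of the "full barrier" wording quoted above
(all lock and variable names are made up, and smp_mb__after_unlock_lock() is
used as it existed at the time of this thread):

#include <linux/compiler.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock_a);
static DEFINE_SPINLOCK(lock_b);
static int x, y, r1, r2;

static void cpu0_side(void)		/* runs on one CPU */
{
	spin_lock(&lock_a);
	WRITE_ONCE(x, 1);
	spin_unlock(&lock_a);		/* RELEASE */

	spin_lock(&lock_b);		/* ACQUIRE */
	smp_mb__after_unlock_lock();	/* upgrades the RELEASE-ACQUIRE pair
					 * to a full barrier */
	r1 = READ_ONCE(y);
	spin_unlock(&lock_b);
}

static void cpu1_side(void)		/* runs on another CPU */
{
	WRITE_ONCE(y, 1);
	smp_mb();
	r2 = READ_ONCE(x);
}

With the full barrier in place, the store-buffering outcome r1 == 0 && r2 == 0
is forbidden; with a plain unlock-then-lock sequence and no
smp_mb__after_unlock_lock(), the documentation does not forbid it.
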
On 02/22/2014 01:52 PM, James Bottomley wrote:
On Sat, 2014-02-22 at 13:48 -0500, Peter Hurley wrote:
On 02/22/2014 01:43 PM, James Bottomley wrote:
On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
On 02/21/2014 11:57 AM, Tejun Heo wrote:
Yo,
On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:

On 02/22/2014 01:43 PM, James Bottomley wrote:
On Fri, 2014-02-21 at 18:01 -0500, Peter Hurley wrote:
On 02/21/2014 11:57 AM, Tejun Heo wrote:
Yo,
On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
no mb__after unlock.

On 02/22/2014 09:38 AM, Tejun Heo wrote:
Hey,
On Fri, Feb 21, 2014 at 06:46:24PM -0500, Peter Hurley wrote:
It's a long story but the short version is that
Documentation/memory-barriers.txt recently was overhauled to reflect
what cpus actually do and what the different archs actually
de

On 02/21/2014 06:18 PM, Tejun Heo wrote:
On Fri, Feb 21, 2014 at 06:01:29PM -0500, Peter Hurley wrote:
smp_mb__after_unlock_lock() is only for ordering memory operations
between two spin-locked sections on either the same lock or by
the same task/cpu. Like:
i = 1
spin_unlock(lock1
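
The example is cut off by the archive; what follows is a hedged guess at the
shape it was taking (the lock names come from the fragment, everything else
is assumed):

#include <linux/spinlock.h>

static DEFINE_SPINLOCK(lock1);
static DEFINE_SPINLOCK(lock2);
static int i, j;

static void same_task_example(void)	/* one task, two critical sections */
{
	spin_lock(&lock1);
	i = 1;
	spin_unlock(&lock1);

	spin_lock(&lock2);
	smp_mb__after_unlock_lock();	/* so the store to i is ordered before
					 * the store to j as seen by other CPUs */
	j = 1;
	spin_unlock(&lock2);
}
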
On 02/21/2014 11:57 AM, Tejun Heo wrote:
Yo,
On Fri, Feb 21, 2014 at 11:53:46AM -0500, Peter Hurley wrote:
Ok, I can do that. But AFAIK it'll have to be an smp_rmb(); there is
no mb__after unlock.
We do have smp_mb__after_unlock_lock().
[ After thinking about it some, I don

Hi Tejun,
On 02/21/2014 08:06 AM, Tejun Heo wrote:
Hello,
On Fri, Feb 21, 2014 at 07:51:48AM -0500, Peter Hurley wrote:
I think the vast majority of kernel code which uses the workqueue
assumes there is a memory ordering guarantee.
Not really. Workqueues haven't even guarantee

On 02/21/2014 05:03 AM, Tejun Heo wrote:
On Fri, Feb 21, 2014 at 12:13:16AM -0500, Peter Hurley wrote:

   CPU 0                        | CPU 1
                                |
   INIT_WORK(fw_device_workfn)  |
                                |
   workfn = funcA
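
In C, the scenario the (truncated) diagram is sketching is roughly the
following; this is a simplified stand-in for the firewire code under
discussion, not the actual driver:

#include <linux/workqueue.h>

struct fw_device_like {
	struct delayed_work work;	/* initialized elsewhere to run the wrapper below */
	work_func_t workfn;		/* written by CPU 0, read by the worker */
};

/* CPU 0: choose the function, then queue the shared work item. */
static void cpu0_queue(struct fw_device_like *dev, work_func_t funcA)
{
	dev->workfn = funcA;		/* plain store */
	queue_delayed_work(system_wq, &dev->work, 0);
	/* The thread's question: does queueing order the store above against
	 * the worker's later load of dev->workfn? */
}

/* CPU 1 (worker): the wrapper pattern being discussed. */
static void worker_side_workfn(struct work_struct *work)
{
	struct fw_device_like *dev =
		container_of(to_delayed_work(work), struct fw_device_like, work);

	dev->workfn(work);		/* must this load observe funcA? */
}
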
On 02/20/2014 09:13 PM, Tejun Heo wrote:
On Thu, Feb 20, 2014 at 09:07:27PM -0500, Peter Hurley wrote:
On 02/20/2014 08:59 PM, Tejun Heo wrote:
Hello,
On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
+static void fw_device_workfn(struct work_struct *work)
+{
+ struct

[...] function never runs.
But this exposes a more general problem that I believe workqueue should
prevent; speculated loads and stores in the work item function should be
prevented from occurring before clearing PENDING in
set_work_pool_and_clear_pending().
IOW, the beginning of the work function sh
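
A hedged sketch of the ordering being asked for here, written with a made-up
flag and generic bitop/barrier primitives rather than the real
set_work_pool_and_clear_pending() internals:

#include <linux/atomic.h>
#include <linux/bitops.h>

static unsigned long example_flags;
#define EXAMPLE_PENDING	0		/* stand-in for the workqueue PENDING bit */

/* Stand-in for "clear PENDING, then run the work function": the argument
 * above is that nothing in the work function body may be ordered before
 * the clear, i.e. the two steps need a full barrier between them. */
static void example_clear_pending_then_run(void (*fn)(void))
{
	clear_bit(EXAMPLE_PENDING, &example_flags);
	smp_mb__after_atomic();		/* full barrier after the non-value-returning atomic */
	fn();
}
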
On 02/20/2014 08:59 PM, Tejun Heo wrote:
Hello,
On Thu, Feb 20, 2014 at 08:44:46PM -0500, Peter Hurley wrote:
+static void fw_device_workfn(struct work_struct *work)
+{
+	struct fw_device *device = container_of(to_delayed_work(work),
+						struct
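
The hunk above is cut off by the archive. For context, a hedged reconstruction
of what such a wrapper plausibly looks like (the field names "work" and
"workfn" are assumptions inferred from the container_of() call and the
surrounding discussion; the fw_device type is from the firewire core headers):

static void fw_device_workfn(struct work_struct *work)
{
	struct fw_device *device = container_of(to_delayed_work(work),
						struct fw_device, work);

	/* Indirect through a per-device function pointer instead of
	 * re-targeting the work item itself with PREPARE_DELAYED_WORK(). */
	device->workfn(work);
}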