Re: [PATCH v2 3/9] rcu/sync: Remove custom check for reader-section

2019-07-15 Thread Oleg Nesterov
On 07/12, Joel Fernandes wrote:
>
>  static inline bool rcu_sync_is_idle(struct rcu_sync *rsp)
>  {
> - RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
> -  !rcu_read_lock_bh_held() &&
> -  !rcu_read_lock_sched_held(),
> + RCU_LOCKDEP_WARN(!rcu_read_lock_any_held(),

Yes, this is what I meant.

Sorry for the confusion, I should have mentioned that rcu_sync_is_idle()
was recently updated when I suggested using the new helper.

Acked-by: Oleg Nesterov 



[PATCH v2 7/8] docs: load_config.py: avoid needing a conf.py just due to LaTeX docs

2019-07-15 Thread Mauro Carvalho Chehab
Right now, for every directory where we need LaTeX output,
a conf.py file is required.

That adds extra overhead and is actually a hack, as the
latex_documents lines there are usually just a copy of the ones
already present in the main conf.py.

So, instead, re-use the global latex_documents variable, just
adjusting the paths to be relative.

Signed-off-by: Mauro Carvalho Chehab 

---

v2: make SPHINXDIRS="foo" htmldocs work without needing a per-subdir
conf.py.

diff --git a/Documentation/sphinx/load_config.py b/Documentation/sphinx/load_config.py
index 301a21aa4f63..e4a04f367b41 100644
--- a/Documentation/sphinx/load_config.py
+++ b/Documentation/sphinx/load_config.py
@@ -21,6 +21,29 @@ def loadConfig(namespace):
         and os.path.normpath(namespace["__file__"]) != os.path.normpath(config_file) ):
         config_file = os.path.abspath(config_file)
 
+        # Let's avoid one conf.py file just due to latex_documents
+        start = config_file.find('Documentation/')
+        if start >= 0:
+            start = config_file.find('/', start + 1)
+
+        end = config_file.rfind('/')
+        if start >= 0 and end > 0:
+            dir = config_file[start + 1:end]
+
+            print("source directory: %s" % dir)
+            new_latex_docs = []
+            latex_documents = namespace['latex_documents']
+
+            for l in latex_documents:
+                if l[0].find(dir) == 0:
+                    has = True
+                    fn = l[0][len(dir) + 1:]
+                    new_latex_docs.append((fn, l[1], l[2], l[3], l[4]))
+                    break
+
+            namespace['latex_documents'] = new_latex_docs
+
+        # If there is an extra conf.py file, load it
         if os.path.isfile(config_file):
             sys.stdout.write("load additional sphinx-config: %s\n" % config_file)
             config = namespace.copy()
@@ -29,4 +52,6 @@ def loadConfig(namespace):
             del config['__file__']
             namespace.update(config)
         else:
-            sys.stderr.write("WARNING: additional sphinx-config not found: %s\n" % config_file)
+            config = namespace.copy()
+            config['tags'].add("subproject")
+            namespace.update(config)


Re: [PATCH] tracing/fgraph: support recording function return values

2019-07-15 Thread Will Deacon
On Sat, Jul 13, 2019 at 08:10:26PM +0800, Changbin Du wrote:
> This patch adds a new trace option 'funcgraph-retval', which is disabled by
> default. When this option is enabled, the fgraph tracer will show the return
> value of each function. This is useful to find/analyze an original error
> source in a call graph.
> 
> One limitation is that the kernel doesn't know the prototype of functions.
> So fgraph assumes all functions have a return value of type int. You must
> ignore the value of *void* functions. And if the return value looks like an
> error code, then both hexadecimal and decimal numbers are displayed.

This seems like quite a significant drawback and I think it could be pretty
confusing if you have to filter out bogus return values from the trace.

For example, in your snippet:

>  3)   |  kvm_vm_ioctl() {
>  3)   |mutex_lock() {
>  3)   |  _cond_resched() {
>  3)   0.234 us|rcu_all_qs(); /* ret=0x8000 */
>  3)   0.704 us|  } /* ret=0x0 */
>  3)   1.226 us|} /* ret=0x0 */
>  3)   0.247 us|mutex_unlock(); /* ret=0x8880738ed040 */

mutex_unlock() is wrongly listed as returning something.

How much of this could be achieved from userspace by placing kretprobes on
non-void functions instead?
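
For reference, kretprobes can be placed from userspace through the tracefs
kprobe_events interface, or via a small module. A minimal module-side sketch
(illustrative only; the target symbol kvm_vm_ioctl is an assumption taken
from the snippet above, not something proposed in this thread) could look
like:

/* kretval_probe.c - sketch: log the return value of one non-void function */
#include <linux/module.h>
#include <linux/kprobes.h>

static int retval_handler(struct kretprobe_instance *ri, struct pt_regs *regs)
{
	/* regs_return_value() extracts the probed function's return value */
	pr_info("kvm_vm_ioctl returned 0x%lx\n", regs_return_value(regs));
	return 0;
}

static struct kretprobe retval_probe = {
	.handler	= retval_handler,
	.kp.symbol_name	= "kvm_vm_ioctl",
	.maxactive	= 20,	/* how many concurrent instances to track */
};

static int __init retval_probe_init(void)
{
	return register_kretprobe(&retval_probe);
}

static void __exit retval_probe_exit(void)
{
	unregister_kretprobe(&retval_probe);
}

module_init(retval_probe_init);
module_exit(retval_probe_exit);
MODULE_LICENSE("GPL");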

Will


Re: [PATCH v8 0/2] fTPM: firmware TPM running in TEE

2019-07-15 Thread Ilias Apalodimas
On Fri, Jul 12, 2019 at 06:37:58AM +0300, Jarkko Sakkinen wrote:
> On Thu, Jul 11, 2019 at 11:10:59PM +0300, Ilias Apalodimas wrote:
> > Will report back any issues when we start using it on real hardware
> > rather than QEMU
> > 
> > Thanks
> > /Ilias
> 
> That would be awesome. The PR is far away, so there is time to add more
> tested-by's. Thanks.
> 

I tested the basic functionality on QEMU, with the code built only as a
module. You can add my Tested-by on this if you want.

> /Jarkko


[PATCH] docs: fix broken doc links due to renames

2019-07-15 Thread Mauro Carvalho Chehab
Some files got renamed, but the patches that renamed them were incomplete,
as they forgot to update the documentation references accordingly.

Signed-off-by: Mauro Carvalho Chehab 
---

This patch is against current linus/master branch.

 Documentation/RCU/rculist_nulls.txt   | 2 +-
 Documentation/devicetree/bindings/arm/idle-states.txt | 2 +-
 Documentation/locking/spinlocks.txt   | 4 ++--
 Documentation/memory-barriers.txt | 2 +-
 Documentation/translations/ko_KR/memory-barriers.txt  | 2 +-
 MAINTAINERS   | 6 +++---
 drivers/scsi/hpsa.c   | 4 ++--
 7 files changed, 11 insertions(+), 11 deletions(-)

diff --git a/Documentation/RCU/rculist_nulls.txt 
b/Documentation/RCU/rculist_nulls.txt
index 8151f0195f76..23f115dc87cf 100644
--- a/Documentation/RCU/rculist_nulls.txt
+++ b/Documentation/RCU/rculist_nulls.txt
@@ -1,7 +1,7 @@
 Using hlist_nulls to protect read-mostly linked lists and
 objects using SLAB_TYPESAFE_BY_RCU allocations.
 
-Please read the basics in Documentation/RCU/listRCU.txt
+Please read the basics in Documentation/RCU/listRCU.rst
 
 Using special makers (called 'nulls') is a convenient way
 to solve following problem :
diff --git a/Documentation/devicetree/bindings/arm/idle-states.txt 
b/Documentation/devicetree/bindings/arm/idle-states.txt
index 326f29b270ad..2d325bed37e5 100644
--- a/Documentation/devicetree/bindings/arm/idle-states.txt
+++ b/Documentation/devicetree/bindings/arm/idle-states.txt
@@ -703,4 +703,4 @@ cpus {
 https://www.devicetree.org/specifications/
 
 [6] ARM Linux Kernel documentation - Booting AArch64 Linux
-Documentation/arm64/booting.txt
+Documentation/arm64/booting.rst
diff --git a/Documentation/locking/spinlocks.txt 
b/Documentation/locking/spinlocks.txt
index ff35e40bdf5b..430b641ae072 100644
--- a/Documentation/locking/spinlocks.txt
+++ b/Documentation/locking/spinlocks.txt
@@ -74,7 +74,7 @@ itself.  The read lock allows many concurrent readers.  
Anything that
 _changes_ the list will have to get the write lock.
 
NOTE! RCU is better for list traversal, but requires careful
-   attention to design detail (see Documentation/RCU/listRCU.txt).
+   attention to design detail (see Documentation/RCU/listRCU.rst).
 
 Also, you cannot "upgrade" a read-lock to a write-lock, so if you at _any_
 time need to do any changes (even if you don't do it every time), you have
@@ -82,7 +82,7 @@ to get the write-lock at the very beginning.
 
NOTE! We are working hard to remove reader-writer spinlocks in most
cases, so please don't add a new one without consensus.  (Instead, see
-   Documentation/RCU/rcu.txt for complete information.)
+   Documentation/RCU/rcu.rst for complete information.)
 
 
 
diff --git a/Documentation/memory-barriers.txt 
b/Documentation/memory-barriers.txt
index 045bb8148fe9..1adbb8a371c7 100644
--- a/Documentation/memory-barriers.txt
+++ b/Documentation/memory-barriers.txt
@@ -548,7 +548,7 @@ There are certain things that the Linux kernel memory 
barriers do not guarantee:
 
[*] For information on bus mastering DMA and coherency please read:
 
-   Documentation/PCI/pci.rst
+   Documentation/driver-api/pci/pci.rst
Documentation/DMA-API-HOWTO.txt
Documentation/DMA-API.txt
 
diff --git a/Documentation/translations/ko_KR/memory-barriers.txt 
b/Documentation/translations/ko_KR/memory-barriers.txt
index a33c2a536542..2774624ee843 100644
--- a/Documentation/translations/ko_KR/memory-barriers.txt
+++ b/Documentation/translations/ko_KR/memory-barriers.txt
@@ -569,7 +569,7 @@ ACQUIRE 는 해당 오퍼레이션의 로드 부분에만 적용되고 RELEASE 
 
[*] 버스 마스터링 DMA 와 일관성에 대해서는 다음을 참고하시기 바랍니다:
 
-   Documentation/PCI/pci.rst
+   Documentation/driver-api/pci/pci.rst
Documentation/DMA-API-HOWTO.txt
Documentation/DMA-API.txt
 
diff --git a/MAINTAINERS b/MAINTAINERS
index f5533d1bda2e..51ad84a3f4e3 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -899,7 +899,7 @@ L:  linux-...@vger.kernel.org
 W: http://ez.analog.com/community/linux-device-drivers
 S: Supported
 F: drivers/iio/adc/ad7124.c
-F: Documentation/devicetree/bindings/iio/adc/adi,ad7124.txt
+F: Documentation/devicetree/bindings/iio/adc/adi,ad7124.yaml
 
 ANALOG DEVICES INC AD7606 DRIVER
 M: Stefan Popa 
@@ -6828,7 +6828,7 @@ R:Sagi Shahar 
 R: Jon Olson 
 L: net...@vger.kernel.org
 S: Supported
-F: Documentation/networking/device_drivers/google/gve.txt
+F: Documentation/networking/device_drivers/google/gve.rst
 F: drivers/net/ethernet/google
 
 GPD POCKET FAN DRIVER
@@ -12077,7 +12077,7 @@ M:  Juergen Gross 
 M: Alok Kataria 
 L: virtualizat...@lists.linux-foundation.org
 S: Supported
-F: Documentation/virtual/paravirt_ops.txt
+F: Documentation/virtual/paravirt_ops.rst
 F: arch/*/kernel/paravirt*
 F: arch/*/include/asm/paravirt*.h
 F: includ

Re: [PATCH] tracing/fgraph: support recording function return values

2019-07-15 Thread Peter Zijlstra
On Mon, Jul 15, 2019 at 09:29:30AM +0100, Will Deacon wrote:
> On Sat, Jul 13, 2019 at 08:10:26PM +0800, Changbin Du wrote:
> > This patch adds a new trace option 'funcgraph-retval', which is disabled by
> > default. When this option is enabled, the fgraph tracer will show the return
> > value of each function. This is useful to find/analyze an original error
> > source in a call graph.
> > 
> > One limitation is that the kernel doesn't know the prototype of functions.
> > So fgraph assumes all functions have a return value of type int. You must
> > ignore the value of *void* functions. And if the return value looks like an
> > error code, then both hexadecimal and decimal numbers are displayed.
> 
> This seems like quite a significant drawback and I think it could be pretty
> confusing if you have to filter out bogus return values from the trace.
> 
> For example, in your snippet:
> 
> >  3)   |  kvm_vm_ioctl() {
> >  3)   |mutex_lock() {
> >  3)   |  _cond_resched() {
> >  3)   0.234 us|rcu_all_qs(); /* ret=0x8000 */
> >  3)   0.704 us|  } /* ret=0x0 */
> >  3)   1.226 us|} /* ret=0x0 */
> >  3)   0.247 us|mutex_unlock(); /* ret=0x8880738ed040 */
> 
> mutex_unlock() is wrongly listed as returning something.
> 
> How much of this could be achieved from userspace by placing kretprobes on
> non-void functions instead?

Alternatively, we can have recordmcount (or objtool) mark all functions
with a return value when the build has DEBUG_INFO on. The dwarves know
the function signature.



Kindly Respond

2019-07-15 Thread Donald Douglas
Hello,
I am Barr Fredrick Mbogo a business consultant i have a lucrative
business to discuss with you from the Eastern part of Africa Uganda to
be precise aimed at agreed percentage upon your acceptance of my hand
in business and friendship. Kindly respond to me if you are interested
to partner with me for an update. Very important.

Yours Sincerely,
Donald Douglas,
For,
Barr Frederick Mbogo
Legal Consultant.
Reply to: barrfredmb...@consultant.com


[PATCH AUTOSEL 5.2 117/249] x86/atomic: Fix smp_mb__{before,after}_atomic()

2019-07-15 Thread Sasha Levin
From: Peter Zijlstra 

[ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;

Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

NOTE: atomic_{or,and,xor} and the bitops already had the compiler
barrier.
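
As a simplified illustration of what the fix amounts to (a sketch only, not
the kernel's actual macros), the difference is the "memory" clobber, which
forbids the compiler from caching or moving other memory accesses across the
asm statement:

	/* x86-style atomic increment, without and with a compiler barrier */
	static inline void atomic_inc_no_compiler_barrier(int *v)
	{
		asm volatile("lock incl %0" : "+m" (*v));
	}

	static inline void atomic_inc_with_compiler_barrier(int *v)
	{
		/* the "memory" clobber is what acts as the compiler barrier */
		asm volatile("lock incl %0" : "+m" (*v) : : "memory");
	}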

Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/atomic_t.txt | 3 +++
 arch/x86/include/asm/atomic.h  | 8 
 arch/x86/include/asm/atomic64_64.h | 8 
 arch/x86/include/asm/barrier.h | 4 ++--
 4 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index dca3fb0554db..65bb09a29324 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -194,6 +194,9 @@ These helper barriers exist because architectures have 
varying implicit
 ordering on their SMP atomic primitives. For example our TSO architectures
 provide full ordered atomics and these barriers are no-ops.
 
+NOTE: when the atomic RmW ops are fully ordered, they should also imply a
+compiler barrier.
+
 Thus:
 
   atomic_fetch_add();
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index ea3d95275b43..115127c7ad28 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t 
*v)
 {
asm volatile(LOCK_PREFIX "addl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -68,7 +68,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t 
*v)
 {
asm volatile(LOCK_PREFIX "subl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -95,7 +95,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, 
atomic_t *v)
 static __always_inline void arch_atomic_inc(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "incl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 #define arch_atomic_inc arch_atomic_inc
 
@@ -108,7 +108,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
 static __always_inline void arch_atomic_dec(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "decl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 #define arch_atomic_dec arch_atomic_dec
 
diff --git a/arch/x86/include/asm/atomic64_64.h 
b/arch/x86/include/asm/atomic64_64.h
index dadc20adba21..5e86c0d68ac1 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -45,7 +45,7 @@ static __always_inline void arch_atomic64_add(long i, 
atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "addq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -59,7 +59,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "subq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -87,7 +87,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "incq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 #define arch_atomic64_inc arch_atomic64_inc
 
@@ -101,7 +101,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "decq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 #define arch_atomic64_dec arch_atomic64_dec
 
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 14de0432d288..84f848c2541a 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -80,8 +80,8 @@ do {  

[PATCH AUTOSEL 5.2 113/249] sched/fair: Fix "runnable_avg_yN_inv" not used warnings

2019-07-15 Thread Sasha Levin
From: Qian Cai 

[ Upstream commit 509466b7d480bc5d22e90b9fbe6122ae0e2fbe39 ]

runnable_avg_yN_inv[] is only used in kernel/sched/pelt.c, but it was
included in several other places because they need other macros that all
come from kernel/sched/sched-pelt.h, which is generated by
Documentation/scheduler/sched-pelt. As a result, it causes a lot of
compilation warnings:

  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  ...

Silence them by appending the __maybe_unused attribute, so that all
generated variables and macros can still be kept in the same file.

Signed-off-by: Qian Cai 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: https://lkml.kernel.org/r/1559596304-31581-1-git-send-email-...@lca.pw
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/scheduler/sched-pelt.c | 3 ++-
 kernel/sched/sched-pelt.h| 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/scheduler/sched-pelt.c 
b/Documentation/scheduler/sched-pelt.c
index e4219139386a..7238b355919c 100644
--- a/Documentation/scheduler/sched-pelt.c
+++ b/Documentation/scheduler/sched-pelt.c
@@ -20,7 +20,8 @@ void calc_runnable_avg_yN_inv(void)
int i;
unsigned int x;
 
-   printf("static const u32 runnable_avg_yN_inv[] = {");
+   /* To silence -Wunused-but-set-variable warnings. */
+   printf("static const u32 runnable_avg_yN_inv[] __maybe_unused = {");
for (i = 0; i < HALFLIFE; i++) {
x = ((1UL<<32)-1)*pow(y, i);
 
diff --git a/kernel/sched/sched-pelt.h b/kernel/sched/sched-pelt.h
index a26473674fb7..c529706bed11 100644
--- a/kernel/sched/sched-pelt.h
+++ b/kernel/sched/sched-pelt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /* Generated by Documentation/scheduler/sched-pelt; do not modify. */
 
-static const u32 runnable_avg_yN_inv[] = {
+static const u32 runnable_avg_yN_inv[] __maybe_unused = {
0x, 0xfa83b2da, 0xf5257d14, 0xefe4b99a, 0xeac0c6e6, 0xe5b906e6,
0xe0ccdeeb, 0xdbfbb796, 0xd744fcc9, 0xd2a81d91, 0xce248c14, 0xc9b9bd85,
0xc5672a10, 0xc12c4cc9, 0xbd08a39e, 0xb8fbaf46, 0xb504f333, 0xb123f581,
-- 
2.20.1



[PATCH AUTOSEL 5.1 103/219] x86/atomic: Fix smp_mb__{before,after}_atomic()

2019-07-15 Thread Sasha Levin
From: Peter Zijlstra 

[ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;

Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

NOTE: atomic_{or,and,xor} and the bitops already had the compiler
barrier.

Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/atomic_t.txt | 3 +++
 arch/x86/include/asm/atomic.h  | 8 
 arch/x86/include/asm/atomic64_64.h | 8 
 arch/x86/include/asm/barrier.h | 4 ++--
 4 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 913396ac5824..ed0d814df7e0 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -177,6 +177,9 @@ These helper barriers exist because architectures have 
varying implicit
 ordering on their SMP atomic primitives. For example our TSO architectures
 provide full ordered atomics and these barriers are no-ops.
 
+NOTE: when the atomic RmW ops are fully ordered, they should also imply a
+compiler barrier.
+
 Thus:
 
   atomic_fetch_add();
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index ea3d95275b43..115127c7ad28 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t 
*v)
 {
asm volatile(LOCK_PREFIX "addl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -68,7 +68,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t 
*v)
 {
asm volatile(LOCK_PREFIX "subl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -95,7 +95,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, 
atomic_t *v)
 static __always_inline void arch_atomic_inc(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "incl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 #define arch_atomic_inc arch_atomic_inc
 
@@ -108,7 +108,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
 static __always_inline void arch_atomic_dec(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "decl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 #define arch_atomic_dec arch_atomic_dec
 
diff --git a/arch/x86/include/asm/atomic64_64.h 
b/arch/x86/include/asm/atomic64_64.h
index dadc20adba21..5e86c0d68ac1 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -45,7 +45,7 @@ static __always_inline void arch_atomic64_add(long i, 
atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "addq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -59,7 +59,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "subq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -87,7 +87,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "incq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 #define arch_atomic64_inc arch_atomic64_inc
 
@@ -101,7 +101,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "decq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 #define arch_atomic64_dec arch_atomic64_dec
 
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 14de0432d288..84f848c2541a 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -80,8 +80,8 @@ do {  

[PATCH AUTOSEL 4.19 077/158] x86/atomic: Fix smp_mb__{before,after}_atomic()

2019-07-15 Thread Sasha Levin
From: Peter Zijlstra 

[ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;

Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

NOTE: atomic_{or,and,xor} and the bitops already had the compiler
barrier.

Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/atomic_t.txt | 3 +++
 arch/x86/include/asm/atomic.h  | 8 
 arch/x86/include/asm/atomic64_64.h | 8 
 arch/x86/include/asm/barrier.h | 4 ++--
 4 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 913396ac5824..ed0d814df7e0 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -177,6 +177,9 @@ These helper barriers exist because architectures have 
varying implicit
 ordering on their SMP atomic primitives. For example our TSO architectures
 provide full ordered atomics and these barriers are no-ops.
 
+NOTE: when the atomic RmW ops are fully ordered, they should also imply a
+compiler barrier.
+
 Thus:
 
   atomic_fetch_add();
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index ce84388e540c..d266a4066289 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -54,7 +54,7 @@ static __always_inline void arch_atomic_add(int i, atomic_t 
*v)
 {
asm volatile(LOCK_PREFIX "addl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -68,7 +68,7 @@ static __always_inline void arch_atomic_sub(int i, atomic_t 
*v)
 {
asm volatile(LOCK_PREFIX "subl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -95,7 +95,7 @@ static __always_inline bool arch_atomic_sub_and_test(int i, 
atomic_t *v)
 static __always_inline void arch_atomic_inc(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "incl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 #define arch_atomic_inc arch_atomic_inc
 
@@ -108,7 +108,7 @@ static __always_inline void arch_atomic_inc(atomic_t *v)
 static __always_inline void arch_atomic_dec(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "decl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 #define arch_atomic_dec arch_atomic_dec
 
diff --git a/arch/x86/include/asm/atomic64_64.h 
b/arch/x86/include/asm/atomic64_64.h
index 5f851d92eecd..55ca027f8c1c 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -45,7 +45,7 @@ static __always_inline void arch_atomic64_add(long i, 
atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "addq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -59,7 +59,7 @@ static inline void arch_atomic64_sub(long i, atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "subq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -87,7 +87,7 @@ static __always_inline void arch_atomic64_inc(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "incq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 #define arch_atomic64_inc arch_atomic64_inc
 
@@ -101,7 +101,7 @@ static __always_inline void arch_atomic64_dec(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "decq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 #define arch_atomic64_dec arch_atomic64_dec
 
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index 14de0432d288..84f848c2541a 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -80,8 +80,8 @@ do {  

[PATCH AUTOSEL 4.14 053/105] sched/fair: Fix "runnable_avg_yN_inv" not used warnings

2019-07-15 Thread Sasha Levin
From: Qian Cai 

[ Upstream commit 509466b7d480bc5d22e90b9fbe6122ae0e2fbe39 ]

runnable_avg_yN_inv[] is only used in kernel/sched/pelt.c, but it was
included in several other places because they need other macros that all
come from kernel/sched/sched-pelt.h, which is generated by
Documentation/scheduler/sched-pelt. As a result, it causes a lot of
compilation warnings:

  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  ...

Silence them by appending the __maybe_unused attribute, so that all
generated variables and macros can still be kept in the same file.

Signed-off-by: Qian Cai 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: https://lkml.kernel.org/r/1559596304-31581-1-git-send-email-...@lca.pw
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/scheduler/sched-pelt.c | 3 ++-
 kernel/sched/sched-pelt.h| 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/scheduler/sched-pelt.c 
b/Documentation/scheduler/sched-pelt.c
index e4219139386a..7238b355919c 100644
--- a/Documentation/scheduler/sched-pelt.c
+++ b/Documentation/scheduler/sched-pelt.c
@@ -20,7 +20,8 @@ void calc_runnable_avg_yN_inv(void)
int i;
unsigned int x;
 
-   printf("static const u32 runnable_avg_yN_inv[] = {");
+   /* To silence -Wunused-but-set-variable warnings. */
+   printf("static const u32 runnable_avg_yN_inv[] __maybe_unused = {");
for (i = 0; i < HALFLIFE; i++) {
x = ((1UL<<32)-1)*pow(y, i);
 
diff --git a/kernel/sched/sched-pelt.h b/kernel/sched/sched-pelt.h
index a26473674fb7..c529706bed11 100644
--- a/kernel/sched/sched-pelt.h
+++ b/kernel/sched/sched-pelt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /* Generated by Documentation/scheduler/sched-pelt; do not modify. */
 
-static const u32 runnable_avg_yN_inv[] = {
+static const u32 runnable_avg_yN_inv[] __maybe_unused = {
0x, 0xfa83b2da, 0xf5257d14, 0xefe4b99a, 0xeac0c6e6, 0xe5b906e6,
0xe0ccdeeb, 0xdbfbb796, 0xd744fcc9, 0xd2a81d91, 0xce248c14, 0xc9b9bd85,
0xc5672a10, 0xc12c4cc9, 0xbd08a39e, 0xb8fbaf46, 0xb504f333, 0xb123f581,
-- 
2.20.1



[PATCH AUTOSEL 4.14 054/105] x86/atomic: Fix smp_mb__{before,after}_atomic()

2019-07-15 Thread Sasha Levin
From: Peter Zijlstra 

[ Upstream commit 69d927bba39517d0980462efc051875b7f4db185 ]

Recent probing at the Linux Kernel Memory Model uncovered a
'surprise'. Strongly ordered architectures where the atomic RmW
primitive implies full memory ordering and
smp_mb__{before,after}_atomic() are a simple barrier() (such as x86)
fail for:

*x = 1;
atomic_inc(u);
smp_mb__after_atomic();
r0 = *y;

Because, while the atomic_inc() implies memory order, it
(surprisingly) does not provide a compiler barrier. This then allows
the compiler to re-order like so:

atomic_inc(u);
*x = 1;
smp_mb__after_atomic();
r0 = *y;

Which the CPU is then allowed to re-order (under TSO rules) like:

atomic_inc(u);
r0 = *y;
*x = 1;

And this very much was not intended. Therefore strengthen the atomic
RmW ops to include a compiler barrier.

NOTE: atomic_{or,and,xor} and the bitops already had the compiler
barrier.

Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/atomic_t.txt | 3 +++
 arch/x86/include/asm/atomic.h  | 8 
 arch/x86/include/asm/atomic64_64.h | 8 
 arch/x86/include/asm/barrier.h | 4 ++--
 4 files changed, 13 insertions(+), 10 deletions(-)

diff --git a/Documentation/atomic_t.txt b/Documentation/atomic_t.txt
index 913396ac5824..ed0d814df7e0 100644
--- a/Documentation/atomic_t.txt
+++ b/Documentation/atomic_t.txt
@@ -177,6 +177,9 @@ These helper barriers exist because architectures have 
varying implicit
 ordering on their SMP atomic primitives. For example our TSO architectures
 provide full ordered atomics and these barriers are no-ops.
 
+NOTE: when the atomic RmW ops are fully ordered, they should also imply a
+compiler barrier.
+
 Thus:
 
   atomic_fetch_add();
diff --git a/arch/x86/include/asm/atomic.h b/arch/x86/include/asm/atomic.h
index 72759f131cc5..d09dd91dd0b6 100644
--- a/arch/x86/include/asm/atomic.h
+++ b/arch/x86/include/asm/atomic.h
@@ -50,7 +50,7 @@ static __always_inline void atomic_add(int i, atomic_t *v)
 {
asm volatile(LOCK_PREFIX "addl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -64,7 +64,7 @@ static __always_inline void atomic_sub(int i, atomic_t *v)
 {
asm volatile(LOCK_PREFIX "subl %1,%0"
 : "+m" (v->counter)
-: "ir" (i));
+: "ir" (i) : "memory");
 }
 
 /**
@@ -90,7 +90,7 @@ static __always_inline bool atomic_sub_and_test(int i, 
atomic_t *v)
 static __always_inline void atomic_inc(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "incl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 
 /**
@@ -102,7 +102,7 @@ static __always_inline void atomic_inc(atomic_t *v)
 static __always_inline void atomic_dec(atomic_t *v)
 {
asm volatile(LOCK_PREFIX "decl %0"
-: "+m" (v->counter));
+: "+m" (v->counter) :: "memory");
 }
 
 /**
diff --git a/arch/x86/include/asm/atomic64_64.h 
b/arch/x86/include/asm/atomic64_64.h
index 738495caf05f..e6fad6bbb2ee 100644
--- a/arch/x86/include/asm/atomic64_64.h
+++ b/arch/x86/include/asm/atomic64_64.h
@@ -45,7 +45,7 @@ static __always_inline void atomic64_add(long i, atomic64_t 
*v)
 {
asm volatile(LOCK_PREFIX "addq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -59,7 +59,7 @@ static inline void atomic64_sub(long i, atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "subq %1,%0"
 : "=m" (v->counter)
-: "er" (i), "m" (v->counter));
+: "er" (i), "m" (v->counter) : "memory");
 }
 
 /**
@@ -86,7 +86,7 @@ static __always_inline void atomic64_inc(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "incq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 
 /**
@@ -99,7 +99,7 @@ static __always_inline void atomic64_dec(atomic64_t *v)
 {
asm volatile(LOCK_PREFIX "decq %0"
 : "=m" (v->counter)
-: "m" (v->counter));
+: "m" (v->counter) : "memory");
 }
 
 /**
diff --git a/arch/x86/include/asm/barrier.h b/arch/x86/include/asm/barrier.h
index a04f0c242a28..bc88797cfa61 100644
--- a/arch/x86/include/asm/barrier.h
+++ b/arch/x86/include/asm/barrier.h
@@ -106,8 +106,8 @@ do {
\
 #endif
 
 /* Atomic operations are already serializing on x86 */
-#define __smp_mb__before_atomic()  barrier()
-#define __smp_mb__after_atomic()   barrier()
+#define __smp_mb__before_atom

[PATCH 7/9] x86/pci: Pass lockdep condition to pci_mmcfg_list iterator (v1)

2019-07-15 Thread Joel Fernandes (Google)
The pci_mmcfg_list is traversed with list_for_each_entry_rcu() without a
reader lock held, because the pci_mmcfg_lock is already held. Make this
known to the list macro so that it fixes the new lockdep warnings that
trigger due to the lockdep checks added to list_for_each_entry_rcu().

Signed-off-by: Joel Fernandes (Google) 
---
 arch/x86/pci/mmconfig-shared.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c
index 7389db538c30..6fa42e9c4e6f 100644
--- a/arch/x86/pci/mmconfig-shared.c
+++ b/arch/x86/pci/mmconfig-shared.c
@@ -29,6 +29,7 @@
 static bool pci_mmcfg_running_state;
 static bool pci_mmcfg_arch_init_failed;
 static DEFINE_MUTEX(pci_mmcfg_lock);
+#define pci_mmcfg_lock_held() lock_is_held(&(pci_mmcfg_lock).dep_map)
 
 LIST_HEAD(pci_mmcfg_list);
 
@@ -54,7 +55,7 @@ static void list_add_sorted(struct pci_mmcfg_region *new)
struct pci_mmcfg_region *cfg;
 
/* keep list sorted by segment and starting bus number */
-   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
+   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, 
pci_mmcfg_lock_held()) {
if (cfg->segment > new->segment ||
(cfg->segment == new->segment &&
 cfg->start_bus >= new->start_bus)) {
@@ -118,7 +119,7 @@ struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, 
int bus)
 {
struct pci_mmcfg_region *cfg;
 
-   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
+   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, 
pci_mmcfg_lock_held())
if (cfg->segment == segment &&
cfg->start_bus <= bus && bus <= cfg->end_bus)
return cfg;
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 5/9] driver/core: Convert to use built-in RCU list checking (v1)

2019-07-15 Thread Joel Fernandes (Google)
list_for_each_entry_rcu has built-in RCU and lock checking. Make use of
it in driver core.

Acked-by: Greg Kroah-Hartman 
Signed-off-by: Joel Fernandes (Google) 
---
 drivers/base/base.h  |  1 +
 drivers/base/core.c  | 10 ++
 drivers/base/power/runtime.c | 15 ++-
 3 files changed, 21 insertions(+), 5 deletions(-)

diff --git a/drivers/base/base.h b/drivers/base/base.h
index b405436ee28e..0d32544b6f91 100644
--- a/drivers/base/base.h
+++ b/drivers/base/base.h
@@ -165,6 +165,7 @@ static inline int devtmpfs_init(void) { return 0; }
 /* Device links support */
 extern int device_links_read_lock(void);
 extern void device_links_read_unlock(int idx);
+extern int device_links_read_lock_held(void);
 extern int device_links_check_suppliers(struct device *dev);
 extern void device_links_driver_bound(struct device *dev);
 extern void device_links_driver_cleanup(struct device *dev);
diff --git a/drivers/base/core.c b/drivers/base/core.c
index da84a73f2ba6..85e82f38717f 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -68,6 +68,11 @@ void device_links_read_unlock(int idx)
 {
srcu_read_unlock(&device_links_srcu, idx);
 }
+
+int device_links_read_lock_held(void)
+{
+   return srcu_read_lock_held(&device_links_srcu);
+}
 #else /* !CONFIG_SRCU */
 static DECLARE_RWSEM(device_links_lock);
 
@@ -91,6 +96,11 @@ void device_links_read_unlock(int not_used)
 {
up_read(&device_links_lock);
 }
+
+int device_links_read_lock_held(void)
+{
+   return lock_is_held(&device_links_lock);
+}
 #endif /* !CONFIG_SRCU */
 
 /**
diff --git a/drivers/base/power/runtime.c b/drivers/base/power/runtime.c
index 952a1e7057c7..7a10e8379a70 100644
--- a/drivers/base/power/runtime.c
+++ b/drivers/base/power/runtime.c
@@ -287,7 +287,8 @@ static int rpm_get_suppliers(struct device *dev)
 {
struct device_link *link;
 
-   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
+   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+   device_links_read_lock_held()) {
int retval;
 
if (!(link->flags & DL_FLAG_PM_RUNTIME) ||
@@ -309,7 +310,8 @@ static void rpm_put_suppliers(struct device *dev)
 {
struct device_link *link;
 
-   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node) {
+   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+   device_links_read_lock_held()) {
if (READ_ONCE(link->status) == DL_STATE_SUPPLIER_UNBIND)
continue;
 
@@ -1640,7 +1642,8 @@ void pm_runtime_clean_up_links(struct device *dev)
 
idx = device_links_read_lock();
 
-   list_for_each_entry_rcu(link, &dev->links.consumers, s_node) {
+   list_for_each_entry_rcu(link, &dev->links.consumers, s_node,
+   device_links_read_lock_held()) {
if (link->flags & DL_FLAG_STATELESS)
continue;
 
@@ -1662,7 +1665,8 @@ void pm_runtime_get_suppliers(struct device *dev)
 
idx = device_links_read_lock();
 
-   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+   device_links_read_lock_held())
if (link->flags & DL_FLAG_PM_RUNTIME) {
link->supplier_preactivated = true;
refcount_inc(&link->rpm_active);
@@ -1683,7 +1687,8 @@ void pm_runtime_put_suppliers(struct device *dev)
 
idx = device_links_read_lock();
 
-   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node)
+   list_for_each_entry_rcu(link, &dev->links.suppliers, c_node,
+   device_links_read_lock_held())
if (link->supplier_preactivated) {
link->supplier_preactivated = false;
if (refcount_dec_not_one(&link->rpm_active))
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 9/9] doc: Update documentation about list_for_each_entry_rcu (v1)

2019-07-15 Thread Joel Fernandes (Google)
This patch updates the documentation with information about
usage of lockdep with list_for_each_entry_rcu().

Signed-off-by: Joel Fernandes (Google) 
---
 Documentation/RCU/lockdep.txt   | 15 +++
 Documentation/RCU/whatisRCU.txt |  9 -
 2 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/Documentation/RCU/lockdep.txt b/Documentation/RCU/lockdep.txt
index da51d3068850..3d967df3a801 100644
--- a/Documentation/RCU/lockdep.txt
+++ b/Documentation/RCU/lockdep.txt
@@ -96,7 +96,14 @@ other flavors of rcu_dereference().  On the other hand, it 
is illegal
 to use rcu_dereference_protected() if either the RCU-protected pointer
 or the RCU-protected data that it points to can change concurrently.
 
-There are currently only "universal" versions of the rcu_assign_pointer()
-and RCU list-/tree-traversal primitives, which do not (yet) check for
-being in an RCU read-side critical section.  In the future, separate
-versions of these primitives might be created.
+Similar to rcu_dereference_protected(), the RCU list and hlist traversal
+primitives also check whether they are called from within a reader
+section. However, an optional lockdep expression can be passed to them as
+the last argument in case they are called under other, non-RCU protection.
+
+For example, the workqueue for_each_pwq() macro is implemented as follows.
+It is safe to call for_each_pwq() outside a reader section but under protection
+of wq->mutex:
+#define for_each_pwq(pwq, wq)
+   list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,
+   lock_is_held(&(wq->mutex).dep_map))
diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt
index 7e1a8721637a..00fe77ede1e2 100644
--- a/Documentation/RCU/whatisRCU.txt
+++ b/Documentation/RCU/whatisRCU.txt
@@ -290,7 +290,7 @@ rcu_dereference()
at any time, including immediately after the rcu_dereference().
And, again like rcu_assign_pointer(), rcu_dereference() is
typically used indirectly, via the _rcu list-manipulation
-   primitives, such as list_for_each_entry_rcu().
+   primitives, such as list_for_each_entry_rcu() [2].
 
[1] The variant rcu_dereference_protected() can be used outside
of an RCU read-side critical section as long as the usage is
@@ -305,6 +305,13 @@ rcu_dereference()
a lockdep splat is emitted.  See 
RCU/Design/Requirements/Requirements.html
and the API's code comments for more details and example usage.
 
+   [2] If the list_for_each_entry_rcu() primitive is intended
+   to be used outside of an RCU reader section, such as when
+   protected by a lock, then an additional lockdep expression can be
+   passed as the last argument to it so that the RCU lockdep checking
+   code knows that the dereference of the list pointers is safe. If the
+   indicated protection is not provided, a lockdep splat is emitted.
+
 The following diagram shows how each API communicates among the
 reader, updater, and reclaimer.
 
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 8/9] acpi: Use built-in RCU list checking for acpi_ioremaps list (v1)

2019-07-15 Thread Joel Fernandes (Google)
list_for_each_entry_rcu has built-in RCU and lock checking. Make use of
it for acpi_ioremaps list traversal.

Signed-off-by: Joel Fernandes (Google) 
---
 drivers/acpi/osl.c | 6 --
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
index 9c0edf2fc0dd..2f9d0d20b836 100644
--- a/drivers/acpi/osl.c
+++ b/drivers/acpi/osl.c
@@ -14,6 +14,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -80,6 +81,7 @@ struct acpi_ioremap {
 
 static LIST_HEAD(acpi_ioremaps);
 static DEFINE_MUTEX(acpi_ioremap_lock);
+#define acpi_ioremap_lock_held() lock_is_held(&acpi_ioremap_lock.dep_map)
 
 static void __init acpi_request_region (struct acpi_generic_address *gas,
unsigned int length, char *desc)
@@ -206,7 +208,7 @@ acpi_map_lookup(acpi_physical_address phys, acpi_size size)
 {
struct acpi_ioremap *map;
 
-   list_for_each_entry_rcu(map, &acpi_ioremaps, list)
+   list_for_each_entry_rcu(map, &acpi_ioremaps, list, 
acpi_ioremap_lock_held())
if (map->phys <= phys &&
phys + size <= map->phys + map->size)
return map;
@@ -249,7 +251,7 @@ acpi_map_lookup_virt(void __iomem *virt, acpi_size size)
 {
struct acpi_ioremap *map;
 
-   list_for_each_entry_rcu(map, &acpi_ioremaps, list)
+   list_for_each_entry_rcu(map, &acpi_ioremaps, list, 
acpi_ioremap_lock_held())
if (map->virt <= virt &&
virt + size <= map->virt + map->size)
return map;
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 6/9] workqueue: Convert for_each_wq to use built-in list check (v2)

2019-07-15 Thread Joel Fernandes (Google)
list_for_each_entry_rcu() now has support for checking RCU reader sections
as well as lock-based protection. Just use that support, instead of
explicitly checking in the caller.

Signed-off-by: Joel Fernandes (Google) 
---
 kernel/workqueue.c | 10 ++
 1 file changed, 2 insertions(+), 8 deletions(-)

diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 601d61150b65..e882477ebf6e 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -364,11 +364,6 @@ static void workqueue_sysfs_unregister(struct 
workqueue_struct *wq);
 !lockdep_is_held(&wq_pool_mutex),  \
 "RCU or wq_pool_mutex should be held")
 
-#define assert_rcu_or_wq_mutex(wq) \
-   RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&   \
-!lockdep_is_held(&wq->mutex),  \
-"RCU or wq->mutex should be held")
-
 #define assert_rcu_or_wq_mutex_or_pool_mutex(wq)   \
RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&   \
 !lockdep_is_held(&wq->mutex) &&\
@@ -425,9 +420,8 @@ static void workqueue_sysfs_unregister(struct 
workqueue_struct *wq);
  * ignored.
  */
 #define for_each_pwq(pwq, wq)  \
-   list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node)  \
-   if (({ assert_rcu_or_wq_mutex(wq); false; })) { }   \
-   else
+   list_for_each_entry_rcu((pwq), &(wq)->pwqs, pwqs_node,  \
+lock_is_held(&(wq->mutex).dep_map))
 
 #ifdef CONFIG_DEBUG_OBJECTS_WORK
 
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 4/9] ipv4: add lockdep condition to fix for_each_entry (v1)

2019-07-15 Thread Joel Fernandes (Google)
Using the support added earlier in this series, add a lockdep condition to
the list traversal here.

Signed-off-by: Joel Fernandes (Google) 
---
 net/ipv4/fib_frontend.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/net/ipv4/fib_frontend.c b/net/ipv4/fib_frontend.c
index 317339cd7f03..26b0fb24e2c2 100644
--- a/net/ipv4/fib_frontend.c
+++ b/net/ipv4/fib_frontend.c
@@ -124,7 +124,8 @@ struct fib_table *fib_get_table(struct net *net, u32 id)
h = id & (FIB_TABLE_HASHSZ - 1);
 
head = &net->ipv4.fib_table_hash[h];
-   hlist_for_each_entry_rcu(tb, head, tb_hlist) {
+   hlist_for_each_entry_rcu(tb, head, tb_hlist,
+lockdep_rtnl_is_held()) {
if (tb->tb_id == id)
return tb;
}
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 3/9] rcu/sync: Remove custom check for reader-section (v2)

2019-07-15 Thread Joel Fernandes (Google)
The rcu/sync code was doing its own check of whether we are in a reader
section. With RCU consolidating flavors and the generic helper added in
this series, this is no longer needed. We can just use the generic helper,
which results in a nice cleanup.

Cc: Oleg Nesterov 
Acked-by: Oleg Nesterov 
Signed-off-by: Joel Fernandes (Google) 
---
 include/linux/rcu_sync.h | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/include/linux/rcu_sync.h b/include/linux/rcu_sync.h
index 9b83865d24f9..0027d4c8087c 100644
--- a/include/linux/rcu_sync.h
+++ b/include/linux/rcu_sync.h
@@ -31,9 +31,7 @@ struct rcu_sync {
  */
 static inline bool rcu_sync_is_idle(struct rcu_sync *rsp)
 {
-   RCU_LOCKDEP_WARN(!rcu_read_lock_held() &&
-!rcu_read_lock_bh_held() &&
-!rcu_read_lock_sched_held(),
+   RCU_LOCKDEP_WARN(!rcu_read_lock_any_held(),
 "suspicious rcu_sync_is_idle() usage");
return !READ_ONCE(rsp->gp_state); /* GP_IDLE */
 }
-- 
2.22.0.510.g264f2c817a-goog



[PATCH 2/9] rcu: Add support for consolidated-RCU reader checking (v3)

2019-07-15 Thread Joel Fernandes (Google)
This patch adds support for checking RCU reader sections in list
traversal macros. Optionally, if the list macro is called under SRCU or
other lock/mutex protection, then appropriate lockdep expressions can be
passed to make the checks pass.

Existing list_for_each_entry_rcu() invocations don't need to pass the
optional fourth argument (cond) unless they are under some non-RCU
protection and need to make the lockdep check pass.
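
As a usage sketch (all names below are made up for illustration, not part of
this patch): a list that is sometimes traversed under rcu_read_lock() and
sometimes under a mutex can pass the mutex's lockdep expression as the new
optional argument:

	#include <linux/mutex.h>
	#include <linux/rculist.h>

	struct foo {
		int val;
		struct list_head list;
	};

	static LIST_HEAD(foo_list);
	static DEFINE_MUTEX(foo_mutex);	/* held by updaters and some readers */

	/* Caller holds either rcu_read_lock() or foo_mutex. */
	static struct foo *foo_find(int val)
	{
		struct foo *f;

		list_for_each_entry_rcu(f, &foo_list, list,
					lockdep_is_held(&foo_mutex)) {
			if (f->val == val)
				return f;
		}
		return NULL;
	}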

Signed-off-by: Joel Fernandes (Google) 
---
 include/linux/rculist.h  | 28 -
 include/linux/rcupdate.h |  7 +++
 kernel/rcu/Kconfig.debug | 11 ++
 kernel/rcu/update.c  | 44 
 4 files changed, 67 insertions(+), 23 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index e91ec9ddcd30..1048160625bb 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -40,6 +40,20 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head *list)
  */
 #define list_next_rcu(list)(*((struct list_head __rcu **)(&(list)->next)))
 
+/*
+ * Check during list traversal that we are within an RCU reader
+ */
+
+#ifdef CONFIG_PROVE_RCU_LIST
+#define __list_check_rcu(dummy, cond, ...) \
+   ({  \
+   RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(),\
+"RCU-list traversed in non-reader section!");  \
+})
+#else
+#define __list_check_rcu(dummy, cond, ...) ({})
+#endif
+
 /*
  * Insert a new entry between two known consecutive entries.
  *
@@ -343,14 +357,16 @@ static inline void list_splice_tail_init_rcu(struct 
list_head *list,
  * @pos:   the type * to use as a loop cursor.
  * @head:  the head for your list.
  * @member:the name of the list_head within the struct.
+ * @cond:  optional lockdep expression if called from non-RCU protection.
  *
  * This list-traversal primitive may safely run concurrently with
  * the _rcu list-mutation primitives such as list_add_rcu()
  * as long as the traversal is guarded by rcu_read_lock().
  */
-#define list_for_each_entry_rcu(pos, head, member) \
-   for (pos = list_entry_rcu((head)->next, typeof(*pos), member); \
-   &pos->member != (head); \
+#define list_for_each_entry_rcu(pos, head, member, cond...)\
+   for (__list_check_rcu(dummy, ## cond, 0),   \
+pos = list_entry_rcu((head)->next, typeof(*pos), member);  \
+   &pos->member != (head); \
pos = list_entry_rcu(pos->member.next, typeof(*pos), member))
 
 /**
@@ -616,13 +632,15 @@ static inline void hlist_add_behind_rcu(struct hlist_node 
*n,
  * @pos:   the type * to use as a loop cursor.
  * @head:  the head for your list.
  * @member:the name of the hlist_node within the struct.
+ * @cond:  optional lockdep expression if called from non-RCU protection.
  *
  * This list-traversal primitive may safely run concurrently with
  * the _rcu list-mutation primitives such as hlist_add_head_rcu()
  * as long as the traversal is guarded by rcu_read_lock().
  */
-#define hlist_for_each_entry_rcu(pos, head, member)\
-   for (pos = hlist_entry_safe 
(rcu_dereference_raw(hlist_first_rcu(head)),\
+#define hlist_for_each_entry_rcu(pos, head, member, cond...)   \
+   for (__list_check_rcu(dummy, ## cond, 0),   \
+pos = hlist_entry_safe 
(rcu_dereference_raw(hlist_first_rcu(head)),\
typeof(*(pos)), member);\
pos;\
pos = hlist_entry_safe(rcu_dereference_raw(hlist_next_rcu(\
diff --git a/include/linux/rcupdate.h b/include/linux/rcupdate.h
index 8f7167478c1d..f3c29efdf19a 100644
--- a/include/linux/rcupdate.h
+++ b/include/linux/rcupdate.h
@@ -221,6 +221,7 @@ int debug_lockdep_rcu_enabled(void);
 int rcu_read_lock_held(void);
 int rcu_read_lock_bh_held(void);
 int rcu_read_lock_sched_held(void);
+int rcu_read_lock_any_held(void);
 
 #else /* #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
@@ -241,6 +242,12 @@ static inline int rcu_read_lock_sched_held(void)
 {
return !preemptible();
 }
+
+static inline int rcu_read_lock_any_held(void)
+{
+   return !preemptible();
+}
+
 #endif /* #else #ifdef CONFIG_DEBUG_LOCK_ALLOC */
 
 #ifdef CONFIG_PROVE_RCU
diff --git a/kernel/rcu/Kconfig.debug b/kernel/rcu/Kconfig.debug
index 5ec3ea4028e2..7fbd21dbfcd0 100644
--- a/kernel/rcu/Kconfig.debug
+++ b/kernel/rcu/Kconfig.debug
@@ -8,6 +8,17 @@ menu "RCU Debugging"
 config PROVE_RCU
def_bool PROVE_LOCKING
 
+config PROVE_RCU_LIST
+   bool "RCU list lockdep debugging"
+   depends on PROVE_RCU
+   default n
+   help
+ Enable RCU lockdep checking for list usages. By default it is
+ turned off since there are seve

[PATCH 0/9] Harden list_for_each_entry_rcu() and family

2019-07-15 Thread Joel Fernandes (Google)
Hi,
This series aims to provide lockdep checking to RCU list macros for additional
kernel hardening.

RCU has a number of primitives for "consumption" of an RCU-protected pointer.
Most of the time, these consumers make sure that such accesses are either
under an RCU reader section (such as rcu_dereference{,_sched,_bh}()) or under
a lock (such as with rcu_dereference_protected()).

However, there are other ways to consume RCU pointers, such as
list_for_each_entry_rcu() or hlist_for_each_entry_rcu(). Unlike the
rcu_dereference family, these consumers do no lockdep checking at all. And
with the growing number of RCU list uses (1000+), it is possible for bugs
that lockdep checks could catch to creep in and go unnoticed.

Since the RCU consolidation efforts last year, the different traditional RCU
flavors (preempt, bh, sched) are all consolidated. In other words, any of
these flavors can cause a reader section to occur, and all of them must cease
before the reader section is considered to be unlocked. Thanks to this, we
can generically check if we are in an RCU reader. This is what patch 1 does.
Note that list_for_each_entry_rcu() and family are different from the
rcu_dereference family in that there is no _bh or _sched version of these
macros. They are used under many different RCU reader flavors, and also SRCU.
Patch 1 adds a new internal function rcu_read_lock_any_held() which checks
whether any reader section is active at all when these macros are called (a
rough sketch of such a check follows the example diff below). If no reader
section exists, then the optional fourth argument to
list_for_each_entry_rcu() can be a lockdep expression which is evaluated
(similar to how rcu_dereference_check() works). If no lockdep expression is
passed, and we are not in a reader, then a splat occurs. Just take off the
lockdep expression after applying the patches, by using the following diff,
and see what happens:

+++ b/arch/x86/pci/mmconfig-shared.c
@@ -55,7 +55,7 @@ static void list_add_sorted(struct pci_mmcfg_region *new)
struct pci_mmcfg_region *cfg;

/* keep list sorted by segment and starting bus number */
-   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, 
pci_mmcfg_lock_held()) {
+   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
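
The generic reader check mentioned above might look roughly like the sketch
below (illustrative only, and not necessarily the exact code in this series;
it ORs together the per-flavor lockdep maps and falls back to checking
preemptibility):

	int rcu_read_lock_any_held(void)
	{
		if (!debug_lockdep_rcu_enabled())
			return 1;
		if (!rcu_is_watching())
			return 0;
		if (!rcu_lockdep_current_cpu_online())
			return 0;
		if (lock_is_held(&rcu_lock_map) ||
		    lock_is_held(&rcu_bh_lock_map) ||
		    lock_is_held(&rcu_sched_lock_map))
			return 1;
		return !preemptible();
	}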


The optional-argument trick to list_for_each_entry_rcu() can also be used in
the future to possibly remove the rcu_dereference_{,bh,sched}_protected()
APIs: we could pass an optional lockdep expression to rcu_dereference()
itself, thus eliminating 3 more RCU APIs.

Note that some list macro wrappers already do their own lockdep checking on
the caller side. These can be eliminated in favor of the built-in lockdep
checking in the list macro that this series adds. For example, workqueue code
has an assert_rcu_or_wq_mutex() function which is called in for_each_wq().
This series replaces that in favor of the built-in check.

Also in the future, we can extend these checks to list_entry_rcu() and other
list macros as well, if needed.

Please note that I have kept this option default-disabled under a new config:
CONFIG_PROVE_RCU_LIST. This is so that, until all users are converted to pass
the optional argument, the check stays disabled. There are about 1000 or so
users, and it is not possible to pass in the optional lockdep expression in a
single series since it has to be done on a case-by-case basis. I did convert
a few users in this series itself.

v2->v3: Simplified rcu-sync logic after rebase (Paul)
Added check for bh_map (Paul)
Refactored out more of the common code (Joel)
Added Oleg ack to rcu-sync patch.

v1->v2: Have assert_rcu_or_wq_mutex deleted (Daniel Jordan)
Simplify rcu_read_lock_any_held()   (Peter Zijlstra)
Simplified rcu-sync logic   (Oleg Nesterov)
Updated documentation and rculist comments.
Added GregKH ack.

RFC->v1: 
Simplify list checking macro (Rasmus Villemoes)

Joel Fernandes (Google) (9):
rcu/update: Remove useless check for debug_locks (v1)
rcu: Add support for consolidated-RCU reader checking (v3)
rcu/sync: Remove custom check for reader-section (v2)
ipv4: add lockdep condition to fix for_each_entry (v1)
driver/core: Convert to use built-in RCU list checking (v1)
workqueue: Convert for_each_wq to use built-in list check (v2)
x86/pci: Pass lockdep condition to pci_mmcfg_list iterator (v1)
acpi: Use built-in RCU list checking for acpi_ioremaps list (v1)
doc: Update documentation about list_for_each_entry_rcu (v1)

Documentation/RCU/lockdep.txt   | 15 ---
Documentation/RCU/whatisRCU.txt |  9 ++-
arch/x86/pci/mmconfig-shared.c  |  5 ++--
drivers/acpi/osl.c  |  6 +++--
drivers/base/base.h |  1 +
drivers/base/core.c | 10 +++
drivers/base/power/runtime.c| 15 +++
include/linux/rcu_sync.h|  4 +--
include/linux/rculist.h | 28 +++
include/linux/rcupdate.h|  7 +
kernel/rcu/Kconfig.debug| 11 
kernel/rcu/update.c | 48 

[PATCH 1/9] rcu/update: Remove useless check for debug_locks (v1)

2019-07-15 Thread Joel Fernandes (Google)
In rcu_read_lock_sched_held(), debug_locks is always true by the time we
check it, because debug_lockdep_rcu_enabled() already checked debug_locks at
the beginning and we returned early if it was false. Remove the redundant
check.
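
For context, debug_lockdep_rcu_enabled() already folds debug_locks into its
return value, which is why the later check cannot change the outcome. At the
time of this series the helper looks roughly like the following (shown only as
a sketch for context; it is not part of this patch):

int debug_lockdep_rcu_enabled(void)
{
	return rcu_scheduler_active != RCU_SCHEDULER_INACTIVE &&
	       debug_locks && current->lockdep_recursion == 0;
}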

Signed-off-by: Joel Fernandes (Google) 
---
 kernel/rcu/update.c | 6 +-
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/kernel/rcu/update.c b/kernel/rcu/update.c
index 61df2bf08563..9dd5aeef6e70 100644
--- a/kernel/rcu/update.c
+++ b/kernel/rcu/update.c
@@ -93,17 +93,13 @@ module_param(rcu_normal_after_boot, int, 0);
  */
 int rcu_read_lock_sched_held(void)
 {
-   int lockdep_opinion = 0;
-
if (!debug_lockdep_rcu_enabled())
return 1;
if (!rcu_is_watching())
return 0;
if (!rcu_lockdep_current_cpu_online())
return 0;
-   if (debug_locks)
-   lockdep_opinion = lock_is_held(&rcu_sched_lock_map);
-   return lockdep_opinion || !preemptible();
+   return lock_is_held(&rcu_sched_lock_map) || !preemptible();
 }
 EXPORT_SYMBOL(rcu_read_lock_sched_held);
 #endif
-- 
2.22.0.510.g264f2c817a-goog



[PATCH AUTOSEL 4.19 075/158] sched/fair: Fix "runnable_avg_yN_inv" not used warnings

2019-07-15 Thread Sasha Levin
From: Qian Cai 

[ Upstream commit 509466b7d480bc5d22e90b9fbe6122ae0e2fbe39 ]

runnable_avg_yN_inv[] is only used in kernel/sched/pelt.c but was
included in several other places because they need other macros that all
come from kernel/sched/sched-pelt.h, which was generated by
Documentation/scheduler/sched-pelt. As a result, it causes a lot of
compilation warnings:

  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  ...

Silence it by appending the __maybe_unused attribute for it, so all
generated variables and macros can still be kept in the same file.

Signed-off-by: Qian Cai 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: https://lkml.kernel.org/r/1559596304-31581-1-git-send-email-...@lca.pw
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/scheduler/sched-pelt.c | 3 ++-
 kernel/sched/sched-pelt.h| 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/scheduler/sched-pelt.c 
b/Documentation/scheduler/sched-pelt.c
index e4219139386a..7238b355919c 100644
--- a/Documentation/scheduler/sched-pelt.c
+++ b/Documentation/scheduler/sched-pelt.c
@@ -20,7 +20,8 @@ void calc_runnable_avg_yN_inv(void)
int i;
unsigned int x;
 
-   printf("static const u32 runnable_avg_yN_inv[] = {");
+   /* To silence -Wunused-but-set-variable warnings. */
+   printf("static const u32 runnable_avg_yN_inv[] __maybe_unused = {");
for (i = 0; i < HALFLIFE; i++) {
x = ((1UL<<32)-1)*pow(y, i);
 
diff --git a/kernel/sched/sched-pelt.h b/kernel/sched/sched-pelt.h
index a26473674fb7..c529706bed11 100644
--- a/kernel/sched/sched-pelt.h
+++ b/kernel/sched/sched-pelt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /* Generated by Documentation/scheduler/sched-pelt; do not modify. */
 
-static const u32 runnable_avg_yN_inv[] = {
+static const u32 runnable_avg_yN_inv[] __maybe_unused = {
0x, 0xfa83b2da, 0xf5257d14, 0xefe4b99a, 0xeac0c6e6, 0xe5b906e6,
0xe0ccdeeb, 0xdbfbb796, 0xd744fcc9, 0xd2a81d91, 0xce248c14, 0xc9b9bd85,
0xc5672a10, 0xc12c4cc9, 0xbd08a39e, 0xb8fbaf46, 0xb504f333, 0xb123f581,
-- 
2.20.1



[PATCH AUTOSEL 5.1 099/219] sched/fair: Fix "runnable_avg_yN_inv" not used warnings

2019-07-15 Thread Sasha Levin
From: Qian Cai 

[ Upstream commit 509466b7d480bc5d22e90b9fbe6122ae0e2fbe39 ]

runnable_avg_yN_inv[] is only used in kernel/sched/pelt.c but was
included in several other places because they need other macros that all
come from kernel/sched/sched-pelt.h, which was generated by
Documentation/scheduler/sched-pelt. As a result, it causes a lot of
compilation warnings:

  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  kernel/sched/sched-pelt.h:4:18: warning: 'runnable_avg_yN_inv' defined but 
not used [-Wunused-const-variable=]
  ...

Silence it by appending the __maybe_unused attribute for it, so all
generated variables and macros can still be kept in the same file.

Signed-off-by: Qian Cai 
Signed-off-by: Peter Zijlstra (Intel) 
Cc: Linus Torvalds 
Cc: Peter Zijlstra 
Cc: Thomas Gleixner 
Link: https://lkml.kernel.org/r/1559596304-31581-1-git-send-email-...@lca.pw
Signed-off-by: Ingo Molnar 
Signed-off-by: Sasha Levin 
---
 Documentation/scheduler/sched-pelt.c | 3 ++-
 kernel/sched/sched-pelt.h| 2 +-
 2 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Documentation/scheduler/sched-pelt.c 
b/Documentation/scheduler/sched-pelt.c
index e4219139386a..7238b355919c 100644
--- a/Documentation/scheduler/sched-pelt.c
+++ b/Documentation/scheduler/sched-pelt.c
@@ -20,7 +20,8 @@ void calc_runnable_avg_yN_inv(void)
int i;
unsigned int x;
 
-   printf("static const u32 runnable_avg_yN_inv[] = {");
+   /* To silence -Wunused-but-set-variable warnings. */
+   printf("static const u32 runnable_avg_yN_inv[] __maybe_unused = {");
for (i = 0; i < HALFLIFE; i++) {
x = ((1UL<<32)-1)*pow(y, i);
 
diff --git a/kernel/sched/sched-pelt.h b/kernel/sched/sched-pelt.h
index a26473674fb7..c529706bed11 100644
--- a/kernel/sched/sched-pelt.h
+++ b/kernel/sched/sched-pelt.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: GPL-2.0 */
 /* Generated by Documentation/scheduler/sched-pelt; do not modify. */
 
-static const u32 runnable_avg_yN_inv[] = {
+static const u32 runnable_avg_yN_inv[] __maybe_unused = {
0x, 0xfa83b2da, 0xf5257d14, 0xefe4b99a, 0xeac0c6e6, 0xe5b906e6,
0xe0ccdeeb, 0xdbfbb796, 0xd744fcc9, 0xd2a81d91, 0xce248c14, 0xc9b9bd85,
0xc5672a10, 0xc12c4cc9, 0xbd08a39e, 0xb8fbaf46, 0xb504f333, 0xb123f581,
-- 
2.20.1



Re: [PATCH v8] Documentation: filesystem: Convert xfs.txt to ReST

2019-07-15 Thread Matthew Wilcox
On Sun, Jul 14, 2019 at 01:58:31PM +0100, Sheriff Esseson wrote:
> Move xfs.txt to admin-guide, convert xfs.txt to ReST and broken references
> 
> Signed-off-by: Sheriff Esseson 

Reviewed-by: Matthew Wilcox (Oracle) 


Re: [PATCH v5 1/1] sched/fair: Fix low cpu usage with high throttling by removing expiration of cpu-local slices

2019-07-15 Thread Dave Chiluk
On Fri, Jul 12, 2019 at 5:10 PM  wrote:
> Ugh. Maybe we /do/ just give up and say that most people don't seem to
> be using cfs_b in a way that expiration of the leftover 1ms matters.

That was my conclusion as well.  Does this mean you want to proceed
with my patch set?  Do you have any changes you want made to the
proposed documentation changes, or any other changes for that matter?


Re: [PATCH v8] Documentation: filesystem: Convert xfs.txt to ReST

2019-07-15 Thread Darrick J. Wong
On Sun, Jul 14, 2019 at 01:58:31PM +0100, Sheriff Esseson wrote:
> Move xfs.txt to admin-guide, convert xfs.txt to ReST and broken references
> 
> Signed-off-by: Sheriff Esseson 

Looks ok, will pull through the XFS tree.  Thanks for the submission!
Reviewed-by: Darrick J. Wong 

--D

> ---
> 
> changes in v8:
>   - fix table of Deprecated and Removed options.
> 
>  Documentation/admin-guide/index.rst   |   1 +
>  .../xfs.txt => admin-guide/xfs.rst}   | 132 +-
>  Documentation/filesystems/dax.txt |   2 +-
>  MAINTAINERS   |   2 +-
>  4 files changed, 67 insertions(+), 70 deletions(-)
>  rename Documentation/{filesystems/xfs.txt => admin-guide/xfs.rst} (80%)
> 
> diff --git a/Documentation/admin-guide/index.rst 
> b/Documentation/admin-guide/index.rst
> index 24fbe0568eff..0615ea3a744c 100644
> --- a/Documentation/admin-guide/index.rst
> +++ b/Documentation/admin-guide/index.rst
> @@ -70,6 +70,7 @@ configure specific aspects of kernel behavior to your 
> liking.
> ras
> bcache
> ext4
> +   xfs
> binderfs
> pm/index
> thunderbolt
> diff --git a/Documentation/filesystems/xfs.txt 
> b/Documentation/admin-guide/xfs.rst
> similarity index 80%
> rename from Documentation/filesystems/xfs.txt
> rename to Documentation/admin-guide/xfs.rst
> index a5cbb5e0e3db..e76665a8f2f2 100644
> --- a/Documentation/filesystems/xfs.txt
> +++ b/Documentation/admin-guide/xfs.rst
> @@ -1,4 +1,6 @@
> +.. SPDX-License-Identifier: GPL-2.0
>  
> +==
>  The SGI XFS Filesystem
>  ==
>  
> @@ -18,8 +20,6 @@ Mount Options
>  =
>  
>  When mounting an XFS filesystem, the following options are accepted.
> -For boolean mount options, the names with the (*) suffix is the
> -default behaviour.
>  
>allocsize=size
>   Sets the buffered I/O end-of-file preallocation size when
> @@ -31,46 +31,43 @@ default behaviour.
>   preallocation size, which uses a set of heuristics to
>   optimise the preallocation size based on the current
>   allocation patterns within the file and the access patterns
> - to the file. Specifying a fixed allocsize value turns off
> + to the file. Specifying a fixed ``allocsize`` value turns off
>   the dynamic behaviour.
>  
> -  attr2
> -  noattr2
> +  attr2 or noattr2
>   The options enable/disable an "opportunistic" improvement to
>   be made in the way inline extended attributes are stored
>   on-disk.  When the new form is used for the first time when
> - attr2 is selected (either when setting or removing extended
> + ``attr2`` is selected (either when setting or removing extended
>   attributes) the on-disk superblock feature bit field will be
>   updated to reflect this format being in use.
>  
>   The default behaviour is determined by the on-disk feature
> - bit indicating that attr2 behaviour is active. If either
> - mount option it set, then that becomes the new default used
> + bit indicating that ``attr2`` behaviour is active. If either
> + mount option is set, then that becomes the new default used
>   by the filesystem.
>  
> - CRC enabled filesystems always use the attr2 format, and so
> - will reject the noattr2 mount option if it is set.
> + CRC enabled filesystems always use the ``attr2`` format, and so
> + will reject the ``noattr2`` mount option if it is set.
>  
> -  discard
> -  nodiscard (*)
> +  discard or nodiscard (default)
>   Enable/disable the issuing of commands to let the block
>   device reclaim space freed by the filesystem.  This is
>   useful for SSD devices, thinly provisioned LUNs and virtual
>   machine images, but may have a performance impact.
>  
> - Note: It is currently recommended that you use the fstrim
> - application to discard unused blocks rather than the discard
> + Note: It is currently recommended that you use the ``fstrim``
> + application to ``discard`` unused blocks rather than the ``discard``
>   mount option because the performance impact of this option
>   is quite severe.
>  
> -  grpid/bsdgroups
> -  nogrpid/sysvgroups (*)
> +  grpid/bsdgroups or nogrpid/sysvgroups (default)
>   These options define what group ID a newly created file
> - gets.  When grpid is set, it takes the group ID of the
> + gets.  When ``grpid`` is set, it takes the group ID of the
>   directory in which it is created; otherwise it takes the
> - fsgid of the current process, unless the directory has the
> - setgid bit set, in which case it takes the gid from the
> - parent directory, and also gets the setgid bit set if it is
> + ``fsgid`` of the current process, unless the directory has the
> + ``setgid`` bit set, in which case it takes the ``gid`` from the
> + parent directory, and also gets the ``setgid`` bit set if it is
>   a directory itself.
>  
>file

Re: [PATCH 7/9] x86/pci: Pass lockdep condition to pcm_mmcfg_list iterator (v1)

2019-07-15 Thread Bjorn Helgaas
On Mon, Jul 15, 2019 at 10:37:03AM -0400, Joel Fernandes (Google) wrote:
> The pcm_mmcfg_list is traversed with list_for_each_entry_rcu without a
> reader-lock held, because the pci_mmcfg_lock is already held. Make this
> known to the list macro so that it fixes new lockdep warnings that
> trigger due to lockdep checks added to list_for_each_entry_rcu().
> 
> Signed-off-by: Joel Fernandes (Google) 

Ingo takes care of most patches to this file, but FWIW,

Acked-by: Bjorn Helgaas 

I would personally prefer if you capitalized the subject to match the
"x86/PCI:" convention that's used fairly consistently in
arch/x86/pci/.

Also, I didn't apply this to be sure, but it looks like this might
make a line or two wider than 80 columns, which I would rewrap if I
were applying this.

> ---
>  arch/x86/pci/mmconfig-shared.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c
> index 7389db538c30..6fa42e9c4e6f 100644
> --- a/arch/x86/pci/mmconfig-shared.c
> +++ b/arch/x86/pci/mmconfig-shared.c
> @@ -29,6 +29,7 @@
>  static bool pci_mmcfg_running_state;
>  static bool pci_mmcfg_arch_init_failed;
>  static DEFINE_MUTEX(pci_mmcfg_lock);
> +#define pci_mmcfg_lock_held() lock_is_held(&(pci_mmcfg_lock).dep_map)
>  
>  LIST_HEAD(pci_mmcfg_list);
>  
> @@ -54,7 +55,7 @@ static void list_add_sorted(struct pci_mmcfg_region *new)
>   struct pci_mmcfg_region *cfg;
>  
>   /* keep list sorted by segment and starting bus number */
> - list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
> + list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, 
> pci_mmcfg_lock_held()) {
>   if (cfg->segment > new->segment ||
>   (cfg->segment == new->segment &&
>cfg->start_bus >= new->start_bus)) {
> @@ -118,7 +119,7 @@ struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, 
> int bus)
>  {
>   struct pci_mmcfg_region *cfg;
>  
> - list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
> + list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list, 
> pci_mmcfg_lock_held())
>   if (cfg->segment == segment &&
>   cfg->start_bus <= bus && bus <= cfg->end_bus)
>   return cfg;
> -- 
> 2.22.0.510.g264f2c817a-goog
> 


Re: [PATCH 8/9] acpi: Use built-in RCU list checking for acpi_ioremaps list (v1)

2019-07-15 Thread Rafael J. Wysocki
On Mon, Jul 15, 2019 at 4:43 PM Joel Fernandes (Google)
 wrote:
>
> list_for_each_entry_rcu has built-in RCU and lock checking. Make use of
> it for acpi_ioremaps list traversal.
>
> Signed-off-by: Joel Fernandes (Google) 

Acked-by: Rafael J. Wysocki 

> ---
>  drivers/acpi/osl.c | 6 --
>  1 file changed, 4 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/acpi/osl.c b/drivers/acpi/osl.c
> index 9c0edf2fc0dd..2f9d0d20b836 100644
> --- a/drivers/acpi/osl.c
> +++ b/drivers/acpi/osl.c
> @@ -14,6 +14,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
>  #include 
>  #include 
>  #include 
> @@ -80,6 +81,7 @@ struct acpi_ioremap {
>
>  static LIST_HEAD(acpi_ioremaps);
>  static DEFINE_MUTEX(acpi_ioremap_lock);
> +#define acpi_ioremap_lock_held() lock_is_held(&acpi_ioremap_lock.dep_map)
>
>  static void __init acpi_request_region (struct acpi_generic_address *gas,
> unsigned int length, char *desc)
> @@ -206,7 +208,7 @@ acpi_map_lookup(acpi_physical_address phys, acpi_size 
> size)
>  {
> struct acpi_ioremap *map;
>
> -   list_for_each_entry_rcu(map, &acpi_ioremaps, list)
> +   list_for_each_entry_rcu(map, &acpi_ioremaps, list, 
> acpi_ioremap_lock_held())
> if (map->phys <= phys &&
> phys + size <= map->phys + map->size)
> return map;
> @@ -249,7 +251,7 @@ acpi_map_lookup_virt(void __iomem *virt, acpi_size size)
>  {
> struct acpi_ioremap *map;
>
> -   list_for_each_entry_rcu(map, &acpi_ioremaps, list)
> +   list_for_each_entry_rcu(map, &acpi_ioremaps, list, 
> acpi_ioremap_lock_held())
> if (map->virt <= virt &&
> virt + size <= map->virt + map->size)
> return map;
> --
> 2.22.0.510.g264f2c817a-goog
>


Re: [PATCH v8] Documentation: filesystem: Convert xfs.txt to ReST

2019-07-15 Thread Sheriff Esseson
On Sun, Jul 14, 2019 at 01:58:31PM +0100, Sheriff Esseson wrote:
> Move xfs.txt to admin-guide, convert xfs.txt to ReST and broken references
> 
> Signed-off-by: Sheriff Esseson 

>Reviewed-by: Matthew Wilcox (Oracle) 

Sorry, I missed something. Will fix in v9.


Re: [PATCH v8] Documentation: filesystem: Convert xfs.txt to ReST

2019-07-15 Thread Sheriff Esseson
On Sun, Jul 14, 2019 at 01:58:31PM +0100, Sheriff Esseson wrote:
>> Move xfs.txt to admin-guide, convert xfs.txt to ReST and broken references
>> 
>> Signed-off-by: Sheriff Esseson 

>Looks ok, will pull through the XFS tree.  Thanks for the submission!
>Reviewed-by: Darrick J. Wong 

>--D

Sorry, missed another table. Fix in v9.


Re: [PATCH v9] Documentation: filesystem: Convert xfs.txt to ReST

2019-07-15 Thread Sheriff Esseson
Move xfs.txt to admin-guide, convert to ReST and fix broken references.

Signed-off-by: Sheriff Esseson 
---

Changes in v9:
- fix table for "Removed Sysctls".
- "Deprecated Mount Options", just like "Deprecated Sysctls",
  currently needs no table - remove table.  

 Documentation/admin-guide/index.rst   |   1 +
 .../xfs.txt => admin-guide/xfs.rst}   | 136 +-
 Documentation/filesystems/dax.txt |   2 +-
 MAINTAINERS   |   2 +-
 4 files changed, 68 insertions(+), 73 deletions(-)
 rename Documentation/{filesystems/xfs.txt => admin-guide/xfs.rst} (80%)

diff --git a/Documentation/admin-guide/index.rst 
b/Documentation/admin-guide/index.rst
index 24fbe0568eff..0615ea3a744c 100644
--- a/Documentation/admin-guide/index.rst
+++ b/Documentation/admin-guide/index.rst
@@ -70,6 +70,7 @@ configure specific aspects of kernel behavior to your liking.
ras
bcache
ext4
+   xfs
binderfs
pm/index
thunderbolt
diff --git a/Documentation/filesystems/xfs.txt 
b/Documentation/admin-guide/xfs.rst
similarity index 80%
rename from Documentation/filesystems/xfs.txt
rename to Documentation/admin-guide/xfs.rst
index a5cbb5e0e3db..e1b412a3dd29 100644
--- a/Documentation/filesystems/xfs.txt
+++ b/Documentation/admin-guide/xfs.rst
@@ -1,4 +1,6 @@
+.. SPDX-License-Identifier: GPL-2.0
 
+==
 The SGI XFS Filesystem
 ==
 
@@ -18,8 +20,6 @@ Mount Options
 =
 
 When mounting an XFS filesystem, the following options are accepted.
-For boolean mount options, the names with the (*) suffix is the
-default behaviour.
 
   allocsize=size
Sets the buffered I/O end-of-file preallocation size when
@@ -31,46 +31,43 @@ default behaviour.
preallocation size, which uses a set of heuristics to
optimise the preallocation size based on the current
allocation patterns within the file and the access patterns
-   to the file. Specifying a fixed allocsize value turns off
+   to the file. Specifying a fixed ``allocsize`` value turns off
the dynamic behaviour.
 
-  attr2
-  noattr2
+  attr2 or noattr2
The options enable/disable an "opportunistic" improvement to
be made in the way inline extended attributes are stored
on-disk.  When the new form is used for the first time when
-   attr2 is selected (either when setting or removing extended
+   ``attr2`` is selected (either when setting or removing extended
attributes) the on-disk superblock feature bit field will be
updated to reflect this format being in use.
 
The default behaviour is determined by the on-disk feature
-   bit indicating that attr2 behaviour is active. If either
-   mount option it set, then that becomes the new default used
+   bit indicating that ``attr2`` behaviour is active. If either
+   mount option is set, then that becomes the new default used
by the filesystem.
 
-   CRC enabled filesystems always use the attr2 format, and so
-   will reject the noattr2 mount option if it is set.
+   CRC enabled filesystems always use the ``attr2`` format, and so
+   will reject the ``noattr2`` mount option if it is set.
 
-  discard
-  nodiscard (*)
+  discard or nodiscard (default)
Enable/disable the issuing of commands to let the block
device reclaim space freed by the filesystem.  This is
useful for SSD devices, thinly provisioned LUNs and virtual
machine images, but may have a performance impact.
 
-   Note: It is currently recommended that you use the fstrim
-   application to discard unused blocks rather than the discard
+   Note: It is currently recommended that you use the ``fstrim``
+   application to ``discard`` unused blocks rather than the ``discard``
mount option because the performance impact of this option
is quite severe.
 
-  grpid/bsdgroups
-  nogrpid/sysvgroups (*)
+  grpid/bsdgroups or nogrpid/sysvgroups (default)
These options define what group ID a newly created file
-   gets.  When grpid is set, it takes the group ID of the
+   gets.  When ``grpid`` is set, it takes the group ID of the
directory in which it is created; otherwise it takes the
-   fsgid of the current process, unless the directory has the
-   setgid bit set, in which case it takes the gid from the
-   parent directory, and also gets the setgid bit set if it is
+   ``fsgid`` of the current process, unless the directory has the
+   ``setgid`` bit set, in which case it takes the ``gid`` from the
+   parent directory, and also gets the ``setgid`` bit set if it is
a directory itself.
 
   filestreams
@@ -78,46 +75,42 @@ default behaviour.
across the entire filesystem rather than just on directories
configured to use it.
 
-  ikeep
-  noikeep (*)
-   When ikeep is speci

Re: [PATCH 7/9] x86/pci: Pass lockdep condition to pcm_mmcfg_list iterator (v1)

2019-07-15 Thread Joel Fernandes
On Mon, Jul 15, 2019 at 03:02:35PM -0500, Bjorn Helgaas wrote:
> On Mon, Jul 15, 2019 at 10:37:03AM -0400, Joel Fernandes (Google) wrote:
> > The pcm_mmcfg_list is traversed with list_for_each_entry_rcu without a
> > reader-lock held, because the pci_mmcfg_lock is already held. Make this
> > known to the list macro so that it fixes new lockdep warnings that
> > trigger due to lockdep checks added to list_for_each_entry_rcu().
> > 
> > Signed-off-by: Joel Fernandes (Google) 
> 
> Ingo takes care of most patches to this file, but FWIW,
> 
> Acked-by: Bjorn Helgaas 

Thanks.

> I would personally prefer if you capitalized the subject to match the
> "x86/PCI:" convention that's used fairly consistently in
> arch/x86/pci/.
> 
> Also, I didn't apply this to be sure, but it looks like this might
> make a line or two wider than 80 columns, which I would rewrap if I
> were applying this.

Updated below is the patch with the nits corrected:

---8<---

>From 73fab09d7e33ca2110c24215f8ed428c12625dbe Mon Sep 17 00:00:00 2001
From: "Joel Fernandes (Google)" 
Date: Sat, 1 Jun 2019 15:05:49 -0400
Subject: [PATCH] x86/PCI: Pass lockdep condition to pcm_mmcfg_list iterator
 (v1)

The pcm_mmcfg_list is traversed with list_for_each_entry_rcu without a
reader-lock held, because the pci_mmcfg_lock is already held. Make this
known to the list macro so that it fixes new lockdep warnings that
trigger due to lockdep checks added to list_for_each_entry_rcu().

Acked-by: Bjorn Helgaas 
Signed-off-by: Joel Fernandes (Google) 
---
 arch/x86/pci/mmconfig-shared.c | 7 +--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/pci/mmconfig-shared.c b/arch/x86/pci/mmconfig-shared.c
index 7389db538c30..9e3250ec5a37 100644
--- a/arch/x86/pci/mmconfig-shared.c
+++ b/arch/x86/pci/mmconfig-shared.c
@@ -29,6 +29,7 @@
 static bool pci_mmcfg_running_state;
 static bool pci_mmcfg_arch_init_failed;
 static DEFINE_MUTEX(pci_mmcfg_lock);
+#define pci_mmcfg_lock_held() lock_is_held(&(pci_mmcfg_lock).dep_map)
 
 LIST_HEAD(pci_mmcfg_list);
 
@@ -54,7 +55,8 @@ static void list_add_sorted(struct pci_mmcfg_region *new)
struct pci_mmcfg_region *cfg;
 
/* keep list sorted by segment and starting bus number */
-   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list) {
+   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list,
+   pci_mmcfg_lock_held()) {
if (cfg->segment > new->segment ||
(cfg->segment == new->segment &&
 cfg->start_bus >= new->start_bus)) {
@@ -118,7 +120,8 @@ struct pci_mmcfg_region *pci_mmconfig_lookup(int segment, 
int bus)
 {
struct pci_mmcfg_region *cfg;
 
-   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list)
+   list_for_each_entry_rcu(cfg, &pci_mmcfg_list, list,
+   pci_mmcfg_lock_held())
if (cfg->segment == segment &&
cfg->start_bus <= bus && bus <= cfg->end_bus)
return cfg;
-- 
2.22.0.510.g264f2c817a-goog



[PATCH] rculist: Add build check for single optional list argument

2019-07-15 Thread Joel Fernandes (Google)
In a previous patch series [1], we added an optional lockdep expression
argument to list_for_each_entry_rcu() and the hlist equivalent. This also
meant that more than one optional argument could be passed to them, with that
error going unnoticed. To fix this, let us force a compiler error if more than
one optional argument is passed.

[1] https://lore.kernel.org/patchwork/project/lkml/list/?series=402150
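
For illustration, the kind of caller this now rejects at build time looks like
the following (hypothetical names; pos, head, lock_a and lock_b are made up).
Both conditions land in the extra... parameter, so check_arg_count_one() is
expanded with two arguments and the preprocessor errors out with something
like "macro check_arg_count_one passed 2 arguments, but takes just 1":

	/* Hypothetical misuse that now fails to build: */
	list_for_each_entry_rcu(pos, &head, list,
				lockdep_is_held(&lock_a),
				lockdep_is_held(&lock_b));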

Suggested-by: Paul McKenney 
Signed-off-by: Joel Fernandes (Google) 
---
 include/linux/rculist.h | 8 ++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/rculist.h b/include/linux/rculist.h
index 1048160625bb..86659f6d72dc 100644
--- a/include/linux/rculist.h
+++ b/include/linux/rculist.h
@@ -44,14 +44,18 @@ static inline void INIT_LIST_HEAD_RCU(struct list_head 
*list)
  * Check during list traversal that we are within an RCU reader
  */
 
+#define check_arg_count_one(dummy)
+
 #ifdef CONFIG_PROVE_RCU_LIST
-#define __list_check_rcu(dummy, cond, ...) \
+#define __list_check_rcu(dummy, cond, extra...)
\
({  \
+   check_arg_count_one(extra); \
RCU_LOCKDEP_WARN(!cond && !rcu_read_lock_any_held(),\
 "RCU-list traversed in non-reader section!");  \
 })
 #else
-#define __list_check_rcu(dummy, cond, ...) ({})
+#define __list_check_rcu(dummy, cond, extra...)
\
+   ({ check_arg_count_one(extra); })
 #endif
 
 /*
-- 
2.22.0.510.g264f2c817a-goog