On 28/07/2020 19:37, Nathan Lynch wrote:
The drmem lmb list can have hundreds of thousands of entries, and
unfortunately lookups take the form of linear searches. As long as
this is the case, traversals have the potential to monopolize the CPU
and provoke lockup reports, workqueue stalls, and the like unless
they explicitly yield.

Rather than placing cond_resched() calls within various
for_each_drmem_lmb() loop blocks in the code, put it in the iteration
expression of the loop macro itself so users can't omit it.

Hi Nathan,

Isn't it too much to call cond_resched() on every LMB?

Could it be done less frequently, say every 10 or 100 LMBs? I don't really know.

Cheers,
Laurent.

Fixes: 6c6ea53725b3 ("powerpc/mm: Separate ibm, dynamic-memory data from DT format")
Signed-off-by: Nathan Lynch <nath...@linux.ibm.com>
---
  arch/powerpc/include/asm/drmem.h | 10 +++++++++-
  1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/include/asm/drmem.h b/arch/powerpc/include/asm/drmem.h
index 414d209f45bb..36d0ed04bda8 100644
--- a/arch/powerpc/include/asm/drmem.h
+++ b/arch/powerpc/include/asm/drmem.h
@@ -8,6 +8,8 @@
  #ifndef _ASM_POWERPC_LMB_H
  #define _ASM_POWERPC_LMB_H
+#include <linux/sched.h>
+
  struct drmem_lmb {
        u64     base_addr;
        u32     drc_index;
@@ -26,8 +28,14 @@ struct drmem_lmb_info {
  
  extern struct drmem_lmb_info *drmem_info;
  
+static inline struct drmem_lmb *drmem_lmb_next(struct drmem_lmb *lmb)
+{
+       cond_resched();
+       return ++lmb;
+}
+
  #define for_each_drmem_lmb_in_range(lmb, start, end)          \
-       for ((lmb) = (start); (lmb) < (end); (lmb)++)
+       for ((lmb) = (start); (lmb) < (end); lmb = drmem_lmb_next(lmb))
  
  #define for_each_drmem_lmb(lmb)                               \
        for_each_drmem_lmb_in_range((lmb),                      \

