On Tue, Sep 25, 2018 at 11:14:56AM +0200, David Hildenbrand wrote:
> Let's perform all checking + offlining + removing under
> device_hotplug_lock, so nobody can mess with these devices via
> sysfs concurrently.
> 
> Cc: Benjamin Herrenschmidt <b...@kernel.crashing.org>
> Cc: Paul Mackerras <pau...@samba.org>
> Cc: Michael Ellerman <m...@ellerman.id.au>
> Cc: Rashmica Gupta <rashmic...@gmail.com>
> Cc: Balbir Singh <bsinghar...@gmail.com>
> Cc: Michael Neuling <mi...@neuling.org>
> Reviewed-by: Pavel Tatashin <pavel.tatas...@microsoft.com>
> Reviewed-by: Rashmica Gupta <rashmic...@gmail.com>
> Signed-off-by: David Hildenbrand <da...@redhat.com>
> ---
>  arch/powerpc/platforms/powernv/memtrace.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
> index fdd48f1a39f7..d84d09c56af9 100644
> --- a/arch/powerpc/platforms/powernv/memtrace.c
> +++ b/arch/powerpc/platforms/powernv/memtrace.c
> @@ -70,6 +70,7 @@ static int change_memblock_state(struct memory_block *mem, void *arg)
>       return 0;
>  }
>  
> +/* called with device_hotplug_lock held */
>  static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
>  {
>       u64 end_pfn = start_pfn + nr_pages - 1;
> @@ -111,6 +112,7 @@ static u64 memtrace_alloc_node(u32 nid, u64 size)
>       end_pfn = round_down(end_pfn - nr_pages, nr_pages);
>  
>       for (base_pfn = end_pfn; base_pfn > start_pfn; base_pfn -= nr_pages) {
> +             lock_device_hotplug();

Why not grab the lock before the for loop? That way we can avoid bad cases like
a large node being scanned for a small number of pages (nr_pages). Ideally we
need a cond_resched() in the loop, but I guess offline_pages() has one.

Acked-by: Balbir Singh <bsinghar...@gmail.com>
