On 16.09.19 08:05, Alastair D'Silva wrote:
> From: Alastair D'Silva <alast...@d-silva.org>
> 
> The call to check_hotplug_memory_addressable() validates that the
> memory range being added is fully addressable.
> 
> Without this call, it is possible to remap pages that are not
> physically addressable, resulting in bogus section numbers being
> returned from __section_nr().
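
Side note on the failure mode, for anyone following along: with
sparsemem, the section number is derived directly from the pfn by a
shift, so a pfn beyond what MAX_PHYSMEM_BITS can represent yields a
section number outside [0, NR_MEM_SECTIONS). A rough illustration, not
code from this patch:

	/*
	 * pfn_to_section_nr() is just (pfn >> PFN_SECTION_SHIFT), so an
	 * unaddressable pfn produces a section number >= NR_MEM_SECTIONS.
	 */
	unsigned long pfn = PHYS_PFN(res->end);
	unsigned long nr = pfn_to_section_nr(pfn);
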
> 
> Signed-off-by: Alastair D'Silva <alast...@d-silva.org>
> ---
>  mm/memremap.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/memremap.c b/mm/memremap.c
> index 86432650f829..fd00993caa3e 100644
> --- a/mm/memremap.c
> +++ b/mm/memremap.c
> @@ -269,6 +269,13 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
>  
>       mem_hotplug_begin();
>  
> +     error = check_hotplug_memory_addressable(res->start,
> +                                              resource_size(res));
> +     if (error) {
> +             mem_hotplug_done();
> +             goto err_checkrange;
> +     }

As I said in reply to v1, please move this check out of the memory
hotplug lock. It is a purely static check and does not need to be
serialized against other hotplug operations.
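
Something like this should work (an untested sketch against your patch;
going by the err_kasan label below, track_pfn_remap() already runs
before mem_hotplug_begin(), so your err_checkrange label still unwinds
correctly from here):

	/*
	 * Static range check only; no need to hold the hotplug lock
	 * across it.
	 */
	error = check_hotplug_memory_addressable(res->start,
						 resource_size(res));
	if (error)
		goto err_checkrange;

	mem_hotplug_begin();

That would also let you drop the extra mem_hotplug_done() call in the
error path.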

> +
>       /*
>        * For device private memory we call add_pages() as we only need to
>        * allocate and initialize struct page for the device memory. More-
> @@ -324,6 +331,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap)
>  
>   err_add_memory:
>       kasan_remove_zero_shadow(__va(res->start), resource_size(res));
> + err_checkrange:
>   err_kasan:
>       untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
>   err_pfn_remap:
> 


-- 

Thanks,

David / dhildenb
