On (02/17/16 09:56), YiPing Xu wrote:
> When unmapping a huge class page in zs_unmap_object, the page will
> be unmapped by kunmap_atomic. The "!area->huge" branch in
> __zs_unmap_object is always true, and no code sets "area->huge" now,
> so we can drop it.
> 

The patch looks good to me, thanks.
Reviewed-by: Sergey Senozhatsky <sergey.senozhat...@gmail.com>

        -ss

> Signed-off-by: YiPing Xu <xuyip...@huawei.com>
> ---
>  mm/zsmalloc.c | 9 +++------
>  1 file changed, 3 insertions(+), 6 deletions(-)
> 
> diff --git a/mm/zsmalloc.c b/mm/zsmalloc.c
> index 2d7c4c1..43e4cbc 100644
> --- a/mm/zsmalloc.c
> +++ b/mm/zsmalloc.c
> @@ -281,7 +281,6 @@ struct mapping_area {
>  #endif
>       char *vm_addr; /* address of kmap_atomic()'ed pages */
>       enum zs_mapmode vm_mm; /* mapping mode */
> -     bool huge;
>  };
>  
>  static int create_handle_cache(struct zs_pool *pool)
> @@ -1127,11 +1126,9 @@ static void __zs_unmap_object(struct mapping_area 
> *area,
>               goto out;
>  
>       buf = area->vm_buf;
> -     if (!area->huge) {
> -             buf = buf + ZS_HANDLE_SIZE;
> -             size -= ZS_HANDLE_SIZE;
> -             off += ZS_HANDLE_SIZE;
> -     }
> +     buf = buf + ZS_HANDLE_SIZE;
> +     size -= ZS_HANDLE_SIZE;
> +     off += ZS_HANDLE_SIZE;
>  
>       sizes[0] = PAGE_SIZE - off;
>       sizes[1] = size - sizes[0];
> -- 
> 1.8.3.2
> 
