On Wed, Feb 17, 2016 at 09:56:39AM +0800, YiPing Xu wrote:
> When unmapping a huge class page in zs_unmap_object, the page will
> be unmapped by kunmap_atomic. The "!area->huge" branch in
> __zs_unmap_object is always true, and no code sets "area->huge" now,
> so we can drop it.
>
> Signed-off-by: YiPing Xu
On (02/17/16 09:56), YiPing Xu wrote:
> When unmapping a huge class page in zs_unmap_object, the page will
> be unmapped by kunmap_atomic. The "!area->huge" branch in
> __zs_unmap_object is always true, and no code sets "area->huge" now,
> so we can drop it.
>
the patch looks good to me, thanks.
Reviewed-by: Sergey Senozhatsky
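For readers without the tree open, the member in question lives in the per-cpu
struct mapping_area. A from-memory sketch of how it looked around that time
(the CONFIG_PGTABLE_MAPPING variant of vm_buf omitted), meant only to show what
"area->huge" refers to, not a verbatim copy:

	struct mapping_area {
		char *vm_buf;		/* per-cpu copy buffer for objects spanning pages */
		char *vm_addr;		/* address of the kmap_atomic()'ed page */
		enum zs_mapmode vm_mm;	/* mapping mode (RO/WO/RW) */
		bool huge;		/* never set anywhere, which is what the patch removes */
	};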
On (02/17/16 11:29), xuyiping wrote:
[..]
>
> if (off + class->size <= PAGE_SIZE) {
>
> for a huge object, the code will take this branch; there is no further
> huge-object handling in __zs_map_object.
correct, well, technically, it's not about huge objects, but objects
that span a page boundary.
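For context, a rough sketch of the mapping-side decision being discussed here.
Identifiers (zs_map_area, get_next_page, ZS_HANDLE_SIZE) follow the mainline
zsmalloc of that era as best recalled; treat this as an illustration, not the
verbatim source:

	/* sketch: the relevant part of zs_map_object() */
	area = &get_cpu_var(zs_map_area);
	area->vm_mm = mm;
	if (off + class->size <= PAGE_SIZE) {
		/*
		 * The object fits entirely within one page; every
		 * huge-class object does, since a huge class keeps a
		 * single object per page. Map the page directly.
		 */
		area->vm_addr = kmap_atomic(page);
		ret = area->vm_addr + off;
		goto out;
	}

	/* the object crosses a page boundary: copy it into the
	 * per-cpu vm_buf so the caller sees one contiguous buffer */
	pages[0] = page;
	pages[1] = get_next_page(page);
	ret = __zs_map_object(area, pages, off, class->size);
out:
	if (!class->huge)
		ret += ZS_HANDLE_SIZE;	/* skip the handle stored in front */

So only page-spanning objects ever go through the vm_buf copy path, which is
why __zs_unmap_object never sees a huge object.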
Hi, Sergey,
On 2016/2/17 10:26, Sergey Senozhatsky wrote:
> Hello,
> On (02/17/16 09:56), YiPing Xu wrote:
> > static int create_handle_cache(struct zs_pool *pool)
> > @@ -1127,11 +1126,9 @@ static void __zs_unmap_object(struct mapping_area *area,
> > goto out;
> >
> > buf = area->vm_buf;
> > - if (!area->huge) {
Hello,
On (02/17/16 09:56), YiPing Xu wrote:
> static int create_handle_cache(struct zs_pool *pool)
> @@ -1127,11 +1126,9 @@ static void __zs_unmap_object(struct mapping_area *area,
> goto out;
>
> buf = area->vm_buf;
> - if (!area->huge) {
> - buf = buf +
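For reference, the unmap side that the hunk above belongs to, sketched from
memory in simplified form (again, names like zs_map_area and get_next_page are
taken from the mainline code of that time; the exact layout may differ):

	/* sketch: the matching decision in zs_unmap_object() */
	area = this_cpu_ptr(&zs_map_area);
	if (off + class->size <= PAGE_SIZE) {
		/*
		 * Single-page object, which covers every huge-class
		 * object: it was mapped with kmap_atomic(), so a plain
		 * kunmap_atomic() is all that is needed here and
		 * __zs_unmap_object() is never called.
		 */
		kunmap_atomic(area->vm_addr);
	} else {
		/* page-spanning object: write the per-cpu vm_buf back
		 * into the two page pieces */
		struct page *pages[2];

		pages[0] = page;
		pages[1] = get_next_page(page);
		__zs_unmap_object(area, pages, off, class->size);
	}
	put_cpu_var(zs_map_area);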
When unmapping a huge class page in zs_unmap_object, the page will
be unmapped by kunmap_atomic. The "!area->huge" branch in
__zs_unmap_object is always true, and no code sets "area->huge" now,
so we can drop it.
Signed-off-by: YiPing Xu
---
 mm/zsmalloc.c | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)
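For completeness, roughly what __zs_unmap_object() is expected to look like
with the "!area->huge" guard gone. This is a sketch of the intended result,
not a copy of the applied patch; ZS_HANDLE_SIZE and the pagefault_enable()
pairing follow the mainline code of that era as best recalled:

static void __zs_unmap_object(struct mapping_area *area,
				struct page *pages[2], int off, int size)
{
	int sizes[2];
	void *addr;
	char *buf;

	/* nothing to write back for read-only mappings */
	if (area->vm_mm == ZS_MM_RO)
		goto out;

	/*
	 * Unconditionally skip the handle stored in front of the
	 * object: only page-spanning, non-huge objects ever reach
	 * this copy-out path, so the dropped "!area->huge" test
	 * could never be false.
	 */
	buf = area->vm_buf + ZS_HANDLE_SIZE;
	size -= ZS_HANDLE_SIZE;
	off += ZS_HANDLE_SIZE;

	sizes[0] = PAGE_SIZE - off;
	sizes[1] = size - sizes[0];

	/* copy the per-cpu buffer back into the two page pieces */
	addr = kmap_atomic(pages[0]);
	memcpy(addr + off, buf, sizes[0]);
	kunmap_atomic(addr);
	addr = kmap_atomic(pages[1]);
	memcpy(addr, buf + sizes[0], sizes[1]);
	kunmap_atomic(addr);

out:
	/* re-enable page faults, matching the disable in __zs_map_object() */
	pagefault_enable();
}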