[Let's CC Nick who has written this code]

On Wed 12-10-16 22:30:13, zijun_hu wrote:
> From: zijun_hu <zijun...@htc.com>
>
> The KVA allocator keeps the allocated vmap_areas in an rbtree. To insert
> a new vmap_area @i_va into the rbtree, walk the rbtree from the root and
> compare each vmap_area @t_va met on the way against @i_va: descend into
> the left branch of @t_va if @i_va lies below @t_va, into the right
> branch if it lies above, and otherwise treat it as an error because
> @i_va overlaps @t_va. However, __insert_vmap_area() does not follow this
> procedure exactly; it also contains a meaningless else-if condition and
> a redundant else branch, as shown by the comments in the code segment
> below:
>
> static void __insert_vmap_area(struct vmap_area *va)
> {
> 	/* as an internal interface parameter, @va is assumed to have nonzero size */
> 	...
> 	if (va->va_start < tmp_va->va_end)
> 		p = &(*p)->rb_left;
> 	else if (va->va_end > tmp_va->va_start)
> 		p = &(*p)->rb_right;
> 	/*
> 	 * this else-if condition is always true: it is only reached when
> 	 * va->va_start >= tmp_va->va_end, so normally
> 	 * va->va_end > va->va_start >= tmp_va->va_end > tmp_va->va_start
> 	 */
> 	else
> 		BUG();
> 	/* this BUG() is meaningless too since it is never reached normally */
> 	...
> }
>
> In effect the else-if condition and the else branch are dead code. No
> errors result in practice because the vmap_area @va passed to this
> internal interface never overlaps any vmap_area already on the rbtree.
> Viewed as a standalone function, however, __insert_vmap_area() looks
> weird and has the logic errors pointed out above.
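[For reference, a minimal userspace sketch of the interval comparison the
changelog describes: a plain, unbalanced binary search tree stands in for
the kernel rbtree, and names such as struct range and range_insert() are
invented for illustration, they are not kernel interfaces.]

/*
 * Standalone sketch (not kernel code): insert a non-overlapping,
 * half-open [start, end) range into a search tree ordered by address,
 * using the comparison the patch proposes.
 */
#include <assert.h>
#include <stddef.h>

struct range {
	unsigned long start, end;	/* half-open: [start, end) */
	struct range *left, *right;
};

/* Return 0 on success, -1 if @new overlaps a range already in the tree. */
static int range_insert(struct range **root, struct range *new)
{
	struct range **p = root;

	assert(new->start < new->end);	/* caller guarantees nonzero size */

	while (*p) {
		struct range *tmp = *p;

		if (new->end <= tmp->start)
			p = &tmp->left;		/* @new lies entirely below @tmp */
		else if (new->start >= tmp->end)
			p = &tmp->right;	/* @new lies entirely above @tmp */
		else
			return -1;		/* overlap: caller bug */
	}
	new->left = new->right = NULL;
	*p = new;
	return 0;
}

With half-open ranges, new->end == tmp->start only means the two ranges
touch, which is why the patch treats va->va_end == tmp_va->va_start as
the go-left case rather than as an overlap.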
I have tried to read this several times but I am completely lost as to
what the actual bug is and how it causes vmap_area sorting to misbehave.
So is this a correctness issue, a performance improvement, or a
theoretical fix for an incorrect input?

> The fix is to walk the vmap_area rbtree as described above when
> inserting a vmap_area.
>
> BTW, (va->va_end == tmp_va->va_start) is considered a legal case since
> it means vmap_area @va is the immediate left neighbor of @tmp_va.
>
> Fixes: db64fe02258f ("mm: rewrite vmap layer")
> Signed-off-by: zijun_hu <zijun...@htc.com>
> ---
>  mm/vmalloc.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
>
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index 5daf3211b84f..8b80931654b7 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -321,10 +321,10 @@ static void __insert_vmap_area(struct vmap_area *va)
>
>  		parent = *p;
>  		tmp_va = rb_entry(parent, struct vmap_area, rb_node);
> -		if (va->va_start < tmp_va->va_end)
> -			p = &(*p)->rb_left;
> -		else if (va->va_end > tmp_va->va_start)
> -			p = &(*p)->rb_right;
> +		if (va->va_end <= tmp_va->va_start)
> +			p = &parent->rb_left;
> +		else if (va->va_start >= tmp_va->va_end)
> +			p = &parent->rb_right;
>  		else
>  			BUG();
>  	}
> --
> 1.9.1

--
Michal Hocko
SUSE Labs