On Fri 23-09-16 15:56:36, Oleg Nesterov wrote:
> On 09/23, Robert Ho wrote:
> >
> > --- a/fs/proc/task_mmu.c
> > +++ b/fs/proc/task_mmu.c
> > @@ -147,7 +147,7 @@ m_next_vma(struct proc_maps_private *priv, struct vm_area_struct *vma)
> >  static void m_cache_vma(struct seq_file *m, struct vm_area_struct *vma)
> >  {
> >     if (m->count < m->size) /* vma is copied successfully */
> > -           m->version = m_next_vma(m->private, vma) ? vma->vm_start : -1UL;
> > +           m->version = m_next_vma(m->private, vma) ? vma->vm_end : -1UL;
> >  }
> 
> OK.
> 
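As an aside, the reason vm_end is the better cursor here is that
find_vma() returns the first vma whose vm_end lies above the given
address. A tiny userspace sketch (not the kernel code; the table,
addresses and helper below are made up for illustration):

/* illustration only: find_vma() modelled as "first region with end > addr" */
#include <stdio.h>

struct vma { unsigned long vm_start, vm_end; };

static struct vma *find_vma(struct vma *tbl, int n, unsigned long addr)
{
	for (int i = 0; i < n; i++)
		if (tbl[i].vm_end > addr)
			return &tbl[i];
	return NULL;
}

int main(void)
{
	struct vma maps[] = { { 0x1000, 0x2000 }, { 0x3000, 0x4000 } };
	struct vma *last = &maps[0];	/* vma the previous read() stopped after */
	struct vma *v;

	/* old scheme: cache vm_start -> the lookup returns the vma we already
	 * printed, so m_start() had to step past it with m_next_vma() */
	v = find_vma(maps, 2, last->vm_start);
	printf("vm_start cursor: %lx-%lx\n", v->vm_start, v->vm_end);	/* 1000-2000 */

	/* new scheme: cache vm_end -> the lookup already lands on the next vma */
	v = find_vma(maps, 2, last->vm_end);
	printf("vm_end cursor:   %lx-%lx\n", v->vm_start, v->vm_end);	/* 3000-4000 */
	return 0;
}

With vm_end cached, the lookup in m_start() already lands on the next
vma, which is why the m_next_vma() hop can go away in the hunk below.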
> >  static void *m_start(struct seq_file *m, loff_t *ppos)
> > @@ -176,14 +176,14 @@ static void *m_start(struct seq_file *m, loff_t *ppos)
> >  
> >     if (last_addr) {
> >             vma = find_vma(mm, last_addr);
> > -           if (vma && (vma = m_next_vma(priv, vma)))
> > +           if (vma)
> >                     return vma;
> >     }
> 
> I think we can simplify this patch. And imo make it better. How about

it is certainly less subtle because it doesn't report "sub-vmas".

>       if (last_addr) {
>               vma = find_vma(mm, last_addr - 1);
>               if (vma && vma->vm_start <= last_addr)
>                       vma = m_next_vma(priv, vma);
>               if (vma)
>                       return vma;
>       }

We would still miss a VMA if the last one got shrunk/split, but at least
it would provide monotonic results. So this is definitely an improvement,
but I guess we really want to document that only full reads provide a
consistent (at some moment in time) output.
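To make that concrete, here is a minimal userspace sketch of the resume
step proposed above (simplified stand-ins and invented addresses; the
real code lives in fs/proc/task_mmu.c). The first read() printed a
0x1000-0x3000 vma and cached its vm_end; before the next read() that
vma shrinks and a new vma appears below the cached address. The resume
misses the new vma, but it never re-reports anything or walks backwards:

/* illustration only, not kernel code */
#include <stdio.h>

struct vma { unsigned long vm_start, vm_end; };

/* first vma with vm_end > addr, as the kernel's find_vma() behaves */
static struct vma *find_vma(struct vma *tbl, int n, unsigned long addr)
{
	for (int i = 0; i < n; i++)
		if (tbl[i].vm_end > addr)
			return &tbl[i];
	return NULL;
}

/* the proposed resume step; stepping to tbl[next] stands in for m_next_vma() */
static struct vma *resume(struct vma *tbl, int n, unsigned long last_addr)
{
	struct vma *vma = find_vma(tbl, n, last_addr - 1);

	if (vma && vma->vm_start <= last_addr) {	/* overlaps what was printed */
		int next = (int)(vma - tbl) + 1;
		vma = next < n ? &tbl[next] : NULL;
	}
	return vma;
}

int main(void)
{
	unsigned long last_addr = 0x3000;	/* vm_end cached by the first read() */

	/* layout at the time of the second read() */
	struct vma maps[] = {
		{ 0x1000, 0x2000 },	/* the reported vma, now shrunk */
		{ 0x2800, 0x3800 },	/* new vma under the cached address */
		{ 0x5000, 0x6000 },
	};

	struct vma *v = resume(maps, 3, last_addr);
	/* prints 5000-6000: 2800-3800 is missed, but nothing below 3000
	 * gets printed twice, so the output stays monotonic */
	printf("resumed at %lx-%lx\n", v->vm_start, v->vm_end);
	return 0;
}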

-- 
Michal Hocko
SUSE Labs
