On Fri, Mar 20, 2026 at 11:15:52AM +0100, Vlastimil Babka (SUSE) wrote:
> On 3/18/26 16:50, Lorenzo Stoakes (Oracle) wrote:
> > Now we have established a good foundation for vm_flags_t to vma_flags_t
> > changes, update mm/vma.c to utilise vma_flags_t wherever possible.
> >
> > We are able to convert VM_STARTGAP_FLAGS entirely, as it is only used in
> > mm/vma.c. Since we cannot use VM_NONE, place its definition within the
> > existing #ifdef's to keep things cleaner.
> >
> > Generally the remaining changes are mechanical.
> >
> > Also update the VMA tests to reflect the changes.
> >
> > Signed-off-by: Lorenzo Stoakes (Oracle) <[email protected]>
>
> Acked-by: Vlastimil Babka (SUSE) <[email protected]>

Thanks!

>
> Nits:
>
> > @@ -2338,8 +2339,11 @@ void mm_drop_all_locks(struct mm_struct *mm)
> >   * We account for memory if it's a private writeable mapping,
> >   * not hugepages and VM_NORESERVE wasn't set.
> >   */
> > -static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
> > +static bool accountable_mapping(struct mmap_state *map)
> >  {
> > +   const struct file *file = map->file;
> > +   vma_flags_t mask;
> > +
> >     /*
> >      * hugetlb has its own accounting separate from the core VM
> >      * VM_HUGETLB may not be set yet so we cannot check for that flag.
> > @@ -2347,7 +2351,9 @@ static bool accountable_mapping(struct file *file, vm_flags_t vm_flags)
> >     if (file && is_file_hugepages(file))
> >             return false;
> >
> > -   return (vm_flags & (VM_NORESERVE | VM_SHARED | VM_WRITE)) == VM_WRITE;
> > +   mask = vma_flags_and(&map->vma_flags, VMA_NORESERVE_BIT, VMA_SHARED_BIT,
> > +                        VMA_WRITE_BIT);
> > +   return vma_flags_same(&mask, VMA_WRITE_BIT);
>
> Another case of possible refactor, if you agree with those pointed out in
> earlier patch.

Ack

>
> >  }
> >
> >  /*
>
> > @@ -2993,7 +2998,8 @@ unsigned long unmapped_area(struct vm_unmapped_area_info *info)
> >     gap = vma_iter_addr(&vmi) + info->start_gap;
> >     gap += (info->align_offset - gap) & info->align_mask;
> >     tmp = vma_next(&vmi);
> > -   if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
> > +   /* Avoid prev check if possible */
> > +   if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) {
>
> The parentheses around function call not necessary?

True, can fix up.

>
> >             if (vm_start_gap(tmp) < gap + length - 1) {
> >                     low_limit = tmp->vm_end;
> >                     vma_iter_reset(&vmi);
> > @@ -3045,7 +3051,8 @@ unsigned long unmapped_area_topdown(struct vm_unmapped_area_info *info)
> >     gap -= (gap - info->align_offset) & info->align_mask;
> >     gap_end = vma_iter_end(&vmi);
> >     tmp = vma_next(&vmi);
> > -   if (tmp && (tmp->vm_flags & VM_STARTGAP_FLAGS)) { /* Avoid prev check if possible */
> > +   /* Avoid prev check if possible */
> > +   if (tmp && (vma_test_any_mask(tmp, VMA_STARTGAP_FLAGS))) {
>
> Same.

True, will fix up.

>
> >             if (vm_start_gap(tmp) < gap_end) {
> >                     high_limit = vm_start_gap(tmp);
> >                     vma_iter_reset(&vmi);

Cheers, Lorenzo