On Wed, Jun 10 2015, Vlastimil Babka wrote:
> The compaction free scanner is looking for PageBuddy() pages and skipping all
> others.  For large compound pages such as THP or hugetlbfs, we can save a lot
> of iterations if we skip them at once using their compound_order(). This is
> generally unsafe and we can read a bogus value of order due to a race, but if
> we are careful, the only danger is skipping too much.
>
> When tested with stress-highalloc from mmtests on a 4GB system with 1GB
> hugetlbfs pages, the vmstat compact_free_scanned count decreased by at
> least 15%.
>
> Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Cc: Minchan Kim <minc...@kernel.org>
> Cc: Mel Gorman <mgor...@suse.de>
> Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
> Cc: Michal Nazarewicz <min...@mina86.com>

Acked-by: Michal Nazarewicz <min...@mina86.com>

> Cc: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> Cc: Christoph Lameter <c...@linux.com>
> Cc: Rik van Riel <r...@redhat.com>
> Cc: David Rientjes <rient...@google.com>
> ---
>  mm/compaction.c | 25 +++++++++++++++++++++++++
>  1 file changed, 25 insertions(+)
>
> diff --git a/mm/compaction.c b/mm/compaction.c
> index e37d361..4a14084 100644
> --- a/mm/compaction.c
> +++ b/mm/compaction.c
> @@ -437,6 +437,24 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
>  
>               if (!valid_page)
>                       valid_page = page;
> +
> +             /*
> +              * For compound pages such as THP and hugetlbfs, we can save
> +              * potentially a lot of iterations if we skip them at once.
> +              * The check is racy, but we can consider only valid values
> +              * and the only danger is skipping too much.
> +              */
> +             if (PageCompound(page)) {
> +                     unsigned int comp_order = compound_order(page);
> +
> +                     if (comp_order > 0 && comp_order < MAX_ORDER) {

+                       if (comp_order < MAX_ORDER) {

Might produce shorter/faster code.  Dunno.  Maybe.  So many
micro-optimisations.  Applies to the previous patch as well.

> +                             blockpfn += (1UL << comp_order) - 1;
> +                             cursor += (1UL << comp_order) - 1;
> +                     }
> +
> +                     goto isolate_fail;
> +             }
> +
>               if (!PageBuddy(page))
>                       goto isolate_fail;
>  
> @@ -496,6 +514,13 @@ isolate_fail:
>  
>       }
>  
> +     /*
> +      * There is a tiny chance that we have read bogus compound_order(),
> +      * so be careful to not go outside of the pageblock.
> +      */
> +     if (unlikely(blockpfn > end_pfn))
> +             blockpfn = end_pfn;
> +
>       trace_mm_compaction_isolate_freepages(*start_pfn, blockpfn,
>                                       nr_scanned, total_isolated);
>  
> -- 
> 2.1.4
>

-- 
Best regards,                                         _     _
.o. | Liege of Serenely Enlightened Majesty of      o' \,=./ `o
..o | Computer Science,  Michał “mina86” Nazarewicz    (o o)
ooo +--<m...@google.com>--<xmpp:min...@jabber.org>--ooO--(_)--Ooo--