On Wed, Jun 11, 2014 at 11:12:13AM +0900, Minchan Kim wrote:
> On Mon, Jun 09, 2014 at 11:26:17AM +0200, Vlastimil Babka wrote:
> > Unlike the migration scanner, the free scanner remembers the beginning
> > of the last scanned pageblock in cc->free_pfn. It might therefore be
> > rescanning pages uselessly when called several times during a single
> > compaction. This might have been useful when pages were returned to the
> > buddy allocator after a failed migration, but this is no longer the case.
> > 
> > This patch changes the meaning of cc->free_pfn so that if it points to
> > the middle of a pageblock, that pageblock is scanned only from
> > cc->free_pfn to the end. isolate_freepages_block() will record the pfn
> > of the last page it looked at, which is then used to update cc->free_pfn.
> > 
> > In the mmtests stress-highalloc benchmark, this has lowered the ratio
> > between pages scanned by the two scanners from 2.5 free pages per
> > migrate page to 2.25 free pages per migrate page, without affecting
> > success rates.
> > 
> > Signed-off-by: Vlastimil Babka <vba...@suse.cz>
> Reviewed-by: Minchan Kim <minc...@kernel.org>
> 
> Below is a nitpick.
> 
> > Cc: Minchan Kim <minc...@kernel.org>
> > Cc: Mel Gorman <mgor...@suse.de>
> > Cc: Joonsoo Kim <iamjoonsoo....@lge.com>
> > Cc: Michal Nazarewicz <min...@mina86.com>
> > Cc: Naoya Horiguchi <n-horigu...@ah.jp.nec.com>
> > Cc: Christoph Lameter <c...@linux.com>
> > Cc: Rik van Riel <r...@redhat.com>
> > Cc: David Rientjes <rient...@google.com>
> > ---
> >  mm/compaction.c | 33 ++++++++++++++++++++++++++++-----
> >  1 file changed, 28 insertions(+), 5 deletions(-)
> > 
> > diff --git a/mm/compaction.c b/mm/compaction.c
> > index 83f72bd..58dfaaa 100644
> > --- a/mm/compaction.c
> > +++ b/mm/compaction.c
> > @@ -297,7 +297,7 @@ static bool suitable_migration_target(struct page *page)
> >   * (even though it may still end up isolating some pages).
> >   */
> >  static unsigned long isolate_freepages_block(struct compact_control *cc,
> > -                           unsigned long blockpfn,
> > +                           unsigned long *start_pfn,
> >                             unsigned long end_pfn,
> >                             struct list_head *freelist,
> >                             bool strict)
> > @@ -306,6 +306,7 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >     struct page *cursor, *valid_page = NULL;
> >     unsigned long flags;
> >     bool locked = false;
> > +   unsigned long blockpfn = *start_pfn;
> >  
> >     cursor = pfn_to_page(blockpfn);
> >  
> > @@ -314,6 +315,9 @@ static unsigned long isolate_freepages_block(struct compact_control *cc,
> >             int isolated, i;
> >             struct page *page = cursor;
> >  
> > +           /* Record how far we have got within the block */
> > +           *start_pfn = blockpfn;
> > +
> 
> Couldn't we move this out of the loop for just one store?

Hello, Vlastimil.

Moreover, with this approach start_pfn can never be updated all the way to
end_pfn, even when the whole block has been scanned. Is that okay?
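
For example, a rough (untested) sketch of what that could look like,
assuming the loop body is otherwise unchanged from your patch:

	for (; blockpfn < end_pfn; blockpfn++, cursor++) {
		int isolated, i;
		struct page *page = cursor;

		/* ... existing isolation logic, without the per-iteration store ... */
	}

	/* Record how far we have got, possibly all the way to end_pfn */
	*start_pfn = blockpfn;

That way there is a single store, and *start_pfn can also end up equal to
end_pfn once the whole block has been scanned.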

Thanks.