On 06/03/2013 10:01 PM, Minchan Kim wrote:
>> > +static int __remove_mapping_batch(struct list_head *remove_list,
>> > +                            struct list_head *ret_pages,
>> > +                            struct list_head *free_pages)
>> > +{
>> > +  int nr_reclaimed = 0;
>> > +  struct address_space *mapping;
>> > +  struct page *page;
>> > +  LIST_HEAD(need_free_mapping);
>> > +
>> > +  while (!list_empty(remove_list)) {
...
>> > +          if (!__remove_mapping(mapping, page)) {
>> > +                  unlock_page(page);
>> > +                  list_add(&page->lru, ret_pages);
>> > +                  continue;
>> > +          }
>> > +          list_add(&page->lru, &need_free_mapping);
...
> +     spin_unlock_irq(&mapping->tree_lock);
> +     while (!list_empty(&need_free_mapping)) {...
> +             list_move(&page->lru, free_pages);
> +             mapping_release_page(mapping, page);
> +     }
> Why do we need a new lru list instead of using @free_pages?

I actually tried using @free_pages at first.  The problem is that we
need to call mapping_release_page() without the radix-tree lock
(mapping->tree_lock) held, so we cannot do it in the first while() loop.

'free_pages' is a list created up in shrink_page_list().  There can be
several calls to __remove_mapping_batch() for each call to
shrink_page_list().

'need_free_mapping' lets us temporarily distinguish the pages that
still need mapping_release_page()/unlock_page() called on them from the
ones on 'free_pages', which have already had that done.
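
To make that concrete, here's the overall shape I'm describing (just a
sketch, not the patch verbatim: it fills in the elided bits with
plausible details, assumes every page on 'remove_list' shares a single
mapping, and assumes this __remove_mapping() variant expects tree_lock
to be held by the caller):

	static int __remove_mapping_batch(struct list_head *remove_list,
					  struct list_head *ret_pages,
					  struct list_head *free_pages)
	{
		int nr_reclaimed = 0;
		struct address_space *mapping;
		struct page *page;
		LIST_HEAD(need_free_mapping);

		if (list_empty(remove_list))
			return 0;

		/* all pages on remove_list are assumed to share this mapping */
		mapping = page_mapping(lru_to_page(remove_list));
		spin_lock_irq(&mapping->tree_lock);
		while (!list_empty(remove_list)) {
			page = lru_to_page(remove_list);
			list_del(&page->lru);
			if (!__remove_mapping(mapping, page)) {
				/* lost a race; hand the page back to the caller */
				unlock_page(page);
				list_add(&page->lru, ret_pages);
				continue;
			}
			/* off the radix tree, but can't release under tree_lock */
			list_add(&page->lru, &need_free_mapping);
			nr_reclaimed++;
		}
		spin_unlock_irq(&mapping->tree_lock);

		/* tree_lock dropped, now safe to release the mapping and unlock */
		while (!list_empty(&need_free_mapping)) {
			page = lru_to_page(&need_free_mapping);
			list_move(&page->lru, free_pages);
			mapping_release_page(mapping, page);
			unlock_page(page);
		}
		return nr_reclaimed;
	}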

We could theoretically delay _all_ of the
mapping_release_page()/unlock_page() operations until the _entire_
shrink_page_list() operation is done, but releasing and unlocking at
the end of each batch keeps the pages locked for much less time, which
really helps lock_page() latency for anyone else waiting on them.
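
Purely for illustration, the caller side could be condensed to
something like this (remove_mapping_batched() is a made-up helper name;
the real shrink_page_list() interleaves the batching with all of its
other per-page work):

	static unsigned long remove_mapping_batched(struct list_head *page_list,
						    struct list_head *ret_pages,
						    struct list_head *free_pages)
	{
		LIST_HEAD(remove_list);
		struct address_space *batch_mapping = NULL;
		unsigned long nr_reclaimed = 0;
		struct page *page;

		while (!list_empty(page_list)) {
			page = lru_to_page(page_list);
			if (!page_mapping(page)) {
				/* anon/unmapped pages take a different path */
				list_move(&page->lru, ret_pages);
				continue;
			}
			/* flush the current batch whenever the mapping changes */
			if (batch_mapping && page_mapping(page) != batch_mapping) {
				nr_reclaimed += __remove_mapping_batch(&remove_list,
							ret_pages, free_pages);
				batch_mapping = NULL;
			}
			batch_mapping = page_mapping(page);
			list_move(&page->lru, &remove_list);
		}
		/* flush whatever is still batched */
		if (!list_empty(&remove_list))
			nr_reclaimed += __remove_mapping_batch(&remove_list,
						ret_pages, free_pages);
		return nr_reclaimed;
	}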

Does that make sense?