On Mon, Mar 24, 2014 at 10:33 AM, Rik van Riel <r...@redhat.com> wrote:
> On 03/21/2014 05:17 PM, John Stultz wrote:
>>
>> Currently we don't shrink/scan the anonymous lrus when swap is off.
>> This is problematic for volatile range purging on swapless systems.
>>
>> This patch naively changes the vmscan code to continue scanning
>> and shrinking the lrus even when there is no swap.
>>
>> It obviously has performance issues.
>>
>> Thoughts on how best to implement this would be appreciated.
>>
>> Cc: Andrew Morton <a...@linux-foundation.org>
>> Cc: Android Kernel Team <kernel-t...@android.com>
>> Cc: Johannes Weiner <han...@cmpxchg.org>
>> Cc: Robert Love <rl...@google.com>
>> Cc: Mel Gorman <m...@csn.ul.ie>
>> Cc: Hugh Dickins <hu...@google.com>
>> Cc: Dave Hansen <d...@sr71.net>
>> Cc: Rik van Riel <r...@redhat.com>
>> Cc: Dmitry Adamushko <dmitry.adamus...@gmail.com>
>> Cc: Neil Brown <ne...@suse.de>
>> Cc: Andrea Arcangeli <aarca...@redhat.com>
>> Cc: Mike Hommey <m...@glandium.org>
>> Cc: Taras Glek <tg...@mozilla.com>
>> Cc: Jan Kara <j...@suse.cz>
>> Cc: KOSAKI Motohiro <kosaki.motoh...@gmail.com>
>> Cc: Michel Lespinasse <wal...@google.com>
>> Cc: Minchan Kim <minc...@kernel.org>
>> Cc: linux...@kvack.org <linux...@kvack.org>
>> Signed-off-by: John Stultz <john.stu...@linaro.org>
>> ---
>>  mm/vmscan.c | 26 ++++----------------------
>>  1 file changed, 4 insertions(+), 22 deletions(-)
>>
>> diff --git a/mm/vmscan.c b/mm/vmscan.c
>> index 34f159a..07b0a8c 100644
>> --- a/mm/vmscan.c
>> +++ b/mm/vmscan.c
>> @@ -155,9 +155,8 @@ static unsigned long zone_reclaimable_pages(struct zone *zone)
>>  	nr = zone_page_state(zone, NR_ACTIVE_FILE) +
>>  	     zone_page_state(zone, NR_INACTIVE_FILE);
>>
>> -	if (get_nr_swap_pages() > 0)
>> -		nr += zone_page_state(zone, NR_ACTIVE_ANON) +
>> -			zone_page_state(zone, NR_INACTIVE_ANON);
>> +	nr += zone_page_state(zone, NR_ACTIVE_ANON) +
>> +		zone_page_state(zone, NR_INACTIVE_ANON);
>>
>>  	return nr;
>
> Not all of the anonymous pages will be reclaimable.
>
> Is there some counter that keeps track of how many
> volatile range pages there are in each zone?
So right, keeping statistics like NR_VOLATILE_PAGES (as well as
possibly NR_PURGED_VOLATILE_PAGES) would likely help here.

>> @@ -2181,8 +2166,8 @@ static inline bool should_continue_reclaim(struct zone *zone,
>>  	 */
>>  	pages_for_compaction = (2UL << sc->order);
>>  	inactive_lru_pages = zone_page_state(zone, NR_INACTIVE_FILE);
>> -	if (get_nr_swap_pages() > 0)
>> -		inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
>> +	inactive_lru_pages += zone_page_state(zone, NR_INACTIVE_ANON);
>> +
>>  	if (sc->nr_reclaimed < pages_for_compaction &&
>>  	    inactive_lru_pages > pages_for_compaction)
>
> Not sure this is a good idea, since the pages may not actually
> be reclaimable, and the inactive list will continue to be
> refilled indefinitely...
>
> If there was a counter of the number of volatile range pages
> in a zone, this would be easier.
>
> Of course, the overhead of keeping such a counter might be
> too high for what volatile ranges are designed for...

I started looking at something like this, but it runs into some
complexity when we're keeping volatility as a flag in the vma rather
than as a page state.

Also, even with a rough attempt at tracking the number of volatile
pages, naively plugging that in for NR_INACTIVE_ANON here was
problematic: we would scan for a shorter time, but wouldn't
necessarily find the volatile pages in that time, so we wouldn't
always purge them.

Part of me starts to wonder if a new LRU for volatile pages would be
needed to really be efficient here, but then I worry that moving the
pages back and forth might be too expensive.

Thanks so much for the review and comments!
-john
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/