From: Greg KH <gre...@linuxfoundation.org>

3.0-stable review patch.  If anyone has any objections, please let me know.
------------------

From: Johannes Weiner <jwei...@redhat.com>

commit a4d3e9e76337059406fcf3ead288c0df22a790e9 upstream.

Stable note: Not tracked in Bugzilla. This patch augments an earlier
commit that prevents the scanning priority from being artificially
raised. The older fix was particularly important for small memcgs, to
avoid calling wait_iff_congested() unnecessarily.

Without swap, anonymous pages are not scanned.  As such, they should
not count when considering force-scanning a small target if there is
no swap.

Otherwise, targets are not force-scanned even when their effective
scan number is zero and the other conditions--kswapd/memcg--apply.

This fixes 246e87a93934 ("memcg: fix get_scan_count() for small
targets").

[a...@linux-foundation.org: fix comment]
Signed-off-by: Johannes Weiner <jwei...@redhat.com>
Acked-by: KAMEZAWA Hiroyuki <kamezawa.hir...@jp.fujitsu.com>
Reviewed-by: Michal Hocko <mho...@suse.cz>
Cc: Ying Han <ying...@google.com>
Cc: Balbir Singh <bsinghar...@gmail.com>
Cc: KOSAKI Motohiro <kosaki.motoh...@jp.fujitsu.com>
Cc: Daisuke Nishimura <nishim...@mxp.nes.nec.co.jp>
Acked-by: Mel Gorman <m...@csn.ul.ie>
Signed-off-by: Andrew Morton <a...@linux-foundation.org>
Signed-off-by: Linus Torvalds <torva...@linux-foundation.org>
Signed-off-by: Mel Gorman <mgor...@suse.de>
Signed-off-by: Greg Kroah-Hartman <gre...@linuxfoundation.org>

---
 mm/vmscan.c |   27 ++++++++++++---------------
 1 file changed, 12 insertions(+), 15 deletions(-)

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1747,23 +1747,15 @@ static void get_scan_count(struct zone *
 	u64 fraction[2], denominator;
 	enum lru_list l;
 	int noswap = 0;
-	int force_scan = 0;
+	bool force_scan = false;
 	unsigned long nr_force_scan[2];
 
-
-	anon  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_ANON) +
-		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON);
-	file  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_FILE) +
-		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
-
-	if (((anon + file) >> priority) < SWAP_CLUSTER_MAX) {
-		/* kswapd does zone balancing and need to scan this zone */
-		if (scanning_global_lru(sc) && current_is_kswapd())
-			force_scan = 1;
-		/* memcg may have small limit and need to avoid priority drop */
-		if (!scanning_global_lru(sc))
-			force_scan = 1;
-	}
+	/* kswapd does zone balancing and needs to scan this zone */
+	if (scanning_global_lru(sc) && current_is_kswapd())
+		force_scan = true;
+	/* memcg may have small limit and need to avoid priority drop */
+	if (!scanning_global_lru(sc))
+		force_scan = true;
 
 	/* If we have no swap space, do not bother scanning anon pages. */
 	if (!sc->may_swap || (nr_swap_pages <= 0)) {
@@ -1776,6 +1768,11 @@ static void get_scan_count(struct zone *
 		goto out;
 	}
 
+	anon  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_ANON) +
+		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON);
+	file  = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_FILE) +
+		zone_nr_lru_pages(zone, sc, LRU_INACTIVE_FILE);
+
 	if (scanning_global_lru(sc)) {
 		free  = zone_page_state(zone, NR_FREE_PAGES);
 		/* If we have very few page cache pages,
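
As a note for reviewers unfamiliar with get_scan_count(): the following
standalone sketch illustrates the arithmetic the patch corrects. It is
not kernel code; the page counts, the priority value, and the flat
structure are made up for illustration, and only the size check and the
force-scan decision mirror the before/after logic of this patch.

/*
 * Sketch of the bug: unswappable anon pages used to inflate the
 * (anon + file) size check that gated force_scan, so a small
 * file-only target whose scan goal rounded down to zero was never
 * force-scanned and the priority dropped instead.
 */
#include <stdbool.h>
#include <stdio.h>

#define SWAP_CLUSTER_MAX 32UL

int main(void)
{
	unsigned long anon = 131072;	/* 512MB of anon, but no swap */
	unsigned long file = 256;	/* 1MB of page cache */
	int priority = 12;		/* initial scan priority */

	/* Effective file scan goal at this priority: rounds to zero. */
	unsigned long scan = file >> priority;

	/* Old gate: anon counts even though it cannot be scanned, so
	 * the target does not look "small" and force_scan stays off:
	 * (131072 + 256) >> 12 == 32, which is not < SWAP_CLUSTER_MAX. */
	bool old_force = ((anon + file) >> priority) < SWAP_CLUSTER_MAX;

	/* New logic: a memcg target is always force-scan eligible, and
	 * a zero goal is bumped to a minimum batch, so reclaim makes
	 * progress instead of pointlessly raising the priority. */
	bool new_force = true;		/* the !scanning_global_lru() case */
	if (scan == 0 && new_force)
		scan = SWAP_CLUSTER_MAX;

	printf("old: force=%d, scan goal=%lu (stuck)\n",
	       old_force, file >> priority);
	printf("new: force=%d, scan goal=%lu\n", new_force, scan);
	return 0;
}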