On large memory systems, the VM can spend far too much time scanning through pages that it cannot (or should not) evict from memory. Not only does this burn CPU time, it also provokes lock contention and can leave large systems under memory pressure in a catatonic state.
Against 2.6.24-rc6-mm1

This patch series improves VM scalability by:

1) making the locking a little more scalable

2) putting filesystem backed, swap backed and non-reclaimable pages
   onto their own LRUs, so the system only scans the pages that it
   can/should evict from memory (a rough illustrative sketch of this
   split follows the changelog below)

3) switching to SEQ replacement for the anonymous LRUs, so the number
   of pages that need to be scanned when the system starts swapping is
   bound to a reasonable number

More info on the overall design can be found at:

	http://linux-mm.org/PageReplacementDesign

Changelog:
- merge memcontroller split LRU code into the main split LRU patch,
  since it is not functionally different (it was split up only to help
  people who had seen the last version of the patch series review it)
- drop the page_file_cache debugging patch, since it never triggered
- reintroduce code to not scan the anon list if swap is full
- add code to scan the anon list if the page cache is already very small
- use lumpy reclaim more aggressively for smaller order > 1 allocations
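As a rough illustration of point 2), here is a minimal userspace sketch
of the split LRU idea: pages are routed onto one of several LRU lists
depending on whether they are file backed, swap backed, or not
reclaimable at all, so reclaim never has to walk pages it cannot evict.
This is not the patch's code; the names (enum lru_type, struct
fake_page, page_lru()) are made up for the example.

/*
 * Illustrative sketch only -- models the idea of separate LRU lists
 * for file backed, swap backed and non-reclaimable pages.
 */
#include <stdio.h>

enum lru_type { LRU_FILE, LRU_ANON, LRU_NORECLAIM, NR_LRU_LISTS };

struct fake_page {
	int swap_backed;	/* anonymous or shmem page */
	int evictable;		/* can the VM reclaim it at all? */
};

/* Pick the LRU list a page belongs on. */
static enum lru_type page_lru(const struct fake_page *page)
{
	if (!page->evictable)
		return LRU_NORECLAIM;
	return page->swap_backed ? LRU_ANON : LRU_FILE;
}

int main(void)
{
	static const char *names[NR_LRU_LISTS] = {
		"file backed", "swap backed", "non-reclaimable"
	};
	struct fake_page pages[] = {
		{ .swap_backed = 0, .evictable = 1 },	/* page cache   */
		{ .swap_backed = 1, .evictable = 1 },	/* anonymous    */
		{ .swap_backed = 1, .evictable = 0 },	/* mlocked anon */
	};
	unsigned long count[NR_LRU_LISTS] = { 0 };
	size_t i;

	for (i = 0; i < sizeof(pages) / sizeof(pages[0]); i++)
		count[page_lru(&pages[i])]++;

	for (i = 0; i < NR_LRU_LISTS; i++)
		printf("%-16s LRU: %lu page(s)\n", names[i], count[i]);
	return 0;
}

With the lists kept separate like this, a reclaim pass that wants file
backed pages only ever touches the LRU_FILE list, and non-reclaimable
pages are never scanned at all.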