BTW, in my little test, the median ->count was 10, and the mean was 45.

On 09/11/2013 04:21 PM, Cody P Schafer wrote:
> Also, we may want to consider shrinking pcp->high down from 6*pcp->batch
> given that the original "6*" choice was based upon ->batch actually
> being 1/4th of the average pageset size, where now it appears closer to
> being the average.
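Spelling out the arithmetic behind the quoted point (taking "average
pageset size" to mean the average ->count, so the numbers are only
illustrative):

	old: ->batch ~= avg/4,  so ->high = 6 * ->batch ~= 1.5 * avg
	now: ->batch ~= avg,    so ->high = 6 * ->batch ~= 6 * avg

i.e. the unchanged 6* multiplier now targets roughly four times as many
pages as it was originally tuned for.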
One other thing: we actually had a hot _and_ a cold pageset at that
point, and we now share one pageset for hot and cold pages.

After looking at it for a bit today, I'm not sure how much the history
matters.  We probably need to take a fresh look at what we want.
Anybody disagree with this?

1. We want ->batch to be large enough that, if all the CPUs in a zone
   are doing allocations constantly, there is very little contention on
   the zone_lock.
2. If ->high gets too large, we'll end up keeping too much memory in
   the pcp, and __alloc_pages_direct_reclaim() will end up calling the
   (expensive) drain_all_pages() too often.
3. We want ->high to approximate the size of the cache which is private
   to a given cpu.  But that's complicated by shared L3 caches and
   hyperthreading today.
4. ->high can be a _bit_ larger than the CPU cache without it being a
   real problem, since not _all_ the pages being freed will be fully
   resident in the cache.  Some will be cold, and some will only have a
   few of their cachelines resident.
5. A 0.75MB ->high seems a bit low for CPUs with 30MB of L3 cache on
   the socket (although 20 threads share that).  A sketch of what
   sizing ->high against the per-thread cache share could look like
   follows below.

I'll take one of my big systems and run it with various ->high
settings and see if it makes any difference.
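For concreteness, here's a throwaway userspace sketch of the
cache-share sizing idea from points 3-5 above.  The 2x slack factor and
the 30MB / 20-thread numbers are just the example figures from this
mail; none of this is what the kernel actually does today:

/*
 * Hypothetical sizing of pcp->high: give each hardware thread its
 * share of the socket's L3, with some slack because not every freed
 * page is still cache-resident (point 4).
 */
#include <stdio.h>

int main(void)
{
	long l3_bytes  = 30L << 20;	/* 30MB L3 on the socket */
	int  threads   = 20;		/* threads sharing that L3 */
	long page_size = 4096;
	int  slack     = 2;		/* ->high may exceed the cache share */

	long share = l3_bytes / threads;	/* per-thread slice of L3 */
	long high  = slack * share / page_size;	/* candidate ->high, in pages */

	printf("per-thread L3 share: %ldKB\n", share >> 10);
	printf("candidate ->high:    %ld pages (%ldKB)\n",
	       high, high * page_size >> 10);
	return 0;
}

With these (made-up) inputs it lands at 768 pages (3MB) per cpu, versus
the 192 pages (0.75MB) we have now, which matches the hunch in point 5
that the current value is on the low side.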