On Fri, Jun 28, 2013 at 03:29:25PM +0100, Mel Gorman wrote:
> > Oh duh indeed. I totally missed it did that. Changelog also isn't giving
> > rationale for this. Mel?
> 
> There were a few reasons.
> 
> First, if there are many tasks sharing the page then they'll all move towards
> the same node. The node will be compute overloaded and then scheduled away
> later only to bounce back again. Alternatively the shared tasks would
> just bounce around nodes because the fault information is effectively
> noise. Either way, I felt that accounting for shared faults alongside private
> faults would be slower overall.
> 
> The second reason was based on a hypothetical workload that had a small
> number of very important, heavily accessed private pages but a large shared
> array. The shared array would dominate the number of faults and its node
> would be selected as the preferred node even though that is the wrong decision.
> 
> The third reason was that multiple threads in a process will race
> each other to fault the shared page, making the information unreliable.
> 
> It is important that *something* be done with shared faults but I haven't
> thought of what exactly yet. One possibility would be to give them a
> different weight, maybe based on the number of active NUMA nodes, but I had
> not tested anything yet. Peter suggested privately that if shared faults
> dominate the workload then the shared pages would be migrated based on an
> interleave policy, which has some potential.
It would be good to put something like this in the changelog, or even as a
comment near where we select the preferred node.
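
To make the weighting idea from Mel's reply concrete, here is a rough userspace
sketch (not the in-tree implementation; the structure, field names, array sizes
and the choice of scaling factor are all assumptions for illustration). It scales
shared faults down by the number of nodes that have seen any shared fault before
picking a preferred node:

/*
 * Rough illustration only -- not kernel code. Shared faults are scaled
 * down by the number of nodes that saw any shared fault, so a widely
 * shared array cannot dominate the preferred node decision the way the
 * hypothetical workload in the mail above describes.
 */
#include <stdio.h>

#define NR_NODES 4

struct numa_faults {
	unsigned long private_faults[NR_NODES];
	unsigned long shared_faults[NR_NODES];
};

static int select_preferred_node(const struct numa_faults *f)
{
	unsigned long best_score = 0;
	int nid, best_nid = -1, shared_nodes = 0;

	/* Count how many nodes have seen shared faults at all. */
	for (nid = 0; nid < NR_NODES; nid++)
		if (f->shared_faults[nid])
			shared_nodes++;

	for (nid = 0; nid < NR_NODES; nid++) {
		unsigned long score = f->private_faults[nid];

		/* Down-weight shared faults by how widely they are spread. */
		if (shared_nodes)
			score += f->shared_faults[nid] / shared_nodes;

		if (score > best_score) {
			best_score = score;
			best_nid = nid;
		}
	}

	return best_nid;
}

int main(void)
{
	/*
	 * Shared array faulted mostly from node 0, hot private pages on
	 * node 2. Counting raw faults would pick node 0 (910 vs 500);
	 * with the down-weighting, node 2's private pages decide.
	 */
	struct numa_faults f = {
		.private_faults = {  10,   5, 400,   0 },
		.shared_faults  = { 900, 200, 100, 150 },
	};

	printf("preferred node: %d\n", select_preferred_node(&f));
	return 0;
}

Whether scaling by the number of active NUMA nodes is the right factor is exactly
the open question in the mail above; the sketch only shows where such a weight
would plug into preferred node selection.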