On Fri, Apr 18, 2014 at 04:48:25PM -0400, John Stoffel wrote:
> > "Andrew" == Andrew Morton writes:
>
> Andrew> On Tue, 8 Apr 2014 09:22:58 +0100 Mel Gorman wrote:
> >> Changelog since v1
> >> o topology comment updates
> >>
> >> When it was introduced, zone_reclaim_mode made sense as NUMA distances

> "Andrew" == Andrew Morton writes:
Andrew> On Tue, 8 Apr 2014 09:22:58 +0100 Mel Gorman wrote:
>> Changelog since v1
>> o topology comment updates
>>
>> When it was introduced, zone_reclaim_mode made sense as NUMA distances
> >> punished and workloads were generally partitioned to fit into a NUMA

On Tue, 8 Apr 2014 09:22:58 +0100 Mel Gorman wrote:
> Changelog since v1
> o topology comment updates
>
> When it was introduced, zone_reclaim_mode made sense as NUMA distances
> punished and workloads were generally partitioned to fit into a NUMA
> node. NUMA machines are now common but few of the workloads are NUMA-aware

On Fri, 18 Apr 2014, Michal Hocko wrote:
> Auto-enabling caused so many reports in the past that it is definitely
> much better to not be clever and let admins enable zone_reclaim where it
> is appropriate instead.
>
> For both patches.
> Acked-by: Michal Hocko
I did not get any objections from

On Mon 07-04-14 23:34:26, Mel Gorman wrote:
> When it was introduced, zone_reclaim_mode made sense as NUMA distances
> punished and workloads were generally partitioned to fit into a NUMA
> node. NUMA machines are now common but few of the workloads are NUMA-aware
> and it's routine to see major performance

On 08/04/14 23:58, Christoph Lameter wrote:
The reason that zone reclaim is on by default is that off node accesses
are a big performance hit on large scale NUMA systems (like ScaleMP and
SGI). Zone reclaim was written *because* those systems experienced severe
performance degradation.
On the tig

On Tue, Apr 08, 2014 at 03:56:49PM -0400, Josh Berkus wrote:
> On 04/08/2014 03:53 PM, Robert Haas wrote:
> > In an ideal world, the kernel would put the hottest pages on the local
> > node and the less-hot pages on remote nodes, moving pages around as
> > the workload shifts. In practice, that's probably pretty hard.

On Tue, Apr 08, 2014 at 05:58:21PM -0500, Christoph Lameter wrote:
> On Tue, 8 Apr 2014, Robert Haas wrote:
>
> > Well, as Josh quite rightly said, the hit from accessing remote memory
> > is never going to be as large as the hit from disk. If and when there
> > is a machine where remote memory is more expensive to access than

On Tue, 8 Apr 2014, Robert Haas wrote:
> Well, as Josh quite rightly said, the hit from accessing remote memory
> is never going to be as large as the hit from disk. If and when there
> is a machine where remote memory is more expensive to access than
> disk, that's a good argument for zone_reclaim
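
To put rough numbers on that comparison (order-of-magnitude figures, not
measurements from this thread): a local DRAM access costs on the order of
100 ns, a remote-node access on a typical two or four socket box is perhaps
1.5-2x that, an SSD read is around 100 us, and a rotating-disk seek is several
milliseconds. Even taking 5 ms against a ~200 ns remote access, going to disk
is a factor of roughly 25,000, which is the asymmetry Robert is pointing at.
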
On 04/08/2014 03:53 PM, Robert Haas wrote:
> In an ideal world, the kernel would put the hottest pages on the local
> node and the less-hot pages on remote nodes, moving pages around as
> the workload shifts. In practice, that's probably pretty hard.
> Fortunately, it's not nearly as important as

On Tue, Apr 8, 2014 at 10:17 AM, Christoph Lameter wrote:
> Another solution here would be to increase the threshold so that
> 4 socket machines do not enable zone reclaim by default. The larger the
> NUMA system is the more memory is off node from the perspective of a
> processor and the larger

On 04/08/2014 10:17 AM, Christoph Lameter wrote:
> Another solution here would be to increase the threshold so that
> 4 socket machines do not enable zone reclaim by default. The larger the
> NUMA system is the more memory is off node from the perspective of a
> processor and the larger the hit from
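
For context, the "threshold" Christoph mentions is RECLAIM_DISTANCE: at the
time, the kernel compared ACPI SLIT node distances against it during boot and
switched zone_reclaim_mode on as soon as any node was further away than that.
Below is a small userspace sketch of that check, not the kernel code itself;
the 4-node distance table is invented for illustration, and RECLAIM_DISTANCE
is assumed to be the then-default of 30.

/*
 * Userspace model of the boot-time auto-enable heuristic (roughly what
 * build_zonelists() did at the time, if memory serves). Not kernel code:
 * the SLIT-style distance table below is a hypothetical 4-socket layout
 * where 10 means local and larger values mean further away.
 */
#include <stdio.h>

#define RECLAIM_DISTANCE	30	/* assumed then-default threshold */
#define NR_NODES		4

static const int node_distance[NR_NODES][NR_NODES] = {
	{ 10, 21, 31, 31 },
	{ 21, 10, 31, 31 },
	{ 31, 31, 10, 21 },
	{ 31, 31, 21, 10 },
};

int main(void)
{
	int zone_reclaim_mode = 0;

	/*
	 * One sufficiently distant node pair flips the default on for the
	 * whole machine, so the bigger the box the more likely it trips.
	 * Raising the threshold (or deleting the check, as this series
	 * does) is what changes the default on 4-socket systems.
	 */
	for (int local = 0; local < NR_NODES; local++)
		for (int node = 0; node < NR_NODES; node++)
			if (node_distance[local][node] > RECLAIM_DISTANCE)
				zone_reclaim_mode = 1;

	printf("auto-enabled zone_reclaim_mode: %d\n", zone_reclaim_mode);
	return 0;
}
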
On 2014-04-08 09:17:04 -0500, Christoph Lameter wrote:
> On Tue, 8 Apr 2014, Vlastimil Babka wrote:
>
> > On 04/08/2014 12:34 AM, Mel Gorman wrote:
> > > When it was introduced, zone_reclaim_mode made sense as NUMA distances
> > > punished and workloads were generally partitioned to fit into a NUMA

On Tue, 8 Apr 2014, Vlastimil Babka wrote:
> On 04/08/2014 12:34 AM, Mel Gorman wrote:
> > When it was introduced, zone_reclaim_mode made sense as NUMA distances
> > punished and workloads were generally partitioned to fit into a NUMA
> > node. NUMA machines are now common but few of the workloads are NUMA-aware

Changelog since v1
o topology comment updates

When it was introduced, zone_reclaim_mode made sense as NUMA distances
punished and workloads were generally partitioned to fit into a NUMA
node. NUMA machines are now common but few of the workloads are NUMA-aware
and it's routine to see major performance
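
For anyone following along who wants to see which way their own box is set, a
minimal sketch that only reads the current value (assuming the usual procfs
path and the bit meanings from Documentation/sysctl/vm.txt):

/* Read and decode vm.zone_reclaim_mode; illustration only. */
#include <stdio.h>

int main(void)
{
	FILE *f = fopen("/proc/sys/vm/zone_reclaim_mode", "r");
	int mode;

	if (!f || fscanf(f, "%d", &mode) != 1) {
		perror("zone_reclaim_mode");
		return 1;
	}
	fclose(f);

	printf("zone_reclaim_mode = %d\n", mode);
	printf("  reclaim instead of going off node: %s\n", mode & 1 ? "yes" : "no");
	printf("  write dirty pages during reclaim:  %s\n", mode & 2 ? "yes" : "no");
	printf("  swap pages during reclaim:         %s\n", mode & 4 ? "yes" : "no");
	return 0;
}

Enabling it where it is wanted is the mirror image: write the desired mask back
to the same file, or set vm.zone_reclaim_mode via sysctl.
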
On 04/08/2014 12:34 AM, Mel Gorman wrote:
When it was introduced, zone_reclaim_mode made sense as NUMA distances
punished and workloads were generally partitioned to fit into a NUMA
node. NUMA machines are now common but few of the workloads are NUMA-aware
and it's routine to see major performance

When it was introduced, zone_reclaim_mode made sense as NUMA distances
punished and workloads were generally partitioned to fit into a NUMA
node. NUMA machines are now common but few of the workloads are NUMA-aware
and it's routine to see major performance degradation due to zone_reclaim_mode
being disabled but