On Wed, Jun 26, 2013 at 03:38:06PM +0100, Mel Gorman wrote:
> +void task_numa_fault(int last_nid, int node, int pages, bool migrated)
>  {
>       struct task_struct *p = current;
> +     int priv = (cpu_to_node(task_cpu(p)) == last_nid);
>  
>       if (!sched_feat_numa(NUMA))
>               return;
>  
>       /* Allocate buffer to track faults on a per-node basis */
>       if (unlikely(!p->numa_faults)) {
> -             int size = sizeof(*p->numa_faults) * nr_node_ids;
> +             int size = sizeof(*p->numa_faults) * 2 * nr_node_ids;
>  
>               /* numa_faults and numa_faults_buffer share the allocation */
> -             p->numa_faults = kzalloc(size * 2, GFP_KERNEL);
> +             p->numa_faults = kzalloc(size * 4, GFP_KERNEL);
>               if (!p->numa_faults)
>                       return;

So you need a buffer 2x the size in total; but since size itself has
already doubled, the * 4 here allocates a buffer 4x larger than before
(8 * nr_node_ids entries where 4 * nr_node_ids would do).

Isn't doubling size alone sufficient, i.e. keeping the kzalloc at
size * 2?
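
For reference, here is a minimal userspace sketch of the sizing I have
in mind; the private/shared split and the shared allocation layout are
my reading of your patch, not actual kernel code:

	#include <stdio.h>
	#include <stdlib.h>

	int main(void)
	{
		int nr_node_ids = 4;	/* example value */

		/* Assumed layout: two counters per node (shared and
		 * private), with numa_faults immediately followed by
		 * numa_faults_buffer of the same length, so 4 *
		 * nr_node_ids entries in total. */
		size_t size = sizeof(unsigned long) * 2 * nr_node_ids;

		/* size is already doubled above, so doubling it once
		 * more covers both arrays... */
		unsigned long *numa_faults = calloc(1, size * 2);
		unsigned long *numa_faults_buffer =
			numa_faults + 2 * nr_node_ids;

		/* ...whereas size * 4 would give 8 * nr_node_ids
		 * entries, twice what the two arrays need. */
		printf("entries needed: %d, with size*4: %d\n",
		       4 * nr_node_ids, 8 * nr_node_ids);

		(void)numa_faults_buffer;
		free(numa_faults);
		return 0;
	}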