On 01/26/2014 08:32 AM, Ingo Molnar wrote:
> 
> * Prarit Bhargava <pra...@redhat.com> wrote:
> 
>> On 01/25/2014 03:02 AM, Ingo Molnar wrote:
>>>
>>> * Yinghai Lu <ying...@kernel.org> wrote:
>>>
>>>> Fix warning:
>>>> arch/x86/kernel/irq.c: In function check_irq_vectors_for_cpu_disable:
>>>> arch/x86/kernel/irq.c:337:1: warning: the frame size of 2052 bytes is
>>>> larger than 2048 bytes
>>>>
>>>> when NR_CPUS=8192
>>>>
>>>> We should use zalloc_cpumask_var() instead.
>>>>
>>>> -v2: update to GFP_ATOMIC instead and free the allocated cpumask at last.
>>>>
>>>> Signed-off-by: Yinghai Lu <ying...@kernel.org>
>>>> Cc: Prarit Bhargava <pra...@redhat.com>
>>>>
>>>> ---
>>>>  arch/x86/kernel/irq.c | 24 +++++++++++++++++-------
>>>>  1 file changed, 17 insertions(+), 7 deletions(-)
>>>>
>>>> Index: linux-2.6/arch/x86/kernel/irq.c
>>>> ===================================================================
>>>> --- linux-2.6.orig/arch/x86/kernel/irq.c
>>>> +++ linux-2.6/arch/x86/kernel/irq.c
>>>> @@ -277,11 +277,18 @@ int check_irq_vectors_for_cpu_disable(vo
>>>>  	unsigned int this_cpu, vector, this_count, count;
>>>>  	struct irq_desc *desc;
>>>>  	struct irq_data *data;
>>>> -	struct cpumask affinity_new, online_new;
>>>> +	cpumask_var_t affinity_new, online_new;
>>>> +
>>>> +	if (!alloc_cpumask_var(&affinity_new, GFP_ATOMIC))
>>>> +		return -ENOMEM;
>>>> +	if (!alloc_cpumask_var(&online_new, GFP_ATOMIC)) {
>>>> +		free_cpumask_var(affinity_new);
>>>> +		return -ENOMEM;
>>>> +	}
>>>
>>> Atomic allocations can fail easily if the system is under duress.
>>
>> Then the hotplug attempt will fail which IMO is okay. [...]
> 
> Which is not OK at all for reliable operation, if the system has 
> otherwise gobs of RAM, which just don't happen to be atomic 
> allocatable!
Ingo, I'm really not sure what other option there is here.  Care to suggest
one?

P.
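[Editor's note: one common way around the on-stack-cpumask vs. GFP_ATOMIC
dilemma is to allocate the masks once at init time with GFP_KERNEL and reuse
them from the hotplug path.  The sketch below is only illustrative, not the
patch that was merged: the initcall helper name is made up, and it assumes
check_irq_vectors_for_cpu_disable() cannot run concurrently on two CPUs (so
the static masks need no locking), which would have to be verified.]

    #include <linux/cpumask.h>
    #include <linux/errno.h>
    #include <linux/init.h>

    /*
     * Illustrative sketch only: static masks sized for NR_CPUS, allocated
     * once at boot with a sleeping allocation, so the CPU-down path never
     * has to do a GFP_ATOMIC allocation that can fail under memory pressure.
     */
    static cpumask_var_t affinity_new, online_new;

    /* Hypothetical helper name, for illustration only. */
    static int __init alloc_irq_check_masks(void)
    {
    	if (!alloc_cpumask_var(&affinity_new, GFP_KERNEL))
    		return -ENOMEM;
    	if (!alloc_cpumask_var(&online_new, GFP_KERNEL)) {
    		free_cpumask_var(affinity_new);
    		return -ENOMEM;
    	}
    	return 0;
    }
    core_initcall(alloc_irq_check_masks);

check_irq_vectors_for_cpu_disable() would then clear and fill these masks on
each invocation instead of declaring them on the stack or allocating them
atomically.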