On Tue, Aug 20, 2019 at 01:59:32AM -0700, John Garry wrote:
> diff --git a/kernel/irq/manage.c b/kernel/irq/manage.c
> index e8f7f179bf77..cb483a055512 100644
> --- a/kernel/irq/manage.c
> +++ b/kernel/irq/manage.c
> @@ -966,9 +966,13 @@ irq_thread_check_affinity(struct irq_desc *desc, struct irqaction *action)
>        * mask pointer. For CPU_MASK_OFFSTACK=n this is optimized out.
>        */
>       if (cpumask_available(desc->irq_common_data.affinity)) {
> +             struct irq_data *irq_data = &desc->irq_data;
>               const struct cpumask *m;
> 
> -             m = irq_data_get_effective_affinity_mask(&desc->irq_data);
> +             if (action->flags & IRQF_IRQ_AFFINITY)
> +                     m = desc->irq_common_data.affinity;
> +             else
> +                     m = irq_data_get_effective_affinity_mask(irq_data);
>               cpumask_copy(mask, m);
>       } else {
>               valid = false;
> -- 
> 2.17.1
> 
> As Ming mentioned in that same thread, we could even make this the
> default policy for managed interrupts.

Ack, I really like this option!
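
For reference, a rough sketch of what the managed-interrupt variant might
look like, keying off the managed-affinity state of the descriptor instead
of a new IRQF_* flag. This assumes irqd_affinity_is_managed() is the right
predicate here; it is untested and only meant to illustrate the idea:

	if (cpumask_available(desc->irq_common_data.affinity)) {
		const struct cpumask *m;

		/*
		 * For managed interrupts, let the thread follow the full
		 * programmed affinity mask rather than the (often
		 * single-CPU) effective mask.
		 */
		if (irqd_affinity_is_managed(&desc->irq_data))
			m = desc->irq_common_data.affinity;
		else
			m = irq_data_get_effective_affinity_mask(&desc->irq_data);
		cpumask_copy(mask, m);
	} else {
		valid = false;
	}

That would give drivers the spreading behaviour by default, with no
opt-in flag to plumb through request_irq() callers.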
