On Tue, Jan 28, 2025 at 11:46:32AM -0500, Yury Norov wrote:
> A loop based on cpumask_next_wrap() open-codes the dedicated macro
> for_each_online_cpu_wrap(). Using the macro avoids setting bits in the
> affinity mask more than once when stride >= num_online_cpus().
>

nit: Same comment as on patch 2. I don't think re-iterating over
CPUs was ever possible. But I do think the patch improves readability
and simplifies things.

> This also helps to drop cpumask handling code in the caller function.
> 
> CC: Nick Child <nnac...@linux.ibm.com>
> Signed-off-by: Yury Norov <yury.no...@gmail.com>

Built/booted the kernel (patch 10 is ill-formatted), messed around with n
online CPUs and n queues. All ibmvnic affinity values look correct.
Thanks!

Tested-by: Nick Child <nnac...@linux.ibm.com>

> ---
>  drivers/net/ethernet/ibm/ibmvnic.c | 18 +++++++++++-------
>  1 file changed, 11 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ibm/ibmvnic.c b/drivers/net/ethernet/ibm/ibmvnic.c
> index e95ae0d39948..bef18ff69065 100644
> --- a/drivers/net/ethernet/ibm/ibmvnic.c
> +++ b/drivers/net/ethernet/ibm/ibmvnic.c
> @@ -234,11 +234,17 @@ static int ibmvnic_set_queue_affinity(struct ibmvnic_sub_crq_queue *queue,
>               (*stragglers)--;
>       }
>       /* atomic write is safer than writing bit by bit directly */
> -     for (i = 0; i < stride; i++) {
> -             cpumask_set_cpu(*cpu, mask);
> -             *cpu = cpumask_next_wrap(*cpu, cpu_online_mask,
> -                                      nr_cpu_ids, false);
> +     for_each_online_cpu_wrap(i, *cpu) {
> +             if (!stride--) {
> +                     /* For the next queue we start from the first
> +                      * unused CPU in this queue
> +                      */
> +                     *cpu = i;
> +                     break;
> +             }
> +             cpumask_set_cpu(i, mask);
>       }
> +
>       /* set queue affinity mask */
>       cpumask_copy(queue->affinity_mask, mask);
>       rc = irq_set_affinity_and_hint(queue->irq, queue->affinity_mask);
> @@ -256,7 +262,7 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
>       int num_rxqs = adapter->num_active_rx_scrqs, i_rxqs = 0;
>       int num_txqs = adapter->num_active_tx_scrqs, i_txqs = 0;
>       int total_queues, stride, stragglers, i;
> -     unsigned int num_cpu, cpu;
> +     unsigned int num_cpu, cpu = 0;
>       bool is_rx_queue;
>       int rc = 0;
>  
> @@ -274,8 +280,6 @@ static void ibmvnic_set_affinity(struct ibmvnic_adapter *adapter)
>       stride = max_t(int, num_cpu / total_queues, 1);
>       /* number of leftover cpu's */
>       stragglers = num_cpu >= total_queues ? num_cpu % total_queues : 0;
> -     /* next available cpu to assign irq to */
> -     cpu = cpumask_next(-1, cpu_online_mask);
>  
>       for (i = 0; i < total_queues; i++) {
>               is_rx_queue = false;
> -- 
> 2.43.0
> 
