On Wed, Apr 16, 2025 at 08:36:21AM -0700, Shradha Gupta wrote:
> Currently, the MANA driver allocates pci vector statically based on
> MANA_MAX_NUM_QUEUES and num_online_cpus() values and in some cases ends
> up allocating more vectors than it needs.
> This is because, by this time we do not have a HW channel and do not know
> how many IRQs should be allocated.
> To avoid this, we allocate 1 IRQ vector during the creation of HWC and
> after getting the value supported by hardware, dynamically add the
> remaining vectors.
>
> Signed-off-by: Shradha Gupta <shradhagu...@linux.microsoft.com>
> Reviewed-by: Haiyang Zhang <haiya...@microsoft.com>
Hi Shradha,

Some minor nits from my side.

...

> diff --git a/drivers/net/ethernet/microsoft/mana/gdma_main.c
> b/drivers/net/ethernet/microsoft/mana/gdma_main.c

...

> @@ -465,9 +475,10 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
>         struct gdma_irq_context *gic;
>         struct gdma_context *gc;
>         unsigned int msi_index;
> -       unsigned long flags;
> +       struct list_head *pos;
> +       unsigned long flags, flag_irq;
>         struct device *dev;
> -       int err = 0;
> +       int err = 0, count;

As this is Networking code, please preserve the arrangement of local
variables in reverse xmas tree order - longest line to shortest.

Edward Cree's tool can be useful in this area:
https://github.com/ecree-solarflare/xmastree

>
>         gc = gd->gdma_context;
>         dev = gc->dev;

> @@ -482,7 +493,22 @@ static int mana_gd_register_irq(struct gdma_queue *queue,
>         }
>
>         queue->eq.msix_index = msi_index;
> -       gic = &gc->irq_contexts[msi_index];
> +
> +       /* get the msi_index value from the list*/
> +       count = 0;
> +       spin_lock_irqsave(&gc->irq_ctxs_lock, flag_irq);
> +       list_for_each(pos, &gc->irq_contexts) {
> +               if (count == msi_index) {
> +                       gic = list_entry(pos, struct gdma_irq_context,
> gic_list);

Please consider line wrapping to 80 columns or less, as is still
preferred in Networking code. Likewise elsewhere in this patch.

checkpatch.pl --max-line-length=80 can be helpful here.

> +                       break;
> +               }
> +
> +               count++;
> +       }
> +       spin_unlock_irqrestore(&gc->irq_ctxs_lock, flag_irq);
> +
> +       if (!gic)
> +               return -1;
>
>         spin_lock_irqsave(&gic->lock, flags);
>         list_add_rcu(&queue->entry, &gic->eq_list);

> @@ -497,8 +523,10 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
>         struct gdma_irq_context *gic;
>         struct gdma_context *gc;
>         unsigned int msix_index;
> -       unsigned long flags;
> +       struct list_head *pos;
> +       unsigned long flags, flag_irq;
>         struct gdma_queue *eq;
> +       int count;

Reverse xmas tree here too.
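If it helps, the declarations above sorted by line length might look like
the sketch below. This is a userspace illustration only - the stub types
and the demo function name are made up here, not driver code - and the
function body just touches each local so the example compiles cleanly:

```c
#include <assert.h>
#include <stddef.h>

/* Stub types standing in for the real GDMA structures; this sketch only
 * illustrates declaration ordering, not driver behaviour. */
struct gdma_irq_context { int id; };
struct gdma_context { int num_msix; };
struct list_head { struct list_head *next; };
struct device { int id; };

static int register_irq_locals_demo(void)
{
	/* Reverse xmas tree: longest declaration line first,
	 * shortest last. */
	unsigned long flags, flag_irq;
	struct gdma_irq_context *gic;
	struct gdma_context *gc;
	unsigned int msi_index;
	struct list_head *pos;
	struct device *dev;
	int err = 0, count;

	/* Touch each variable so the sketch compiles without warnings. */
	flags = flag_irq = 0;
	msi_index = 0;
	count = 0;
	gic = NULL;
	gc = NULL;
	pos = NULL;
	dev = NULL;
	(void)flags; (void)flag_irq; (void)msi_index;
	(void)gic; (void)gc; (void)pos; (void)dev;

	return err + count;
}
```

Note that with the new declarations the `unsigned long flags, flag_irq;`
line is the longest, so it moves above the `gic` line.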
>
>         gc = gd->gdma_context;
>
> @@ -507,7 +535,22 @@ static void mana_gd_deregiser_irq(struct gdma_queue *queue)
>         if (WARN_ON(msix_index >= gc->num_msix_usable))
>                 return;
>
> -       gic = &gc->irq_contexts[msix_index];
> +       /* get the msi_index value from the list*/
> +       count = 0;
> +       spin_lock_irqsave(&gc->irq_ctxs_lock, flag_irq);
> +       list_for_each(pos, &gc->irq_contexts) {
> +               if (count == msix_index) {
> +                       gic = list_entry(pos, struct gdma_irq_context,
> gic_list);
> +                       break;
> +               }
> +
> +               count++;
> +       }
> +       spin_unlock_irqrestore(&gc->irq_ctxs_lock, flag_irq);
> +

Does gic need to be initialised to NULL before the list_for_each loop
to ensure that it is always initialised here?

This is flagged by Clang 20.1.2 KCFLAGS=-Wsometimes-uninitialized
builds, and by Smatch.

> +       if (!gic)
> +               return;
> +
>         spin_lock_irqsave(&gic->lock, flags);
>         list_for_each_entry_rcu(eq, &gic->eq_list, entry) {
>                 if (queue == eq) {

...

> @@ -1317,29 +1372,92 @@ static int irq_setup(unsigned int *irqs, unsigned int len, int node)
>         return 0;
> }
>
> -static int mana_gd_setup_irqs(struct pci_dev *pdev)
> +static int mana_gd_setup_dyn_irqs(struct pci_dev *pdev, int nvec)
> {
>         struct gdma_context *gc = pci_get_drvdata(pdev);
> -       unsigned int max_queues_per_port;
>         struct gdma_irq_context *gic;
> -       unsigned int max_irqs, cpu;
> -       int start_irq_index = 1;
> -       int nvec, *irqs, irq;
> +       int *irqs, irq, skip_first_cpu = 0;
> +       unsigned long flags;
>         int err, i = 0, j;

Reverse xmas tree here too.

>
>         cpus_read_lock();
> -       max_queues_per_port = num_online_cpus();
> -       if (max_queues_per_port > MANA_MAX_NUM_QUEUES)
> -               max_queues_per_port = MANA_MAX_NUM_QUEUES;
> +       spin_lock_irqsave(&gc->irq_ctxs_lock, flags);
> +       irqs = kmalloc_array(nvec, sizeof(int), GFP_KERNEL);
> +       if (!irqs) {
> +               err = -ENOMEM;
> +               goto free_irq_vector;
> +       }

...
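To illustrate the uninitialised-pointer hazard: if msix_index never
matches, the loop falls through and the subsequent `if (!gic)` check
reads an uninitialised pointer. Below is a minimal userspace mock of
the intrusive-list lookup - all names here are made up for
illustration, none of it is the real driver or kernel list.h code -
showing why the pointer has to start out NULL:

```c
#include <assert.h>
#include <stddef.h>

/* Userspace mock of an intrusive linked list in the style of the
 * kernel's list.h, only to demonstrate the hazard. */
struct list_head { struct list_head *next, *prev; };

static void list_head_init(struct list_head *h)
{
	h->next = h;
	h->prev = h;
}

static void list_add_tail_mock(struct list_head *n, struct list_head *h)
{
	n->prev = h->prev;
	n->next = h;
	h->prev->next = n;
	h->prev = n;
}

#define container_of_mock(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct irq_ctx_mock {
	int msi_index;
	struct list_head node;
};

/* Return the count-th entry, or NULL when the index is out of range.
 * Without the '= NULL' initialiser, the out-of-range case would hand
 * back an uninitialised pointer - the pattern that Clang's
 * -Wsometimes-uninitialized and Smatch flag in the patch. */
static struct irq_ctx_mock *find_ctx_mock(struct list_head *head, int index)
{
	struct irq_ctx_mock *found = NULL;	/* the needed initialiser */
	struct list_head *pos;
	int count = 0;

	for (pos = head->next; pos != head; pos = pos->next) {
		if (count == index) {
			found = container_of_mock(pos, struct irq_ctx_mock,
						  node);
			break;
		}
		count++;
	}
	return found;
}
```

With the initialiser in place, the caller's NULL check is well defined
for every index, matching the `if (!gic)` bail-out in the patch.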