Even though the current mapping is correct for the 1 CPU and 2 CPU cases
(currently enetc is included only in SoCs with up to 2 CPUs), it is
better to use a generic mapping rule that covers all possible cases.
The number of CPUs is the same as the number of interrupt vectors:

Per device Tx rings -
device_tx_ring[idx], where idx = 0..n_rings_total-1

Per interrupt vector Tx rings -
int_vector[i].ring[j], where i = 0..n_int_vects-1
                             j = 0..n_rings_per_v-1

Mapping rule -
n_rings_per_v = n_rings_total / n_int_vects
for i = 0..n_int_vects - 1:
        for j = 0..n_rings_per_v - 1:
                idx = n_int_vects * j + i
                int_vector[i].ring[j] <- device_tx_ring[idx]
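
For illustration only, a minimal user-space sketch of the rule above;
the names and values (n_rings_total, n_int_vects) mirror the pseudocode
and are examples, not taken from the driver:

#include <stdio.h>

int main(void)
{
	int n_rings_total = 8;	/* example: total Tx rings on the device */
	int n_int_vects = 4;	/* example: one interrupt vector per CPU */
	int n_rings_per_v = n_rings_total / n_int_vects;
	int i, j;

	for (i = 0; i < n_int_vects; i++)
		for (j = 0; j < n_rings_per_v; j++)
			/* same computation as the new driver code:
			 * idx = priv->bdr_int_num * j + i
			 */
			printf("int_vector[%d].ring[%d] <- device_tx_ring[%d]\n",
			       i, j, n_int_vects * j + i);

	return 0;
}

Each vector i is assigned rings i, i + n_int_vects, i + 2*n_int_vects,
..., i.e. the rings are distributed round-robin across the vectors,
which reduces to the previous "2 * j + i" formula when there are
exactly two vectors.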

Signed-off-by: Claudiu Manoil <claudiu.man...@nxp.com>
---
 drivers/net/ethernet/freescale/enetc/enetc.c | 6 +-----
 1 file changed, 1 insertion(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/freescale/enetc/enetc.c b/drivers/net/ethernet/freescale/enetc/enetc.c
index 57049ae97201..1646aaa68bd1 100644
--- a/drivers/net/ethernet/freescale/enetc/enetc.c
+++ b/drivers/net/ethernet/freescale/enetc/enetc.c
@@ -2343,11 +2343,7 @@ int enetc_alloc_msix(struct enetc_ndev_priv *priv)
                        int idx;
 
                        /* default tx ring mapping policy */
-                       if (priv->bdr_int_num == ENETC_MAX_BDR_INT)
-                               idx = 2 * j + i; /* 2 CPUs */
-                       else
-                               idx = j + i * v_tx_rings; /* default */
-
+                       idx = priv->bdr_int_num * j + i;
                        __set_bit(idx, &v->tx_rings_map);
                        bdr = &v->tx_ring[j];
                        bdr->index = idx;
-- 
2.25.1