Have you ever experienced this problem in practice? I ask because I am considering some fixes that would limit the number of slaves to a more reasonable number (and reduce the overall stack usage of the bonding driver in general).

On 3/21/19 4:28 PM, David Marchand wrote:
From: Zhaohui <zhaoh...@huawei.com>

The slave aggregator_port_id is in the [0, RTE_MAX_ETHPORTS-1] range.
If RTE_MAX_ETHPORTS is > 8, we can hit out-of-bound accesses on the
agg_bandwidth[] and agg_count[] arrays.

Fixes: 6d72657ce379 ("net/bonding: add other aggregator modes")
Cc: sta...@dpdk.org

Signed-off-by: Zhaohui <zhaoh...@huawei.com>
Signed-off-by: David Marchand <david.march...@redhat.com>

Acked-by: Chas Williams <ch...@att.com>

---
  drivers/net/bonding/rte_eth_bond_8023ad.c | 4 ++--
  1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index 3943ec1..5004898 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -669,8 +669,8 @@
        struct port *agg, *port;
        uint16_t slaves_count, new_agg_id, i, j = 0;
        uint16_t *slaves;
-       uint64_t agg_bandwidth[8] = {0};
-       uint64_t agg_count[8] = {0};
+       uint64_t agg_bandwidth[RTE_MAX_ETHPORTS] = {0};
+       uint64_t agg_count[RTE_MAX_ETHPORTS] = {0};
        uint16_t default_slave = 0;
        uint16_t mode_count_id;
        uint16_t mode_band_id;
