On 9/22/2020 11:29 AM, Li RongQing wrote:
Current Rx round robin policy for the slaves has two issues:

1. active_slave in bond_dev_private is shared by multiple PMD
threads, which may leave some slaves starved on Rx. For
example, with two PMD threads and two slave ports, both PMDs
start to receive, both see active_slave as 0 and receive from
slave 0; after completing, each increments active_slave by
one, so in total active_slave is advanced by two. Next time
they will start receiving from slave 0 again, and slave 1 may
drop packets because it is never polled by a PMD.

2. active_slave is shared and written by multiple PMD threads in
the Rx path on every receive; this is a kind of cache false
sharing and hurts performance (see the sketch after this list).
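
A rough sketch of both issues, using hypothetical simplified
types and helpers (bond_dev_private_sketch and poll_slave() are
invented for illustration, not the real driver code):

#include <stdint.h>

/* Hypothetical stand-in for receiving a burst from one slave
 * port; the real driver calls rte_eth_rx_burst() on the slave. */
static uint16_t poll_slave(uint16_t slave_id)
{
        (void)slave_id;
        return 0;
}

struct bond_dev_private_sketch {
        uint16_t active_slave; /* one index shared by every Rx queue */
        uint16_t num_slaves;
};

/* Runs concurrently on multiple lcores against the same 'internals'. */
static uint16_t rx_burst_shared(struct bond_dev_private_sketch *internals)
{
        /* With two slaves, two PMD threads may both load 0 here... */
        uint16_t nb_rx = poll_slave(internals->active_slave);

        /* ...and both advance the shared index afterwards: the first
         * write makes it 1, the second wraps it back to 0, so the
         * next round again starts at slave 0 and slave 1 goes
         * unpolled. The unsynchronized writes on every burst also
         * bounce the cache line holding active_slave between cores. */
        if (++internals->active_slave >= internals->num_slaves)
                internals->active_slave = 0;

        return nb_rx;
}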

So move active_slave from bond_dev_private to bond_rx_queue,
making it a per-queue variable.
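
And a sketch of the fix under the same invented types: each Rx
queue carries its own copy of the index, and since a given queue
is polled by a single lcore, no other thread reads or writes it
(bond_rx_queue_sketch is likewise hypothetical and reuses the
poll_slave() stub above):

struct bond_rx_queue_sketch {
        uint16_t queue_id;
        uint16_t active_slave; /* now private to this Rx queue */
};

static uint16_t rx_burst_per_queue(struct bond_rx_queue_sketch *bd_rx_q,
                                   uint16_t num_slaves)
{
        uint16_t nb_rx = poll_slave(bd_rx_q->active_slave);

        /* Only the lcore that owns this queue writes the index, so
         * the round robin advances by exactly one per burst and the
         * write stays local to that core's cache. */
        if (++bd_rx_q->active_slave >= num_slaves)
                bd_rx_q->active_slave = 0;

        return nb_rx;
}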

Signed-off-by: Li RongQing <lirongq...@baidu.com>
Signed-off-by: Dongsheng Rong <rongdongsh...@baidu.com>

Fixes: ae2a04864a9a ("net/bonding: reduce slave starvation on Rx poll")
Cc: sta...@dpdk.org

For series,
Reviewed-by: Wei Hu (Xavier) <xavier.hu...@huawei.com>

Series applied to dpdk-next-net/main, thanks.
