The MP server distributes Rx packets among clients according to a round-robin scheme.
The current implementation always started packet distribution from the
first client. That procedure resulted in a uniform distribution when the
number of Rx packets was a multiple of the number of clients. However,
if an Rx burst repeatedly returned a single packet, the round-robin
scheme did not work: all packets were assigned to the first client only.

With this patch, packet distribution does not restart from the first
client; it always continues with the next client.

Cc: sta...@dpdk.org
Fixes: af75078fece3 ("first public release")

Signed-off-by: Gregory Etelson <getel...@nvidia.com>
Acked-by: Anatoly Burakov <anatoly.bura...@intel.com>
---
v2: Remove explicit static variable initialization.
v3: Remove comment.
v4: Spell check.
---
 examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index b4761ebc7b..f54bb8b75a 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
 		struct rte_mbuf *pkts[], uint16_t rx_count)
 {
 	uint16_t i;
-	uint8_t client = 0;
+	static uint8_t client;
 
 	for (i = 0; i < rx_count; i++) {
 		enqueue_rx_packet(client, pkts[i]);
--
2.33.1
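
[Illustration, not part of the patch.] For readers wondering why a
one-word change fixes the distribution, below is a minimal standalone
sketch of the two behaviours under single-packet bursts. It is not DPDK
code: NUM_CLIENTS, process_burst_old() and process_burst_new() are
made-up names for this demo, and the wrap-around update is written out
inline rather than using the server's own client bookkeeping.

#include <stdio.h>
#include <stdint.h>

#define NUM_CLIENTS 4	/* hypothetical client count for the demo */

/* Old behaviour: the cursor resets to 0 on every burst, so when each
 * Rx burst delivers a single packet, every packet lands on client 0. */
static void
process_burst_old(uint16_t rx_count)
{
	uint8_t client = 0;

	for (uint16_t i = 0; i < rx_count; i++) {
		printf("old: packet -> client %u\n", (unsigned)client);
		client = (client + 1) % NUM_CLIENTS;
	}
}

/* Patched behaviour: the cursor is static, so it survives across calls
 * and distribution continues from where the previous burst stopped. */
static void
process_burst_new(uint16_t rx_count)
{
	static uint8_t client;

	for (uint16_t i = 0; i < rx_count; i++) {
		printf("new: packet -> client %u\n", (unsigned)client);
		client = (client + 1) % NUM_CLIENTS;
	}
}

int
main(void)
{
	/* Eight bursts of one packet each: the old variant pins every
	 * packet to client 0; the new one cycles 0,1,2,3,0,1,2,3. */
	for (int burst = 0; burst < 8; burst++) {
		process_burst_old(1);
		process_burst_new(1);
	}
	return 0;
}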