Hi,

I am testing a pipeline application with two or more cores using DPDK 19.05.
The application consists of:
  Core1: forever gets packets from an Ethernet interface (using
         rte_eth_rx_burst), inspects packet headers such as EtherType
         and UDP dst_port to determine the application (say app_1,
         app_2, etc.), and forwards each packet to the rte_ring of
         app_i (call it app_i_ring); a simplified sketch of this
         dispatch loop follows the list
  Core2: specialized for app_1 processing; has an RX rte_ring (call it
         app_1_ring) and an app_1 pipeline consisting of a few
         Hash/Array tables
  Core3: specialized for app_2 processing; has an RX rte_ring (call it
         app_2_ring) and an app_2 pipeline consisting of a few
         Hash/Array tables
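
To make this concrete, the core1 dispatch loop looks roughly like the
sketch below (simplified: classify() is a placeholder for my actual
classification code, and the parsing assumes untagged IPv4/UDP):

    #include <netinet/in.h>
    #include <rte_ethdev.h>
    #include <rte_ring.h>
    #include <rte_mbuf.h>
    #include <rte_ether.h>
    #include <rte_ip.h>
    #include <rte_udp.h>

    #define BURST 32

    unsigned classify(uint16_t udp_dst_port); /* placeholder */

    static void
    dispatch_loop(uint16_t port, struct rte_ring **app_rings)
    {
        struct rte_mbuf *pkts[BURST];
        uint16_t nb, i;

        for (;;) {
            nb = rte_eth_rx_burst(port, 0, pkts, BURST);
            for (i = 0; i < nb; i++) {
                struct ether_hdr *eth =
                    rte_pktmbuf_mtod(pkts[i], struct ether_hdr *);
                unsigned app = 0; /* default app */

                /* Classification, assuming untagged IPv4/UDP. */
                if (eth->ether_type == rte_cpu_to_be_16(ETHER_TYPE_IPv4)) {
                    struct ipv4_hdr *ip = (struct ipv4_hdr *)(eth + 1);
                    if (ip->next_proto_id == IPPROTO_UDP) {
                        struct udp_hdr *udp = (struct udp_hdr *)
                            ((char *)ip + ((ip->version_ihl & 0x0f) << 2));
                        app = classify(rte_be_to_cpu_16(udp->dst_port));
                    }
                }

                /* Hand off to app_i_ring; drop if the ring is full. */
                if (rte_ring_enqueue(app_rings[app], pkts[i]) != 0)
                    rte_pktmbuf_free(pkts[i]);
            }
        }
    }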

When I run this application with cores 1-3, it works fine, without any
table misses. When I add a second app_1 or app_2 core (for instance,
adding core4 running app_1), I get about a 0.05% miss rate on the app_1
hash tables.
The only difference between the core1-3 and core1-4 setups is that app_1
then has two cores simultaneously running its pipeline instance and
doing lookups on the same set of tables.
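
To show what I mean by a shared table, here is a simplified sketch of
how one of them could be created (this assumes a plain rte_hash table;
entries, key_len, and extra_flag are illustrative rather than my exact
values):

    #include <rte_hash.h>
    #include <rte_hash_crc.h>

    static struct rte_hash *
    create_shared_table(const char *name, int socket_id)
    {
        struct rte_hash_parameters params = {
            .name = name,
            .entries = 1 << 16,   /* illustrative size */
            .key_len = 16,        /* illustrative key size */
            .hash_func = rte_hash_crc,
            .hash_func_init_val = 0,
            .socket_id = socket_id,
            /* Plain rte_hash lookups are safe concurrently with other
             * lookups, but not with add/delete unless a flag such as
             * RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY is set. */
            .extra_flag = 0,
        };

        return rte_hash_create(&params);
    }

Both app_1 cores then call the lookup functions on the same struct
rte_hash pointer.
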
Please note that I have logged the packets whose lookups missed,
together with the key from their metadata, and the keys are correct when
the misses happen.
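
For reference, each app core runs a loop along these lines (a simplified
sketch: build_key(), log_miss(), and process() are placeholders for my
actual code, and the ring is assumed to be created multi-consumer, i.e.
without RING_F_SC_DEQ, once two cores dequeue from it):

    #include <rte_ring.h>
    #include <rte_hash.h>
    #include <rte_mbuf.h>

    #define BURST 32

    /* Placeholders for my actual code. */
    const void *build_key(struct rte_mbuf *m);
    void log_miss(struct rte_mbuf *m, const void *key);
    void process(struct rte_mbuf *m, void *data);

    static void
    app_worker_loop(struct rte_ring *ring, struct rte_hash *table)
    {
        struct rte_mbuf *pkts[BURST];
        void *data;
        unsigned nb, i;

        for (;;) {
            nb = rte_ring_dequeue_burst(ring, (void **)pkts, BURST, NULL);
            for (i = 0; i < nb; i++) {
                const void *key = build_key(pkts[i]);

                /* This is where the ~0.05% misses show up. */
                if (rte_hash_lookup_data(table, key, &data) < 0)
                    log_miss(pkts[i], key);
                else
                    process(pkts[i], data);
            }
        }
    }
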
Is there any reason for these table misses? Am I missing something?

Thanks,
Mehrdad
malip...@ciena.com
