Hello,

On Tue, Dec 27, 2022 at 7:06 AM You, KaisenX <kaisenx....@intel.com> wrote:
> > > > > > > I tried to play a bit with a E810 nic on a dual numa and I
> > > > > > > can't see anything wrong for now.
> > > > > > > Can you provide a simple and small reproducer of your issue?
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > This is my environment:
> > > > > > Enter "lscpu" on the command line:
> > > > > > NUMA:
> > > > > >   NUMA node(s):      2
> > > > > >   NUMA node0 CPU(s): 0-27,56-83
> > > > > >   NUMA node1 CPU(s): 28-55,84-111
> > > > > >
> > > > > > List the steps to reproduce the issue:
> > > > > >
> > > > > > 1. Create a VF and bind it to DPDK:
> > > > > > echo 1 > /sys/bus/pci/devices/0000\:ca\:00.0/sriov_numvfs
> > > > > > ./usertools/dpdk-devbind.py -b vfio-pci 0000:ca:01.0
> > > > > > 2. Launch testpmd:
> > > > > > ./x86_64-native-linuxapp-clang/app/dpdk-testpmd -l 28-48 -n 4
> > > > > > -a 0000:ca:01.0 --file-prefix=dpdk_525342_20221104042659 -- -i
> > > > > > --rxq=256 --txq=256 --total-num-mbufs=500000
> > > > > >
> > > > > > Parameter description:
> > > > > > "-l 28-48": the core range passed to "-l" must be within
> > > > > > "NUMA node1 CPU(s)"
> > > > > > "0000:ca:01.0": the device sits on node1
> > > > >
> > > > > - Back to your topic.
> > > > > Can you try this simple hack:
> > > > >
> > > > > diff --git a/lib/eal/common/eal_common_thread.c
> > > > > b/lib/eal/common/eal_common_thread.c
> > > > > index c5d8b4327d..92160c7fa6 100644
> > > > > --- a/lib/eal/common/eal_common_thread.c
> > > > > +++ b/lib/eal/common/eal_common_thread.c
> > > > > @@ -253,6 +253,7 @@ static void *ctrl_thread_init(void *arg)
> > > > >         void *routine_arg = params->arg;
> > > > >
> > > > >         __rte_thread_init(rte_lcore_id(), cpuset);
> > > > > +       RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;
> > > > >         params->ret = pthread_setaffinity_np(pthread_self(), sizeof(*cpuset),
> > > > >                 cpuset);
> > > > >         if (params->ret != 0) {
> > > > >
> > > > Thanks for your advice.
> > > >
> > > > But this issue still exists after I tried.
> > >
> > > Ok, I think I understand what is wrong... but I am still guessing as I
> > > am not sure what your "issue" is.
> > > Can you have a try with:
> > > https://patchwork.dpdk.org/project/dpdk/patch/20221221104858.296530-1-david.march...@redhat.com/
> > >
> > > Thanks.
> > >
> > I think this issue is similar to the description in the patch you gave me.
> >
> > When the DPDK application is started on only one NUMA node, the interrupt
> > thread finds memory on the other NUMA node. This leads to a whole set of
> > memory allocation/release operations every time "rte_malloc" is called.
> > This is the root cause of this issue.
> >
> > The issue is solved after I tried your patch.
> > Thanks for your advice.
>
> After further testing in a different environment, we found the issue still
> exists with your last patch. After troubleshooting, it turns out that in the
> "malloc_get_numa_socket()" API, if the return value of "rte_socket_id()"
> is "SOCKET_ID_ANY" (-1), the API returns
> "rte_lcore_to_socket_id(rte_get_main_lcore())";
> otherwise, "malloc_get_numa_socket()" directly returns the value of
> "rte_socket_id()", and in that case the issue is not solved.
>
> The return value of "rte_socket_id()" is what gets modified by the solution
> you suggested in your last email (RTE_PER_LCORE(_socket_id) = SOCKET_ID_ANY;).
> Therefore, I think merging your two suggestions together could completely
> solve this issue.
> Can you please update your patch accordingly?
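
To make sure we are talking about the same logic, here is a minimal sketch
of the socket selection behaviour described above. It is simplified and not
a verbatim copy of the DPDK sources; the sketch_ prefix is only there to
make clear it is illustrative. The fallback to the main lcore's socket only
triggers when rte_socket_id() reports SOCKET_ID_ANY, which is why the
per-lcore _socket_id change and the allocation-side fix have to go together:

#include <rte_lcore.h>
#include <rte_memory.h> /* SOCKET_ID_ANY */

/* Sketch only: simplified view of the behaviour discussed in this thread. */
static inline unsigned int
sketch_malloc_get_numa_socket(void)
{
	unsigned int socket_id = rte_socket_id();

	/*
	 * Control/interrupt threads whose per-lcore _socket_id is
	 * SOCKET_ID_ANY fall back to the main lcore's NUMA node.
	 */
	if (socket_id == (unsigned int)SOCKET_ID_ANY)
		return rte_lcore_to_socket_id(rte_get_main_lcore());

	/* Pinned lcores simply use their own NUMA node. */
	return socket_id;
}

With both pieces in place, rte_malloc() calls made from the interrupt thread
should consistently target the main lcore's NUMA node instead of allocating
and releasing memory on the remote node on every call.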
Please try the last revision and report back:
https://patchwork.dpdk.org/project/dpdk/list/?series=26362

Thanks.

--
David Marchand