> -----Original Message-----
> From: Tal Shnaiderman <tal...@nvidia.com>
> Sent: Monday, March 13, 2023 1:18 AM
> To: NBU-Contact-Thomas Monjalon (EXTERNAL) <tho...@monjalon.net>;
> Stephen Hemminger <step...@networkplumber.org>; Pier Damouny <pdamo...@nvidia.com>
> Cc: dev@dpdk.org; sta...@dpdk.org; ferruh.yi...@amd.com;
> Singh, Aman Deep <aman.deep.si...@intel.com>; Zhang, Yuying <yuying.zh...@intel.com>;
> Raslan Darawsheh <rasl...@nvidia.com>
> Subject: RE: [PATCH v11 0/3] Fix cmdline_poll and testpmd signal handling
>
> > Subject: Re: [PATCH v11 0/3] Fix cmdline_poll and testpmd signal handling
> >
> > External email: Use caution opening links or attachments
> >
> > 19/02/2023 18:53, Stephen Hemminger:
> > > On Fri, 3 Feb 2023 11:14:06 -0800
> > > Stephen Hemminger <step...@networkplumber.org> wrote:
> > >
> > > > This patchset keeps uncovering bad practices in the cmdline
> > > > library around end of file and signal handling.
> > > >
> > > > Stephen Hemminger (3):
> > > >   cmdline: make rdline status not private
> > > >   cmdline: handle EOF in cmdline_poll
> > > >   testpmd: cleanup cleanly from signal
> > > >
> > > >  app/test-pmd/cmdline.c        | 29 +++++--------
> > > >  app/test-pmd/testpmd.c        | 77 ++++++++++++++++-------------------
> > > >  app/test-pmd/testpmd.h        |  1 +
> > > >  lib/cmdline/cmdline.c         | 11 +++--
> > > >  lib/cmdline/cmdline.h         |  6 +++
> > > >  lib/cmdline/cmdline_private.h |  6 ---
> > > >  6 files changed, 62 insertions(+), 68 deletions(-)
> > >
> > > Could this please be merged for 23.03?
> > > There are Ack's.
> > > The only CI failure is a bogus performance test failure.
> >
> > There was no review from testpmd maintainers.
> >
> > I've added Cc: sta...@dpdk.org.
> > Applied, thanks.
>
> Hi,
>
> Commit "testpmd: cleanup cleanly from signal" from this series breaks
> TestPMD's interactive mode on Windows.
>
> See https://bugs.dpdk.org/show_bug.cgi?id=1180
Hi Stephen,

I found an issue based on this commit (0fd1386c: "app/testpmd: cleanup cleanly from signal").
Packets can't loop between the two testpmd instances after starting dpdk-pdump to capture
packets immediately (less than 1 second later).

Steps:

1. Bind 1 CBDMA channel to vfio-pci, then start vhost-user as the back-end:

   x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 28-36 -n 4 -a 0000:80:04.0 --file-prefix=vhost \
   --vdev 'eth_vhost0,iface=vhost-net0,queues=8,client=1,\
   dmas=[txq0@0000:80:04.0;txq1@0000:80:04.0;txq2@0000:80:04.0;txq3@0000:80:04.0;txq4@0000:80:04.0;txq5@0000:80:04.0;rxq2@0000:80:04.0;rxq3@0000:80:04.0;rxq4@0000:80:04.0;rxq5@0000:80:04.0;rxq6@0000:80:04.0;rxq7@0000:80:04.0]' \
   --iova=va -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024

2. Start virtio-user as the front-end:

   x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 38-42 -n 4 --file-prefix=virtio-user0 --no-pci \
   --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=./vhost-net0,queues=8,mrg_rxbuf=1,in_order=1,packed_vq=1,server=1 \
   -- -i --nb-cores=4 --rxq=8 --txq=8 --txd=1024 --rxd=1024

   testpmd> set fwd csum
   testpmd> start

3. Start dpdk-pdump to capture packets:

   x86_64-native-linuxapp-gcc/app/dpdk-pdump -v --file-prefix=virtio-user0 -- \
   --pdump 'device_id=net_virtio_user0,queue=0,rx-dev=/root/dpdk/pdump-rx-q0.pcap,mbuf-size=8000' \
   --pdump 'device_id=net_virtio_user0,queue=1,rx-dev=/root/dpdk/pdump-rx-q1.pcap,mbuf-size=8000'

4. Set the forwarding mode and send packets from vhost-user (this step must be executed
   immediately after step 3; we use an automation script, so the issue is reproducible,
   and if I add time.sleep(1) before this step, it works fine):

   testpmd> set fwd mac
   testpmd> set txpkts 64,64,64,2000,2000,2000
   testpmd> set burst 1
   testpmd> start tx_first 1
   testpmd> show port stats 0

I tried modifying the code as follows and rebuilding DPDK, and it works fine.
Maybe it's not a good method; it's just for your reference.
diff --git a/lib/cmdline/cmdline_os_unix.c b/lib/cmdline/cmdline_os_unix.c
index 64a945a34f..ede8289244 100644
--- a/lib/cmdline/cmdline_os_unix.c
+++ b/lib/cmdline/cmdline_os_unix.c
@@ -37,7 +37,7 @@ cmdline_poll_char(struct cmdline *cl)
 	pfd.events = POLLIN;
 	pfd.revents = 0;
 
-	return poll(&pfd, 1, 0);
+	return poll(&pfd, 1, -1);
 }
 
 ssize_t

Regards,
Wei Ling