> -----Original Message-----
> From: Ferruh Yigit <ferruh.yi...@intel.com>
> Sent: Friday, November 8, 2019 1:23 AM
> To: Vamsi Krishna Attunuru <vattun...@marvell.com>; dev@dpdk.org
> Cc: tho...@monjalon.net; Jerin Jacob Kollanukkaran <jer...@marvell.com>;
> Kiran Kumar Kokkilagadda <kirankum...@marvell.com>;
> olivier.m...@6wind.com; anatoly.bura...@intel.com;
> arybche...@solarflare.com; step...@networkplumber.org
> Subject: [EXT] Re: [dpdk-dev] [PATCH v12 0/2] add IOVA=VA mode support
>
> External Email
>
> ----------------------------------------------------------------------
> On 11/5/2019 11:04 AM, vattun...@marvell.com wrote:
> > > From: Vamsi Attunuru <vattun...@marvell.com>
> > >
> > > ---
> > > V12 Changes:
> > > * Removed previously added `--legacy-kni` eal option.
> > > * Removed previously added kni specific mempool create routines
> > > and mempool populate routines.
> > >
> > > This patch set (V12) depends on the following patch set, since the
> > > mempool support needed to enable KNI in IOVA=VA mode is taken care of
> > > there:
> > >
> > > https://patchwork.dpdk.org/cover/62376/
> >
> > Hi Vamsi, Jerin,
> >
> > Overall looks good and I am not getting any functional error, but I am
> > observing a huge performance drop with this update, 3.8Mpps to 0.7Mpps [1].
Hi Ferruh,

With real kernel netdev use cases such as iperf, there should be no impact on performance. A synthetic test such as loopback mode may not be the right benchmark to rely on by itself, now that the kernel module is meant to work with any kind of backing device (pdev or vdev). Users can always fall back to IOVA=PA mode with the command-line option. Please share your thoughts on which test case we should use to evaluate the performance difference.

> >
> > I really don't know what to do; I think we need to make a decision as a
> > community, and even if we go with the patch we should document this
> > performance drop clearly and document how to mitigate it.
> >
> >
> > [1]
> > This is with the kni sample application,
> > a) IOVA=VA mode selected
> > ./examples/kni/build/kni -l0,40-47 --log-level=*:debug -- -p 0x3 -P --config
> > "(0,44,45,40),(1,46,47,41)"
> >
> > Forwarding performance is around 0.7Mpps and the 'kni_single' kernel thread
> > consumes all of a cpu.
> >
> > b) IOVA=PA mode forced
> > ./examples/kni/build/kni -l0,40-47 --log-level=*:debug --iova=pa -- -p 0x3 -P
> > --config "(0,44,45,40),(1,46,47,41)"
> >
> > Forwarding performance is around 3.8Mpps and 'kni_single' core utilization
> > is ~80%.
> >
> > I am on the 5.1.20-300.fc30.x86_64 kernel.
> > The kni module is inserted as: "insmod ./build/kmod/rte_kni.ko
> > lo_mode=lo_mode_fifo"
> >
> >
> >
> > > V11 Changes:
> > > * Added iova to kva address translation routines in the kernel module to
> > > make it work in iova=va mode, which enables DPDK to create kni devices
> > > on any kind of backing device/memory.
> > > * Added ``--legacy-kni`` eal option to make existing KNI applications
> > > work with DPDK 19.11 and later versions.
> > > * Removed previously added pci device info from kni device info struct.
> > >
> > > V10 Changes:
> > > * Fixed function return code on failure when min_chunk_size > pg_sz.
> > > * Marked new mempool populate routine as EXPERIMENTAL.
> > >
> > > V9 Changes:
> > > * Used rte_mempool_ops_calc_mem_size() instead of the default handler in
> > > the new mempool populate routine.
> > > * Check min_chunk_size and return values.
> > > * Removed the ethdev_info memset to '0' and moved pci dev_info population
> > > into the kni_dev_pci_addr_get() routine.
> > > * Addressed misc. review comments.
> > >
> > > V8 Changes:
> > > * Removed the default mempool populate() routine changes.
> > > * Added kni app specific mempool create & free routines.
> > > * Added a new mempool populate routine to allocate page-aligned memzones
> > > with page size to make sure all mempool objects reside within a page.
> > > * Updated release notes and map files.
> > >
> > > V7 Changes:
> > > * Removed the previously proposed mempool flag and made those page
> > > boundary checks the default in mempool populate(), except for objects
> > > bigger than the size of a page.
> > > * Removed KNI example application related changes since the pool related
> > > requirement is taken care of in the mempool lib.
> > > * All PCI dev related info is moved under the rte_eal_iova_mode() == VA
> > > check.
> > > * Added wrapper functions in the KNI module to hide IOVA checks and make
> > > the address translation routines more readable.
> > > * Updated the IOVA mode checks that enforce IOVA=PA mode when IOVA=VA
> > > mode is enabled.
> > >
> > > V6 Changes:
> > > * Added a new mempool flag to ensure mbuf memory is not scattered across
> > > page boundaries.
> > > * Added the PCI device information required by the KNI kernel module.
> > > * Modified the KNI example application to create its mempool with the new
> > > mempool flag.
> > >
> > > V5 Changes:
> > > * Fixed a build issue with the 32b build.
> > >
> > > V4 Changes:
> > > * Fixed build issues with older kernel versions.
> > > * This approach will only work with kernels above 4.4.0.
> > >
> > > V3 Changes:
> > > * Added a new approach to make kni work in IOVA=VA mode using the
> > > iommu_iova_to_phys API.
> > >
> > > Vamsi Attunuru (2):
> > >   kni: add IOVA=VA mode support
> > >   kni: add IOVA=VA support in kernel module
> > >
> > >  doc/guides/prog_guide/kernel_nic_interface.rst    |  9 ++++
> > >  doc/guides/rel_notes/release_19_11.rst            |  5 ++
> > >  kernel/linux/kni/compat.h                         | 15 ++++++
> > >  kernel/linux/kni/kni_dev.h                        | 42 +++++++++++++++
> > >  kernel/linux/kni/kni_misc.c                       | 39 ++++++++++----
> > >  kernel/linux/kni/kni_net.c                        | 62 ++++++++++++++++++-----
> > >  lib/librte_eal/linux/eal/eal.c                    | 29 ++++++-----
> > >  lib/librte_eal/linux/eal/include/rte_kni_common.h |  1 +
> > >  lib/librte_kni/rte_kni.c                          |  7 +--
> > >  9 files changed, 170 insertions(+), 39 deletions(-)
> > >
> >
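For reference, a rough sketch of the kind of iperf-based netdev test suggested above might look like the following. This is a hypothetical setup fragment, not a procedure from this thread: the interface name (the kni sample app creates vEth0 by default), the IP addresses, port mask, and core list are all assumptions that need adjusting to the actual setup, and it requires real traffic from a peer host rather than the lo_mode loopback.

```shell
# Hedged sketch of an iperf-based KNI netdev test (names/addresses assumed).

# 1. Load the KNI module WITHOUT lo_mode, so packets take the real netdev path
#    instead of the kernel-loopback path used in the numbers above:
insmod ./build/kmod/rte_kni.ko

# 2. Start the kni sample app in the default mode (IOVA=VA where supported):
./examples/kni/build/kni -l0,40-47 -- -p 0x1 -P --config "(0,44,45,40)" &

# 3. Bring up the KNI interface created by the app (name assumed to be vEth0):
ip addr add 192.168.0.1/24 dev vEth0
ip link set vEth0 up

# 4. Run an iperf server bound to the KNI interface, then drive traffic from a
#    directly connected peer host:
iperf3 -s -B 192.168.0.1 &
#    on the peer host:  iperf3 -c 192.168.0.1 -t 30

# 5. Restart the app with '--iova=pa' added to the EAL arguments, rerun the
#    same iperf run, and compare the reported throughput of the two modes.
```

Because iperf traffic is bounded by the kernel network stack on the KNI side, any extra per-packet cost of the IOVA=VA address translation may be hidden, which is the point of contention between this test and the lo_mode_fifo loopback numbers above.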