On 12-Jul-19 11:26 AM, Jerin Jacob Kollanukkaran wrote:
-----Original Message-----
From: Burakov, Anatoly <anatoly.bura...@intel.com>
Sent: Friday, July 12, 2019 3:28 PM
To: Jerin Jacob Kollanukkaran <jer...@marvell.com>; Ferruh Yigit
<ferruh.yi...@intel.com>; Vamsi Krishna Attunuru
<vattun...@marvell.com>; dev@dpdk.org
Cc: olivier.m...@6wind.com; arybche...@solarflare.com
Subject: [EXT] Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
On 12-Jul-19 10:17 AM, Jerin Jacob Kollanukkaran wrote:
-----Original Message-----
From: Ferruh Yigit <ferruh.yi...@intel.com>
Sent: Thursday, July 11, 2019 9:52 PM
To: Jerin Jacob Kollanukkaran <jer...@marvell.com>; Vamsi Krishna
Attunuru <vattun...@marvell.com>; dev@dpdk.org
Cc: olivier.m...@6wind.com; arybche...@solarflare.com; Burakov,
Anatoly <anatoly.bura...@intel.com>
Subject: [EXT] Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in
KNI
On 7/4/2019 10:48 AM, Jerin Jacob Kollanukkaran wrote:
From: Vamsi Krishna Attunuru
Sent: Thursday, July 4, 2019 12:13 PM
To: dev@dpdk.org
Cc: ferruh.yi...@intel.com; olivier.m...@6wind.com;
arybche...@solarflare.com; Jerin Jacob Kollanukkaran
<jer...@marvell.com>; Burakov, Anatoly <anatoly.bura...@intel.com>
Subject: Re: [dpdk-dev] [PATCH v6 0/4] add IOVA = VA support in KNI
Hi All,
Just to summarize, the following items came up in the initial review.
1) Can the new mempool flag be made the default for all pools, and is
there any case where the new flag's functionality would fail for some
page sizes?
The minimum huge page size is 2MB, and typical huge page sizes are
512MB or 1GB, so I think the new flag can be made the default:
skipping page boundaries for mempool objects has nearly zero overhead
(a usage sketch follows item 2 below). But I leave the decision to the
maintainers.
2) Does adding HW device info (PCI device info) to the KNI device
structure break KNI on virtual devices in VA or PA mode?
An iommu_domain is created only for PCI devices, and only when the
system runs in IOVA_VA mode. Virtual devices (IOVA_DC (don't care) or
IOVA_PA devices) still work without the PCI device structure (see the
second sketch below).
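Going back to item 1, a rough sketch of creating an mbuf pool with the
proposed flag set explicitly (MEMPOOL_F_NO_PAGE_BOUND is from the
pending patch, not a released DPDK API; the sizes are illustrative):

#include <rte_lcore.h>
#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Sketch: an mbuf pool whose objects never cross a page boundary.
 * MEMPOOL_F_NO_PAGE_BOUND comes from the pending patch. */
struct rte_mempool *mp = rte_mempool_create(
        "kni_mbuf_pool",
        8192,                           /* number of mbufs */
        sizeof(struct rte_mbuf) + RTE_MBUF_DEFAULT_BUF_SIZE,
        256,                            /* per-lcore cache size */
        sizeof(struct rte_pktmbuf_pool_private),
        rte_pktmbuf_pool_init, NULL,    /* pool private area init */
        rte_pktmbuf_init, NULL,         /* per-mbuf init */
        rte_socket_id(),
        MEMPOOL_F_NO_PAGE_BOUND);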
This is a useful feature that lets KNI run without root privilege, and
it has been pending for a long time. Please review and close this.
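And for item 2, the point is just that the PCI fields in struct
rte_kni_conf stay zeroed for virtual devices; a minimal sketch
(pci_dev here is assumed to be the port's struct rte_pci_device
pointer, NULL for vdevs):

#include <stdio.h>
#include <string.h>
#include <rte_bus_pci.h>
#include <rte_kni.h>

/* Sketch: fill in the PCI info only when the port is backed by a real
 * PCI device; virtual devices leave it zeroed and keep working. */
struct rte_kni_conf conf;
memset(&conf, 0, sizeof(conf));
snprintf(conf.name, RTE_KNI_NAMESIZE, "vEth0");
if (pci_dev != NULL) {
        conf.addr = pci_dev->addr;      /* PCI address of the device */
        conf.id = pci_dev->id;          /* PCI vendor/device id */
}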
I support the idea of removing the forcing of 'kni' into IOVA=PA mode,
but I'm also not sure about forcing all KNI users to update their code
to allocate the mempool in a very specific way.
What about giving more control to the user on this?
Any user who wants to use IOVA=VA and KNI together can update the
application to adapt the KNI memory allocation and give an explicit
"kni iova_mode=1" config.
Where does this config come from: EAL, the kni sample app, or a KNI
public API?
Whoever wants to use the existing KNI implementation can continue to
use it in IOVA=PA mode, which is the current case; for this the user
may need to force the DPDK application into IOVA=PA mode, but at least
there is a workaround.
And the kni sample application should have a sample for both cases.
Although this increases the testing and maintenance cost, I hope we
can get support from you on the iova_mode=1 use case; see the
illustration just below. What do you think?
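To illustrate the "kni iova_mode=1" idea (where this switch should
actually live is the open question above), if it ended up as an
rte_kni kernel module parameter next to the existing lo_mode and
kthread_mode parameters, it could be as simple as (sketch only, not an
actual interface):

#include <linux/module.h>

/* Illustrative sketch: an "iova_mode" switch on the rte_kni kernel
 * module, defaulting to the current IOVA=PA behaviour. */
static int iova_mode; /* 0 = IOVA=PA (default), 1 = IOVA=VA */
module_param(iova_mode, int, 0644);
MODULE_PARM_DESC(iova_mode, "KNI address mode: 0=PA (default), 1=VA");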
IMO, if possible we should avoid the extra indirection of a new
config; in the worst case we can add it. How about the following, so
that no new config is needed:
1) Make MEMPOOL_F_NO_PAGE_BOUND the default
http://patches.dpdk.org/patch/55277/
There is practically zero overhead from this flag, considering huge
page sizes are at minimum 2MB and typically 512MB or 1GB.
Does anyone have any objection?
Pretty much zero overhead in the hugepage case, but not in the
non-hugepage case. It's rare, but since we support it, we have to
account for it.
That is a fair concern.
How about enabling the flag in mempool ONLY when
rte_eal_has_hugepages() is true, in the common layer?
Perhaps it's better to check the page size of the underlying memory,
because 4K pages are not necessarily no-huge mode - they could also be
external memory. That's going to be a bit hard, because there may not
be a way to know which memory we're allocating from in advance, aside
from simple checks like `(rte_eal_has_hugepages() ||
rte_malloc_heap_socket_is_external(socket_id))` - but maybe those
would be sufficient.
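i.e. something along these lines in the common mempool code (sketch;
rte_eal_has_hugepages() and rte_malloc_heap_socket_is_external() are
existing APIs, the flag itself is still the one from the patch):

#include <rte_eal.h>
#include <rte_malloc.h>

/* Sketch: only force the page-boundary constraint when the backing
 * memory is hugepage-backed or comes from an external heap, leaving
 * the 4K no-huge case untouched. */
if (rte_eal_has_hugepages() ||
                rte_malloc_heap_socket_is_external(socket_id))
        flags |= MEMPOOL_F_NO_PAGE_BOUND;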
(Also, I don't really like the name NO_PAGE_BOUND, since the memzone
API has a "bounded memzone" allocation API, and this flag's name reads
as if objects would not be bounded by page size, not that they won't
cross a page boundary.)
No strong opinion on the name. What name do you suggest?
How about something like MEMPOOL_F_NO_PAGE_SPLIT?
2) Introduce an rte_kni_mempool_create() API in the kni lib to
abstract the mempool requirements for KNI. This will enable portable
KNI applications (rough sketch below).
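A rough sketch of what such a wrapper could look like (hypothetical
prototype, the name and parameters are only a suggestion;
MEMPOOL_F_NO_PAGE_BOUND is again the flag from the patch):

#include <rte_mbuf.h>
#include <rte_mempool.h>

/* Hypothetical sketch: same parameters as rte_pktmbuf_pool_create(),
 * but applying the page-boundary constraint KNI needs, so KNI
 * applications don't have to know about the flag at all. */
struct rte_mempool *
rte_kni_mempool_create(const char *name, unsigned int n,
                unsigned int cache_size, uint16_t priv_size,
                uint16_t data_room_size, int socket_id)
{
        struct rte_pktmbuf_pool_private priv = {
                .mbuf_data_room_size = data_room_size,
                .mbuf_priv_size = priv_size,
        };

        return rte_mempool_create(name, n,
                        sizeof(struct rte_mbuf) + priv_size + data_room_size,
                        cache_size, sizeof(priv),
                        rte_pktmbuf_pool_init, &priv,
                        rte_pktmbuf_init, NULL,
                        socket_id, MEMPOOL_F_NO_PAGE_BOUND);
}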
This means that using KNI is not a drop-in replacement for any other
PMD. If the maintainers of KNI are OK with this, then sure :)
The PMDs don't have any dependency on the NO_PAGE_BOUND flag, right?
If a KNI app uses rte_kni_mempool_create() to create the mempool, in
what case do you see a problem with a specific PMD?
I'm not saying the PMDs have a dependency on the flag; I'm saying that
the same code cannot be used with and without KNI, because you need to
call a separate API for mempool creation if you want to use it with
KNI. For KNI, the underlying memory must abide by certain constraints
that do not exist for other PMDs, so either you fix all memory to
these constraints, or you lose the ability to reuse the code with
other PMDs as-is.
That is, unless I'm grossly misunderstanding what you're suggesting
here :)
--
Thanks,
Anatoly