On 10/16/2019 1:17 PM, Vamsi Krishna Attunuru wrote:
> 
> 
>> -----Original Message-----
>> From: dev <dev-boun...@dpdk.org> On Behalf Of Yigit, Ferruh
>> Sent: Tuesday, October 15, 2019 9:05 PM
>> To: Vamsi Krishna Attunuru <vattun...@marvell.com>; dev@dpdk.org
>> Cc: tho...@monjalon.net; Jerin Jacob Kollanukkaran <jer...@marvell.com>;
>> olivier.m...@6wind.com; ferruh.yi...@intel.com; anatoly.bura...@intel.com;
>> arybche...@solarflare.com; Kiran Kumar Kokkilagadda
>> <kirankum...@marvell.com>
>> Subject: Re: [dpdk-dev] [PATCH v10 0/5] kni: add IOVA=VA support
>>
>> On 8/16/2019 7:12 AM, vattun...@marvell.com wrote:
>>> From: Vamsi Attunuru <vattun...@marvell.com>
>>>
>>> ---
>>> V10 Changes:
>>> * Fixed function return code on failure when min_chunk_size > pg_sz.
>>> * Marked new mempool populate routine as EXPERIMENTAL.
>>>
>>> V9 Changes:
>>> * Used rte_mempool_ops_calc_mem_size() instead of default handler in
>>> the new mempool populate routine.
>>> * Check min_chunk_size and return values.
>>> * Removed ethdev_info memset to '0' and moved pci dev_info populate
>>> into
>>> kni_dev_pci_addr_get() routine.
>>> * Addressed misc. review comments.
>>>
>>> V8 Changes:
>>> * Remove default mempool populate() routine changes.
>>> * Add kni app specific mempool create & free routines.
>>> * Add new mempool populate routine to allocate page-aligned memzones
>>> with page size to make sure all mempool objects reside on a page.
>>> * Update release notes and map files.
>>>
>>> V7 Changes:
>>> * Removed the previously proposed mempool flag and made the page
>>> boundary checks the default in mempool populate(), except for objects
>>> bigger than the page size.
>>> * Removed KNI example application related changes since pool related
>>> requirement is taken care in mempool lib.
>>> * All PCI dev related info is moved under rte_eal_iova_mode() == VA check.
>>> * Added wrapper functions in KNI module to hide IOVA checks and make
>>> address translation routines more readable.
>>> * Updated the IOVA mode checks that enforce IOVA=PA mode when IOVA=VA
>>> mode is enabled.
>>>
>>> V6 Changes:
>>> * Added new mempool flag to ensure mbuf memory is not scattered across
>>> page boundaries.
>>> * Added KNI kernel module required PCI device information.
>>> * Modified KNI example application to create mempool with new mempool
>>> flag.
>>>
>>> V5 changes:
>>> * Fixed build issue with 32b build
>>>
>>> V4 changes:
>>> * Fixed build issues with older kernel versions
>>> * This approach will only work with kernels above 4.4.0
>>>
>>> V3 Changes:
>>> * Add a new approach to make KNI work in IOVA=VA mode using the
>>> iommu_iova_to_phys API.
>>>
>>> Kiran Kumar K (1):
>>>   kni: add IOVA=VA support in KNI module
>>>
>>> Vamsi Attunuru (4):
>>>   mempool: populate mempool with the page sized chunks
>>>   kni: add IOVA=VA support in KNI lib
>>>   kni: add app specific mempool create and free routines
>>>   kni: modify IOVA mode checks to support VA
>>
>> Hi Vamsi,
>>
>> I am aware that this patchset has been around for a long time, and I have
>> seen your request to merge it in 19.11, but as you can understand, my concern
>> is breaking KNI or existing KNI applications while trying to add this new
>> feature.
>>
>> At a high level, there are two issues:
>>
>> 1) The kernel module updates expect there to be a backing device for the
>> KNI, which is not always true:
>>
>>          if (dev_info.iova_mode) {
>>  #ifdef HAVE_IOVA_AS_VA_SUPPORT
>>                  pci = pci_get_device(dev_info.vendor_id,
>>                                       dev_info.device_id, NULL);
>>                  if (pci == NULL) {
>>                          pr_err("pci dev does not exist\n");
>>                          return -ENODEV;
>>                  }
>>
>> For example, this breaks:
>> ./build/app/testpmd -w0:0.0 --vdev net_kni0 --vdev net_kni1  -- -i
> 
> Vamsi> Yes, this can be fixed by forcing iommu_mode to PA for
> vdev-only or vdev & pdev based KNI use cases.
> 
>>
>>
>> 2) Applications will have to change the API they use to allocate the mempool.
>> If the user upgrades to a new version of DPDK, iova=va mode becomes possible,
>> and the application should use the new KNI API 'rte_kni_pktmbuf_pool_create()'
>> to allocate the mempool. Most probably the application will have a datapath
>> and will use KNI only for the exception path; will there be any effect from
>> using the KNI version of the mempool alloc?
> 
> Vamsi> There would not be any adverse effect from using the KNI version of the
> mempool.

If you were developing a product, would you rely on a KNI API for pkt_mbuf
allocation? If there is a problem, will it be as easy to figure out and fix as
with the mempool/mbuf API? I am not sure about pushing users in this direction.
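
For reference, the application-side change under discussion would look roughly
like this. 'rte_kni_pktmbuf_pool_create()' is the new API named above; I am
assuming its arguments mirror 'rte_pktmbuf_pool_create()', so treat this as a
sketch rather than a verified excerpt from the patches:

    /* Before: generic mbuf pool, fine while KNI runs with IOVA=PA. */
    struct rte_mempool *mp = rte_pktmbuf_pool_create("mbuf_pool", 8192,
            250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());

    /* After: KNI-specific pool whose objects do not cross page boundaries,
     * needed when KNI may run with IOVA=VA. Signature assumed to mirror
     * rte_pktmbuf_pool_create(). */
    struct rte_mempool *mp = rte_kni_pktmbuf_pool_create("mbuf_pool", 8192,
            250, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());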

> 
>>
>>
>> I would like to see KNI enabled in iova=va mode, but can we have it limited
>> to a specific command line argument or config option? This increases the test
>> surface, but at least old applications can continue to work by default; what
>> do you think?
> 
> Vamsi> Yes, it's appropriate to control the mode to ensure old apps work by
> default. We are fine with having a command line arg or config option to enable
> KNI in iova=va mode. Earlier we thought of a similar approach that also
> controlled mempool allocation using a new mempool flag. After multiple
> reviews, the flag was discarded and a separate mempool populate routine was
> added for this use case.

I didn't like the idea of having a flag in the mempool library, but perhaps we
can have it in KNI scope.

> 
> When the command line arg/config option is introduced, functionality will be
> as below. Please correct me if any cases are missed or not considered.
> Without the option:
> Existing KNI is intact; iommu mode will be PA.

+1

> With the option:
> The pdev/vdev's iommu mode is considered, and iova=va/pa is enabled
> accordingly. The application is supposed to use the KNI version of the
> mempool alloc.

+1

> I think these mempool quirks will go away when
> Olivier's mempool patchset (RFC) is merged.
> 
>>
>> And I will put a few minor comments to the patches...
>>
> 
