On 05-Mar-21 3:50 PM, Nithin Dabilpuram wrote:
On Fri, Mar 05, 2021 at 01:54:34PM +0000, Burakov, Anatoly wrote:
On 05-Mar-21 7:50 AM, David Marchand wrote:
On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram
<ndabilpu...@marvell.com> wrote:

In order to save DMA entries limited by the kernel, both for external
memory and hugepage memory, an attempt was made to map physically
contiguous memory in one go. This cannot be done, because VFIO IOMMU
type1 does not support partially unmapping a previously mapped memory
region, while the heap can request mapping of multiple pages and then
a partial unmapping.
Hence, to go back to the old method of mapping/unmapping at memseg
granularity, this commit reverts
commit d1c7c0cdf7ba ("vfio: map contiguous areas in one go").
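
To illustrate the limitation, here is a minimal hypothetical C sketch
(not part of the patch), assuming "container" is an already-configured
/dev/vfio/vfio container fd using the type1 IOMMU and "vaddr" points at
IOVA-contiguous memory of at least two hugepages:

#include <linux/vfio.h>
#include <sys/ioctl.h>
#include <stdint.h>

#define PG (2UL << 20) /* one 2 MB hugepage */

/* hypothetical helper: map two contiguous pages in one go,
 * then try to unmap only the first one */
static int partial_unmap_demo(int container, void *vaddr, uint64_t iova)
{
    struct vfio_iommu_type1_dma_map map = {
        .argsz = sizeof(map),
        .flags = VFIO_DMA_MAP_FLAG_READ | VFIO_DMA_MAP_FLAG_WRITE,
        .vaddr = (uintptr_t)vaddr,
        .iova  = iova,
        .size  = 2 * PG, /* both pages become a single DMA entry */
    };
    struct vfio_iommu_type1_dma_unmap unmap = {
        .argsz = sizeof(unmap),
        .iova  = iova,
        .size  = PG, /* sub-range of the entry mapped above */
    };

    if (ioctl(container, VFIO_IOMMU_MAP_DMA, &map) < 0)
        return -1;
    /*
     * type1 cannot split the 2 * PG entry: with VFIO_TYPE1v2_IOMMU
     * this ioctl fails with EINVAL, and with v1 the kernel may drop
     * the whole entry instead. Either way there is no way to get a
     * PG-sized hole, hence the revert to one DMA entry per memseg.
     */
    if (ioctl(container, VFIO_IOMMU_UNMAP_DMA, &unmap) < 0 ||
        unmap.size != PG)
        return -1;
    return 0;
}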

Also add documentation on the module parameter that needs to be used
to increase the per-container DMA map limit for VFIO.
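
For reference, the knob in question is the vfio_iommu_type1
"dma_entry_limit" module parameter (default 65535 mappings per
container); on a reasonably recent kernel it can be set as follows
(131072 is just an example value):

    # at module load time
    modprobe vfio_iommu_type1 dma_entry_limit=131072

    # or at runtime, affecting containers opened afterwards
    echo 131072 > /sys/module/vfio_iommu_type1/parameters/dma_entry_limit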

Fixes: d1c7c0cdf7ba ("vfio: map contiguous areas in one go")
Cc: anatoly.bura...@intel.com
Cc: sta...@dpdk.org

Signed-off-by: Nithin Dabilpuram <ndabilpu...@marvell.com>
Acked-by: Anatoly Burakov <anatoly.bura...@intel.com>
Acked-by: David Christensen <d...@linux.vnet.ibm.com>

There is a regression reported in bz: https://bugs.dpdk.org/show_bug.cgi?id=649

I assigned it to Anatoly for now.
Nithin, can you have a look too?

Thanks.



I've responded on the bug tracker as well, but to repeat it here: this is
not a regression; it is intended behavior. We cannot do anything about
this.

To add, for the test case to pass, either the limits have to be increased,
or "--mp-alloc=xmemhuge" has to be used instead of "--mp-alloc=xmem"
(which forces the system page size), or the total mbuf count has to be
reduced to lower the page count.
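
For instance (illustrative testpmd invocations; the core list and PCI
address are placeholders):

    # map via hugepages instead of system pages:
    dpdk-testpmd -l 0-3 -a 0000:03:00.0 -- --mp-alloc=xmemhuge

    # or keep xmem but cap the mbuf count to lower the page count:
    dpdk-testpmd -l 0-3 -a 0000:03:00.0 -- --mp-alloc=xmem \
        --total-num-mbufs=32768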


Technically, one is not a replacement for the other, so the correct way to handle it is to increase the limits, not to switch to xmemhuge.

--
Thanks,
Anatoly
