On 06-Nov-19 7:37 AM, David Marchand wrote:
> On Wed, Nov 6, 2019 at 7:16 AM Wangyu (Eric) <seven.wan...@huawei.com> wrote:
>> On a system with a 64K page size, DPDK reads the size the NIC needs from
>> uio/uio1/maps/map1/size. When that size is smaller than the page size
>> (e.g. 16K on the 82599), dev->mem_resource[i].len will be 16K, but mmap()
>> maps at least one page, which is 64K.
>> Then the second NIC's mmap starts at the first NIC's address + 16K, which
>> is already used by the first NIC.
>
> Do you see this issue with vfio?
>
>> So if we change the size so that the next mapping starts at the first
>> NIC's address + 64K, the problem is solved.
>
> You are hacking the description of the device resources to work around a
> problem.
> This patch is a no-go for me.
>
> Maybe there is something to do with the hint passed to mmap in the uio case.
> Adding Anatoly to the loop.
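(As a rough illustration of the collision described in the report above; the
sizes and addresses in the sketch below are made up for the example:)

/* Rough illustration only: a hypothetical 16K BAR mapped on a 64K-page
 * system, showing why "first NIC address + 16K" lands inside memory the
 * kernel already reserved for the first mapping. */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
    const uintptr_t page_sz = 64 * 1024;       /* 64K system page size */
    const uintptr_t bar_len = 16 * 1024;       /* BAR size read from maps/map1/size */
    const uintptr_t first   = 0x4000000000;    /* made-up address of the first mapping */

    /* mmap() backs the first BAR with at least one full page... */
    uintptr_t first_end = first + ((bar_len + page_sz - 1) & ~(page_sz - 1));

    /* ...but the next mapping's hint is derived from the 16K length only. */
    uintptr_t second_hint = first + bar_len;

    printf("first mapping really ends at %#lx, second hint is %#lx (%s)\n",
           (unsigned long)first_end, (unsigned long)second_hint,
           second_hint < first_end ? "inside the first mapping" : "ok");
    return 0;
}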
I did a quick code inspection for VFIO and UIO. We do the same thing in
both, so both code paths can be for all intents and purposes considered
equivalent.
To reserve addresses for the mappings, we start at some arbitrary address
(find_max_va_end()) and map from there. We do an mmap() and *overwrite*
whatever address we expected to get with the address mmap() actually
returned, and the next hint then becomes (current.addr + current.len).
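Roughly, that pattern looks like this (a sketch with illustrative names, not
the actual DPDK code):

#include <sys/mman.h>
#include <sys/types.h>
#include <stdint.h>
#include <stddef.h>

/* Sketch of the hint-chaining described above; names are illustrative. */
static void *requested_addr;  /* initialised from something like find_max_va_end() */

static void *
map_one_resource(int fd, size_t len, off_t offset)
{
    /* Ask for requested_addr, but without MAP_FIXED the kernel is free
     * to return something else entirely. */
    void *mapaddr = mmap(requested_addr, len, PROT_READ | PROT_WRITE,
                         MAP_SHARED, fd, offset);
    if (mapaddr == MAP_FAILED)
        return NULL;

    /* Overwrite the expected address with what we actually got, and
     * derive the next hint from the reported resource length. */
    requested_addr = (void *)((uintptr_t)mapaddr + len);
    return mapaddr;
}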
The mmap() is called without MAP_FIXED, so we get whatever address the
kernel is comfortable giving us. Meaning, even if the initial address hint
was not page-aligned, the return value from mmap() will be page-aligned. It
seems to me that your platform/kernel does not do that, and allows mmap()
to return page-unaligned addresses. I would strongly suggest checking the
mmap() return address on your platform (in either UIO or VFIO - they both
do it in about the same way).
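For example (a sketch, not existing DPDK code), something like this right
after the mmap() call would show whether the returned address is page-aligned:

#include <stdio.h>
#include <stdint.h>
#include <unistd.h>

/* Sketch: call this with the value mmap() returned in the UIO/VFIO
 * resource-mapping path to verify it is page-aligned. */
static void
check_mmap_alignment(void *mapaddr)
{
    uintptr_t page_sz = (uintptr_t)sysconf(_SC_PAGESIZE);

    if ((uintptr_t)mapaddr & (page_sz - 1))
        fprintf(stderr, "mmap() returned page-unaligned address %p (page size %#lx)\n",
                mapaddr, (unsigned long)page_sz);
}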
We could work around that by doing (next_addr =
RTE_PTR_ALIGN(current.addr + current.len, pagesize)), but to me it looks
like a bug in your kernel's mmap() implementation. It is an easy fix
though, and I'm sure we can put in a workaround like the one I described.
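That workaround would amount to something like this (a sketch;
next_map_hint() is a made-up helper, while RTE_PTR_ALIGN/RTE_PTR_ADD are
existing rte_common.h macros):

#include <rte_common.h>
#include <stddef.h>

/* Sketch of the proposed workaround: align the next mapping hint up to
 * the page size, so a resource shorter than a page (e.g. a 16K BAR on a
 * 64K-page system) cannot yield a hint inside the previous mapping. */
static void *
next_map_hint(void *cur_addr, size_t cur_len, size_t page_sz)
{
    return RTE_PTR_ALIGN(RTE_PTR_ADD(cur_addr, cur_len), page_sz);
}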
--
Thanks,
Anatoly