> From: wangyunjian <wangyunj...@huawei.com>
[...]
> > From: Dmitry Kozlyuk [mailto:dkozl...@nvidia.com]
[...]
> > Thanks for attaching all the details.
> > Can you please reproduce it with --log-level=pmd.common.mlx5:debug and
> > send the logs?
> >
> > > For example, if the environment is configured with 10GB hugepages but
> > > each hugepage is physically discontinuous, this problem can be
> > > reproduced.
> 
> # ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -c 0xFC0 --iova-mode pa \
>     --legacy-mem -a af:00.0 -a af:00.1 --log-level=pmd.common.mlx5:debug \
>     -m 0,8192 -- -a -i --forward-mode=fwd --rxq=2 --txq=2 --total-num-mbufs=1000000
[...]
> mlx5_common: Collecting chunks of regular mempool mb_pool_0
> mlx5_common: Created a new MR 0x92827 in PD 0x4864ab0 for address range 
> [0x75cb6c000, 0x780000000] (592003072 bytes) for mempool mb_pool_0
> mlx5_common: Created a new MR 0x93528 in PD 0x4864ab0 for address range 
> [0x7dcb6c000, 0x800000000] (592003072 bytes) for mempool mb_pool_0
> mlx5_common: Created a new MR 0x94529 in PD 0x4864ab0 for address range 
> [0x85cb6c000, 0x880000000] (592003072 bytes) for mempool mb_pool_0
> mlx5_common: Created a new MR 0x9562a in PD 0x4864ab0 for address range 
> [0x8d6cca000, 0x8fa15e000] (592003072 bytes) for mempool mb_pool_0

Thanks for the logs; IIUC they are from a successful run.
I have reproduced an equivalent hugepage layout,
with the mempool spread across hugepages,
but I don't see the error behavior after several tries.
What do the logs show in the error case?
Please note that the offending commit you found (fec28ca0e3a9)
did introduce a few issues, but they were fixed later,
so I'm testing with 21.11, not with that commit.
Unfortunately, none of those issues resembles yours.
