On 29-Oct-18 11:39 AM, Alejandro Lucero wrote:
I got a patch that solves a bug when calling rte_eal_check_dma_mask using the mask instead of the maskbits. However, this does not solve the deadlock.
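For illustration, the mix-up probably looks like this at the call site (the call site here is hypothetical; the signature rte_eal_check_dma_mask(uint8_t maskbits) is taken from the diff below):

uint8_t maskbits = 40;
uint64_t mask = ~((1ULL << maskbits) - 1);

rte_eal_check_dma_mask(mask);     /* buggy: passes the 64-bit mask, silently truncated to uint8_t */
rte_eal_check_dma_mask(maskbits); /* fixed: passes the number of address bits */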

Interestingly, the problem looks like a compiler one. Calling rte_memseg_walk does not return when called inside rte_eal_check_dma_mask, but if you modify the call like this:

diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index 12dcedf5c..69b26e464 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -462,7 +462,7 @@ rte_eal_check_dma_mask(uint8_t maskbits)
 	/* create dma mask */
 	mask = ~((1ULL << maskbits) - 1);
 
-	if (rte_memseg_walk(check_iova, &mask))
+	if (!rte_memseg_walk(check_iova, &mask))
 		/*
 		 * Dma mask precludes hugepage usage.
 		 * This device can not be used and we do not need to keep

it works, although the value returned to the caller changes, of course. But the point here is that calling rte_memseg_walk should behave the same as before, and it does not.
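For reference, the mask arithmetic above: with maskbits = 40, mask = ~((1ULL << 40) - 1) has only bits 40..63 set, so an address that fits within 40 bits ANDs with the mask to zero. A minimal sketch of what a check_iova-style callback is presumably testing (my reading of the intent, not the exact DPDK code):

static int
check_iova_sketch(uint64_t iova, uint64_t len, uint64_t mask)
{
	/* highest address touched by the segment */
	uint64_t last = iova + len - 1;

	/* nonzero (segment out of range) if any bit above the mask width is set */
	return (last & mask) ? 1 : 0;
}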


Anatoly, maybe you can see something I cannot.


memseg walk will return 0 only when each callback returned 0 and there were no more segments left to call callbacks on. If your code always returns 0, then the return value of memseg_walk will always be zero.

If your code returns 1 or -1 in some cases, then this error condition will trigger. If it doesn't, then the condition by which you decide to return 1 or 0 is incorrect :) I couldn't spot any obvious issues there, but I'll recheck.
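Roughly, the contract is this (a sketch of the semantics just described, not the actual rte_memseg_walk source; segs/nseg are illustrative):

static int
walk_sketch(int (*func)(void *seg, void *arg), void *arg,
	    void **segs, int nseg)
{
	int i, ret;

	for (i = 0; i < nseg; i++) {
		ret = func(segs[i], arg);
		if (ret != 0)
			return ret; /* the first 1 or -1 stops the walk */
	}
	return 0; /* every callback returned 0 and the walk finished */
}

That is also why negating the condition changes which case triggers: !rte_memseg_walk(...) fires on the all-callbacks-returned-0 case instead of the error case.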

--
Thanks,
Anatoly
