Hi Vadim,

On 03/07/2015 12:24 AM, Vadim Suraev wrote:
> Hi, Olivier,
> I realized that if the local cache for the mempool is enabled and
> greater than 0, and if, say, the mempool size is X and the local
> cache length is Y (the cache is not empty, Y > 0), then an attempt
> to allocate a bulk whose size is greater than the local cache size
> (max) and greater than X-Y (which is the number of entries in the
> ring) will fail.
> The reason is: __mempool_get_bulk checks whether the bulk to be
> allocated is greater than mp->cache_size and falls back to
> ring_dequeue. The ring does not contain enough entries in this case,
> while the sum of ring entries + cache length may be greater than or
> equal to the bulk's size, so theoretically the bulk could be
> allocated.
> Is it an expected behaviour? Am I missing something?
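Just to restate the path you describe, it boils down to something like
this (a simplified paraphrase, not the actual __mempool_get_bulk()
source; field names and error codes differ between DPDK versions):

  #include <string.h>
  #include <rte_mempool.h>
  #include <rte_ring.h>
  #include <rte_lcore.h>

  /* Simplified paraphrase of the bulk get path, for discussion only;
   * this is not the real DPDK code. */
  static int
  get_bulk_sketch(struct rte_mempool *mp, void **obj_table, unsigned n)
  {
          struct rte_mempool_cache *cache =
                  &mp->local_cache[rte_lcore_id()];

          /* A request larger than the cache bypasses it and is served
           * from the ring alone, even if cache->len objects sit there
           * unused. */
          if (mp->cache_size == 0 || n >= mp->cache_size)
                  return rte_ring_mc_dequeue_bulk(mp->ring, obj_table, n);

          /* Otherwise serve from the cache, refilling it from the ring
           * when it runs short; if the refill fails, fall back to the
           * ring alone as well. */
          if (cache->len < n) {
                  unsigned req = n + (mp->cache_size - cache->len);
                  if (rte_ring_mc_dequeue_bulk(mp->ring,
                                  &cache->objs[cache->len], req) < 0)
                          return rte_ring_mc_dequeue_bulk(mp->ring,
                                          obj_table, n);
                  cache->len += req;
          }
          memcpy(obj_table, &cache->objs[cache->len - n],
                 n * sizeof(void *));
          cache->len -= n;
          return 0;
  }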
I think it's the expected behavior, as the code of mempool_get() tries
to minimize the number of tests. In this situation, even if
len(mempool) + len(cache) is greater than the number of requested
objects, we are almost out of buffers, so returning ENOBUF is not a
problem.

If the user wants to ensure that at least X buffers can always be
allocated, he can create the pool with:

  mempool_create(X + cache_size * MAX_LCORE)

> By the way, rte_mempool_count returns a ring count + sum of all local
> caches, IMHO it could mislead, even twice.

Right, today rte_mempool_count() cannot really be used for anything
other than debug or stats. Adding rte_mempool_common_count() and
rte_mempool_cache_len() may be useful to give the user better control
(and they would be faster because they would not browse the cache
lengths of all lcores). But we have to keep in mind that for
multi-consumer pools, checking the common_count before retrieving
objects is useless, because the other lcores can retrieve objects at
the same time.

Regards,
Olivier
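PS: to make the sizing above concrete, here is a sketch (the pool
name, element size and counts are placeholders to adapt to the
application; RTE_MAX_LCORE is the build-time maximum number of
lcores):

  #include <rte_mempool.h>
  #include <rte_lcore.h>   /* rte_socket_id(), RTE_MAX_LCORE */

  /* Create a pool that can always provide nb_needed objects, even in
   * the worst case where every lcore cache is full and holds
   * cache_size objects that the other lcores cannot see.
   * Illustrative sketch only; all sizes are placeholders. */
  static struct rte_mempool *
  create_sized_pool(unsigned nb_needed, unsigned cache_size,
                    unsigned elt_size)
  {
          return rte_mempool_create("example_pool",
                  nb_needed + cache_size * RTE_MAX_LCORE,
                  elt_size, cache_size,
                  0,             /* private data size */
                  NULL, NULL,    /* pool constructor + arg */
                  NULL, NULL,    /* object constructor + arg */
                  rte_socket_id(),
                  0);            /* flags */
  }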