> -----Original Message-----
> From: Richardson, Bruce
> Sent: Monday, September 29, 2014 1:34 PM
> To: Ananyev, Konstantin
> Cc: Wiles, Roger Keith (Wind River); <dev at dpdk.org>
> Subject: Re: [dpdk-dev] [RFC] More changes for
> rte_mempool.h:__mempool_get_bulk()
>
> On Mon, Sep 29, 2014 at 01:25:11PM +0100, Ananyev, Konstantin wrote:
> >
> > > -----Original Message-----
> > > From: Richardson, Bruce
> > > Sent: Monday, September 29, 2014 1:06 PM
> > > To: Wiles, Roger Keith (Wind River)
> > > Cc: Ananyev, Konstantin; <dev at dpdk.org>
> > > Subject: Re: [dpdk-dev] [RFC] More changes for
> > > rte_mempool.h:__mempool_get_bulk()
> > >
> > > On Sun, Sep 28, 2014 at 11:17:34PM +0000, Wiles, Roger Keith wrote:
> > > >
> > > > On Sep 28, 2014, at 5:41 PM, Ananyev, Konstantin
> > > > <konstantin.ananyev at intel.com> wrote:
> > > > >
> > > > >> -----Original Message-----
> > > > >> From: dev [mailto:dev-bounces at dpdk.org] On Behalf Of Wiles,
> > > > >> Roger Keith
> > > > >> Sent: Sunday, September 28, 2014 6:52 PM
> > > > >> To: <dev at dpdk.org>
> > > > >> Subject: [dpdk-dev] [RFC] More changes for
> > > > >> rte_mempool.h:__mempool_get_bulk()
> > > > >>
> > > > >> Here is a Request for Comment on the __mempool_get_bulk()
> > > > >> routine. I believe I am seeing a few more issues in this
> > > > >> routine; please look at the code below and see if these
> > > > >> changes address some concerns in how the ring is handled.
> > > > >>
> > > > >> The first issue, I believe, is that cache->len should be
> > > > >> increased by ret and not by req, as we do not know whether
> > > > >> ret == req. This also means cache->len may still not satisfy
> > > > >> the request from the cache.
> > > > >>
> > > > >> The second issue is that, if you accept the change above, we
> > > > >> then have to account for it in the stats.
> > > > >>
> > > > >> Let me know what you think.
> > > > >> ++Keith
> > > > >> ---
> > > > >>
> > > > >> diff --git a/lib/librte_mempool/rte_mempool.h
> > > > >> b/lib/librte_mempool/rte_mempool.h
> > > > >> index 199a493..b1b1f7a 100644
> > > > >> --- a/lib/librte_mempool/rte_mempool.h
> > > > >> +++ b/lib/librte_mempool/rte_mempool.h
> > > > >> @@ -945,9 +945,7 @@ __mempool_get_bulk(struct rte_mempool *mp,
> > > > >> void **obj_table,
> > > > >>  		   unsigned n, int is_mc)
> > > > >>  {
> > > > >>  	int ret;
> > > > >> -#ifdef RTE_LIBRTE_MEMPOOL_DEBUG
> > > > >> -	unsigned n_orig = n;
> > > > >> -#endif
> > > > >
> > > > > Yep, as I said in my previous mail, n_orig could be removed
> > > > > entirely. Though, on the other hand, it is harmless.
> > > > >
> > > > >> +
> > > > >>  #if RTE_MEMPOOL_CACHE_MAX_SIZE > 0
> > > > >>  	struct rte_mempool_cache *cache;
> > > > >>  	uint32_t index, len;
> > > > >> @@ -979,7 +977,21 @@ __mempool_get_bulk(struct rte_mempool *mp,
> > > > >> void **obj_table,
> > > > >>  			goto ring_dequeue;
> > > > >>  		}
> > > > >>
> > > > >> -		cache->len += req;
> > > > >> +		cache->len += ret;	// Adjust len by ret, not req, as (ret != req)
> > > > >> +
> > > > >
> > > > > rte_ring_mc_dequeue_bulk(.., req) at line 971 would either get
> > > > > all req objects from the ring and return 0 (success), or
> > > > > wouldn't get any entry from the ring and return a negative
> > > > > value (failure).
> > > > > So this change is erroneous.
> > > >
> > > > Sorry, I combined my thoughts on changing the get_bulk behavior,
> > > > and you would be correct for the current design. This is why I
> > > > decided to make it an RFC :-)
> > > >
> > > > >> +		if ( cache->len < n ) {
> > > > >
> > > > > If n > cache_size, then we go straight to 'ring_dequeue'; see
> > > > > line 959.
> > > > > So there is no need for that check here.
> > > >
> > > > My thinking (at the time) was that get_bulk should return 'n'
> > > > instead of zero, which I feel is the better coding. You are
> > > > correct; it does not make sense unless you factor in my thinking
> > > > at the time :-(
> > > >
> > > > >> +			/*
> > > > >> +			 * Number (ret + cache->len) may not be >= n, as
> > > > >> +			 * the 'ret' value may be zero or less than 'req'.
> > > > >> +			 *
> > > > >> +			 * Note:
> > > > >> +			 * Ordering between the cache and the common pool
> > > > >> +			 * could be an issue if (cache->len != 0 and less
> > > > >> +			 * than n), but in the normal case it should be
> > > > >> +			 * OK. If the user needs to preserve the order of
> > > > >> +			 * packets, then he must set cache_size == 0.
> > > > >> +			 */
> > > > >> +			goto ring_dequeue;
> > > > >> +		}
> > > > >>  	}
> > > > >>
> > > > >>  	/* Now fill in the response ... */
> > > > >> @@ -1002,9 +1014,12 @@ ring_dequeue:
> > > > >>  		ret = rte_ring_sc_dequeue_bulk(mp->ring, obj_table, n);
> > > > >>
> > > > >>  	if (ret < 0)
> > > > >> -		__MEMPOOL_STAT_ADD(mp, get_fail, n_orig);
> > > > >> -	else
> > > > >> +		__MEMPOOL_STAT_ADD(mp, get_fail, n);
> > > > >> +	else {
> > > > >>  		__MEMPOOL_STAT_ADD(mp, get_success, ret);
> > > > >> +		// Catch the case when ret != n; adding zero should not be a problem.
> > > > >> +		__MEMPOOL_STAT_ADD(mp, get_fail, n - ret);
> > > > >
> > > > > As I said above, ret == 0 on success, so there is no need for
> > > > > that change.
> > > > > Just n (or n_orig) is OK here.
> > > > >
> > > > >> +	}
> > > > >>
> > > > >>  	return ret;
> > > > >>  }
> > > > >>
> > > > >> Keith Wiles, Principal Technologist with CTO office, Wind River
> > > > >> mobile 972-213-5533
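For reference, the all-or-nothing contract Konstantin describes is what
the NACK hinges on. A minimal caller-side sketch of that contract,
assuming the 1.x-era semantics quoted above (the ring pointer, request
size, and error handling here are illustrative only):

	void *objs[32];
	unsigned req = 32;

	/*
	 * Bulk dequeue is all-or-nothing: 0 on success with exactly
	 * req objects delivered, negative on failure with nothing
	 * dequeued. There is no partial result to account for.
	 */
	if (rte_ring_mc_dequeue_bulk(ring, objs, req) == 0) {
		/* success: objs[0..req-1] are all valid */
	} else {
		/* failure: objs[] is untouched, the ring is unchanged */
	}

Under these semantics, ret is always 0 on the success path, so the
proposed "cache->len += ret" would add nothing at all; that is why the
change is erroneous for the current design.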
> > > >
> > > > Do we think it is worth it to change the behavior of get_bulk to
> > > > return 'n' instead of zero on success? It would remove a few
> > > > tests, IMO, in a couple of places. We could also return <0 in the
> > > > zero case as well, just to make sure code did not try to follow
> > > > the success case by mistake.
> > >
> > > If you want to have such a function, I think it should align with
> > > the functions on the rings. In this case, this would mean having a
> > > get_burst function, which returns less than or equal to the number
> > > of elements requested. I would not change the behaviour of the
> > > existing function without also changing the rings' "bulk"
> > > functions to match.
> >
> > Do you mean a mempool_get_burst() that could return a smaller number
> > of objects than you requested?
> > If so, I wonder what the usage model for it would be?
> > To me it sounds a bit strange - like a malloc() (or mmap) that could
> > allocate only part of the memory you requested.
> >
> I would expect it to be as useful as the rings versions. :-)
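For concreteness, a get_burst along the lines Bruce suggests might look
roughly like the sketch below. This is purely hypothetical: no such
function existed in the mempool API at the time, the name
mempool_get_burst() is invented here, and the sketch ignores the
per-lcore cache that the real __mempool_get_bulk() manages.

	/*
	 * Hypothetical burst-style get: like rte_ring_*_burst(), it
	 * returns however many objects it could dequeue (0..n) rather
	 * than all-or-nothing. Cache handling is omitted for brevity.
	 */
	static inline unsigned
	mempool_get_burst(struct rte_mempool *mp, void **obj_table, unsigned n)
	{
		return rte_ring_mc_dequeue_burst(mp->ring, obj_table, n);
	}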
My understanding is that the ring_*_burst() functions are mainly used
for passing packets between different threads/processes (a sort of
IPC). For that case, burst() seems a much better choice than bulk().

> I don't actually see a problem with it. For example: if you have 10
> messages you want to create and send, I see no reason why, if there
> were only 8 buffers free, you can't create and send 8 and then try
> again for the last 2 once you were finished with those 8.

The whole idea of a partially successful memory allocation still
sounds a bit strange to me. But if you believe there is a use case for
it - I am ok with it :)
Konstantin

> That being said, the reason for suggesting that behaviour was to
> align with the rings APIs. If we are looking at breaking backward
> compatibility (for both rings and mempools), other options can be
> considered too.
>
> /Bruce
>
> > Konstantin
> >
> > > /Bruce
> > >
> > > > > NACK in summary.
> > > > > Konstantin
> > > >
> > > > Keith Wiles, Principal Technologist with CTO office, Wind River
> > > > mobile 972-213-5533
> > >
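Bruce's 10-messages example maps onto a simple retry loop. A sketch of
that usage model, assuming the hypothetical mempool_get_burst() above;
send_pkts() is an invented stand-in for the application's
create-and-transmit path, after which the buffers eventually return to
the pool:

	void *bufs[10];
	unsigned remaining = 10;

	/*
	 * Take whatever the pool can give, use it, and retry for the
	 * remainder: the burst-style alternative to all-or-nothing.
	 */
	while (remaining > 0) {
		unsigned got = mempool_get_burst(mp, bufs, remaining);
		if (got == 0)
			continue;	/* pool empty right now; try again */
		send_pkts(bufs, got);
		remaining -= got;
	}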