On Mon, Jan 23, 2023 at 01:23:50PM +0100, Morten Brørup wrote:
> > From: Konstantin Ananyev [mailto:konstantin.anan...@huawei.com]
> > Sent: Monday, 23 January 2023 12.54
> > 
> > > > Few nits, see below.
> > > > Also I still think we do need a test case for _zc_get_ before
> > > > accepting it in the mainline.
> > > 
> > > Poking at my bad conscience... :-)
> > > 
> > > It's on my todo-list. Apparently not high enough. ;-)
> > > 
> > > > With that in place:
> > > > Acked-by: Konstantin Ananyev <konstantin.v.anan...@yandex.ru>
> > > 
> > > [...]
> > 
> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> > > > > + *
> > > > > + * Zero-copy put objects in a user-owned mempool cache backed by the specified mempool.
> > > > > + *
> > > > > + * @param cache
> > > > > + *   A pointer to the mempool cache.
> > > > > + * @param mp
> > > > > + *   A pointer to the mempool.
> > > > > + * @param n
> > > > > + *   The number of objects to be put in the mempool cache.
> > > > > + * @return
> > > > > + *   The pointer to where to put the objects in the mempool cache.
> > > > > + *   NULL if the request itself is too big for the cache, i.e.
> > > > > + *   exceeds the cache flush threshold.
> > > > > + */
> > > > > +__rte_experimental
> > > > > +static __rte_always_inline void **
> > > > > +rte_mempool_cache_zc_put_bulk(struct rte_mempool_cache *cache,
> > > > > +		struct rte_mempool *mp,
> > > > > +		unsigned int n)
> > > > > +{
> > > > > +	RTE_ASSERT(cache != NULL);
> > > > > +	RTE_ASSERT(mp != NULL);
> > > > > +
> > > > > +	rte_mempool_trace_cache_zc_put_bulk(cache, mp, n);
> > > > > +	return __rte_mempool_cache_zc_put_bulk(cache, mp, n);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> > > > > + *
> > > > > + * Zero-copy un-put objects in a user-owned mempool cache.
> > > > > + *
> > > > > + * @param cache
> > > > > + *   A pointer to the mempool cache.
> > > > > + * @param n
> > > > > + *   The number of objects not put in the mempool cache after calling
> > > > > + *   rte_mempool_cache_zc_put_bulk().
> > > > > + */
> > > > > +__rte_experimental
> > > > > +static __rte_always_inline void
> > > > > +rte_mempool_cache_zc_put_rewind(struct rte_mempool_cache *cache,
> > > > > +		unsigned int n)
> > > > > +{
> > > > > +	RTE_ASSERT(cache != NULL);
> > > > > +	RTE_ASSERT(n <= cache->len);
> > > > > +
> > > > > +	rte_mempool_trace_cache_zc_put_rewind(cache, n);
> > > > > +
> > > > > +	cache->len -= n;
> > > > > +
> > > > > +	RTE_MEMPOOL_CACHE_STAT_ADD(cache, put_objs, (int)-n);
> > > > > +}
> > > > > +
> > > > > +/**
> > > > > + * @warning
> > > > > + * @b EXPERIMENTAL: This API may change, or be removed, without prior notice.
> > > > > + *
> > > > > + * Zero-copy get objects from a user-owned mempool cache backed by the specified mempool.
> > > > > + *
> > > > > + * @param cache
> > > > > + *   A pointer to the mempool cache.
> > > > > + * @param mp
> > > > > + *   A pointer to the mempool.
> > > > > + * @param n
> > > > > + *   The number of objects to prefetch into the mempool cache.
> > > > 
> > > > Why not 'get' instead of 'prefetch'?
> > > 
> > > This was my thinking:
> > > 
> > > The function "prefetches" the objects into the cache. It is the
> > > application itself that "gets" the objects from the cache after
> > > having called the function.
> > > You might also notice that the n parameter for the zc_put() function
> > > is described as "to be put" (future), not "to put" (now) in the
> > > cache.
> > > 
> > > On the other hand, I chose "Zero-copy get" for the function headline
> > > to keep it simple.
> > > 
> > > If you think "get" is a more correct description of the n parameter,
> > > I can change it.
> > > 
> > > Alternatively, I can use the same style as zc_put(), i.e. "to be
> > > gotten from the mempool cache" - but that would require input from
> > > a native English speaker, because Danish and English grammar are
> > > very different, and I am highly uncertain about my English grammar
> > > here! I originally considered this phrase, but concluded that the
> > > "prefetch" description was easier to understand - especially for
> > > non-native English readers.
> > 
> > For me 'prefetch' seems a bit unclear in that situation...
> > Probably: "number of objects that the user plans to extract from the
> > cache"?
> > But again, I am not a native English speaker either, so maybe someone
> > can suggest a better option.
> 
> @Bruce (or any other native English speaker), your input would be
> appreciated here!

I was happily ignoring this thread until you went and dragged me in with
a hard question. :-)
I think the longer the explanation, the clearer it is likely to be. How
about "number of objects to be made available for extraction from the
cache"? I don't like the reference to "the user" in the longer suggestion
above, but otherwise consider it clearer than talking of "prefetching" or
"getting".

My 2c.

/Bruce