> -----Original Message-----
> From: Andrew Rybchenko <arybche...@solarflare.com>
> Sent: Monday, July 29, 2019 6:12 PM
> To: Vamsi Krishna Attunuru <vattun...@marvell.com>; dev@dpdk.org
> Cc: tho...@monjalon.net; Jerin Jacob Kollanukkaran <jer...@marvell.com>;
> olivier.m...@6wind.com; ferruh.yi...@intel.com;
> anatoly.bura...@intel.com; Kiran Kumar Kokkilagadda
> <kirankum...@marvell.com>
> Subject: [EXT] Re: [dpdk-dev] [PATCH v9 1/5] mempool: populate mempool
> with the page sized chunks of memory
>
> External Email
>
> ----------------------------------------------------------------------
> On 7/29/19 3:13 PM, vattun...@marvell.com wrote:
> > From: Vamsi Attunuru <vattun...@marvell.com>
> >
> > Patch adds a routine to populate mempool from page aligned and page
> > sized chunks of memory to ensure memory objs do not fall across the
> > page boundaries. It's useful for applications that require physically
> > contiguous mbuf memory while running in IOVA=VA mode.
> >
> > Signed-off-by: Vamsi Attunuru <vattun...@marvell.com>
> > Signed-off-by: Kiran Kumar K <kirankum...@marvell.com>
>
> When two below issues fixed:
> Acked-by: Andrew Rybchenko <arybche...@solarflare.com>
>
> As I understand it is likely to be a temporary solution until the
> problem is fixed in a generic way.
>
> > ---
> >  lib/librte_mempool/rte_mempool.c           | 64 ++++++++++++++++++++++++++++++
> >  lib/librte_mempool/rte_mempool.h           | 17 ++++++++
> >  lib/librte_mempool/rte_mempool_version.map |  1 +
> >  3 files changed, 82 insertions(+)
> >
> > diff --git a/lib/librte_mempool/rte_mempool.c
> > b/lib/librte_mempool/rte_mempool.c
> > index 7260ce0..00619bd 100644
> > --- a/lib/librte_mempool/rte_mempool.c
> > +++ b/lib/librte_mempool/rte_mempool.c
> > @@ -414,6 +414,70 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> >  	return ret;
> >  }
> >
> > +/* Function to populate mempool from page sized mem chunks, allocate
> > + * page size of memory in memzone and populate them. Return the
> > + * number of objects added, or a negative value on error.
> > + */
> > +int
> > +rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp)
> > +{
> > +	char mz_name[RTE_MEMZONE_NAMESIZE];
> > +	size_t align, pg_sz, pg_shift;
> > +	const struct rte_memzone *mz;
> > +	unsigned int mz_id, n;
> > +	size_t min_chunk_size;
> > +	int ret;
> > +
> > +	ret = mempool_ops_alloc_once(mp);
> > +	if (ret != 0)
> > +		return ret;
> > +
> > +	if (mp->nb_mem_chunks != 0)
> > +		return -EEXIST;
> > +
> > +	pg_sz = get_min_page_size(mp->socket_id);
> > +	pg_shift = rte_bsf32(pg_sz);
> > +
> > +	for (mz_id = 0, n = mp->size; n > 0; mz_id++, n -= ret) {
> > +
> > +		ret = rte_mempool_ops_calc_mem_size(mp, n,
> > +				pg_shift, &min_chunk_size, &align);
> > +
> > +		if (ret < 0 || min_chunk_size > pg_sz)
>
> If min_chunk_size is greater than pg_sz, ret is 0 and function returns
> success.

Ack, will fix it in next version.

> > +			goto fail;
> > +
> > +		ret = snprintf(mz_name, sizeof(mz_name),
> > +			RTE_MEMPOOL_MZ_FORMAT "_%d", mp->name, mz_id);
> > +		if (ret < 0 || ret >= (int)sizeof(mz_name)) {
> > +			ret = -ENAMETOOLONG;
> > +			goto fail;
> > +		}
> > +
> > +		mz = rte_memzone_reserve_aligned(mz_name, min_chunk_size,
> > +			mp->socket_id, 0, align);
> > +
> > +		if (mz == NULL) {
> > +			ret = -rte_errno;
> > +			goto fail;
> > +		}
> > +
> > +		ret = rte_mempool_populate_iova(mp, mz->addr,
> > +				mz->iova, mz->len,
> > +				rte_mempool_memchunk_mz_free,
> > +				(void *)(uintptr_t)mz);
> > +		if (ret < 0) {
> > +			rte_memzone_free(mz);
> > +			goto fail;
> > +		}
> > +	}
> > +
> > +	return mp->size;
> > +
> > +fail:
> > +	rte_mempool_free_memchunks(mp);
> > +	return ret;
> > +}
> > +
> >  /* Default function to populate the mempool: allocate memory in memzones,
> >   * and populate them. Return the number of objects added, or a negative
> >   * value on error.
> > diff --git a/lib/librte_mempool/rte_mempool.h
> > b/lib/librte_mempool/rte_mempool.h
> > index 8053f7a..3046e4f 100644
> > --- a/lib/librte_mempool/rte_mempool.h
> > +++ b/lib/librte_mempool/rte_mempool.h
> > @@ -1062,6 +1062,23 @@ rte_mempool_populate_virt(struct rte_mempool *mp, char *addr,
> >  			void *opaque);
> >
> >  /**
>
>  * @warning
>  * @b EXPERIMENTAL: this API may change without prior notice.
>
> is missing

Ack

> > + * Add memory from page sized memzones for objects in the pool at init
> > + *
> > + * This is the function used to populate the mempool with page aligned and
> > + * page sized memzone memory to avoid spreading object memory across two pages
> > + * and to ensure all mempool objects reside on the page memory.
> > + *
> > + * @param mp
> > + *   A pointer to the mempool structure.
> > + * @return
> > + *   The number of objects added on success.
> > + *   On error, the chunk is not added in the memory list of the
> > + *   mempool and a negative errno is returned.
> > + */
> > +__rte_experimental
> > +int rte_mempool_populate_from_pg_sz_chunks(struct rte_mempool *mp);
> > +
> > +/**
> >   * Add memory for objects in the pool at init
> >   *
> >   * This is the default function used by rte_mempool_create() to populate
> > diff --git a/lib/librte_mempool/rte_mempool_version.map
> > b/lib/librte_mempool/rte_mempool_version.map
> > index 17cbca4..9a6fe65 100644
> > --- a/lib/librte_mempool/rte_mempool_version.map
> > +++ b/lib/librte_mempool/rte_mempool_version.map
> > @@ -57,4 +57,5 @@ EXPERIMENTAL {
> >  	global:
> >
> >  	rte_mempool_ops_get_info;
> > +	rte_mempool_populate_from_pg_sz_chunks;
> >  };