Hi Anatoly,

On Fri, Jul 19, 2019 at 03:03:29PM +0100, Burakov, Anatoly wrote:
> On 19-Jul-19 2:38 PM, Olivier Matz wrote:
> > When using iova contiguous memory and objects smaller than page size,
> > ensure that objects are not located across several pages.
> > 
> > Signed-off-by: Vamsi Krishna Attunuru <vattun...@marvell.com>
> > Signed-off-by: Olivier Matz <olivier.m...@6wind.com>
> > ---
> >   lib/librte_mempool/rte_mempool_ops_default.c | 39 ++++++++++++++++++++++++++--
> >   1 file changed, 37 insertions(+), 2 deletions(-)
> > 
> > diff --git a/lib/librte_mempool/rte_mempool_ops_default.c b/lib/librte_mempool/rte_mempool_ops_default.c
> > index 4e2bfc82d..2bbd67367 100644
> > --- a/lib/librte_mempool/rte_mempool_ops_default.c
> > +++ b/lib/librte_mempool/rte_mempool_ops_default.c
> > @@ -45,19 +45,54 @@ rte_mempool_op_calc_mem_size_default(const struct rte_mempool *mp,
> >     return mem_size;
> >   }
> > +/* Returns -1 if the object crosses a page boundary, else returns 0 */
> > +static inline int
> > +mempool_check_obj_bounds(void *obj, uint64_t pg_sz, size_t elt_sz)
> > +{
> > +   uintptr_t page_end, elt_addr = (uintptr_t)obj;
> > +   uint32_t pg_shift;
> > +   uint64_t page_mask;
> > +
> > +   if (pg_sz == 0)
> > +           return 0;
> > +   if (elt_sz > pg_sz)
> > +           return 0;
> > +
> > +   pg_shift = rte_bsf32(pg_sz);
> > +   page_mask = ~((1ull << pg_shift) - 1);
> > +   page_end = (elt_addr & page_mask) + pg_sz;
> 
> This looks like RTE_PTR_ALIGN should do this without the magic? E.g.
> 
> page_end = RTE_PTR_ALIGN(elt_addr, pg_sz)
> 
> would that not be equivalent?

Yes, I simplified this part in the new version, thanks.
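
For the record, the simplified check boils down to something like the
following (a sketch using RTE_PTR_ALIGN_FLOOR() and RTE_PTR_ADD() from
rte_common.h; not necessarily the exact code of the next revision). An
object stays within a single page iff its first and last bytes round
down to the same page base:

    /* Sketch only, assumed names: returns -1 if the object would
     * cross a page boundary, else returns 0. */
    static int
    mempool_check_obj_bounds(void *obj, uint64_t pg_sz, size_t elt_sz)
    {
            if (pg_sz == 0) /* no page size constraint */
                    return 0;
            if (elt_sz > pg_sz) /* cannot fit in one page anyway */
                    return 0;
            /* do the first and last bytes share the same page base? */
            if (RTE_PTR_ALIGN_FLOOR(obj, pg_sz) !=
                            RTE_PTR_ALIGN_FLOOR(RTE_PTR_ADD(obj, elt_sz - 1), pg_sz))
                    return -1;
            return 0;
    }

This keeps the two early returns of the original version but drops the
explicit shift/mask arithmetic.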

Olivier
