On Wed, Oct 30, 2019 at 10:44:04AM +0300, Andrew Rybchenko wrote:
> On 10/30/19 10:36 AM, Andrew Rybchenko wrote:
> > On 10/29/19 8:20 PM, Olivier Matz wrote:
> > > On Tue, Oct 29, 2019 at 01:25:10PM +0300, Andrew Rybchenko wrote:
> > > > On 10/28/19 5:01 PM, Olivier Matz wrote:
> > > > > The previous commit reduced the amount of required memory when
> > > > > populating the mempool with non iova-contiguous memory.
> > > > > 
> > > > > Since there is no big advantage to having a fully iova-contiguous
> > > > > mempool if it is not explicitly asked for, remove this code; it
> > > > > simplifies the populate function.
> > > > > 
> > > > > Signed-off-by: Olivier Matz <olivier.m...@6wind.com>
> > > > One comment below, other than that
> > > > Reviewed-by: Andrew Rybchenko <arybche...@solarflare.com>
> > > > 
> > > > > ---
> > > > >    lib/librte_mempool/rte_mempool.c | 47 ++++++--------------------------
> > > > >    1 file changed, 8 insertions(+), 39 deletions(-)
> > > > > 
> > > > > diff --git a/lib/librte_mempool/rte_mempool.c b/lib/librte_mempool/rte_mempool.c
> > > > > index 3275e48c9..5e1f202dc 100644
> > > > > --- a/lib/librte_mempool/rte_mempool.c
> > > > > +++ b/lib/librte_mempool/rte_mempool.c
> > > > [snip]
> > > > 
> > > > > @@ -531,36 +521,15 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > > >                goto fail;
> > > > >            }
> > > > > -        flags = mz_flags;
> > > > > -
> > > > >            /* if we're trying to reserve contiguous memory, add appropriate
> > > > >             * memzone flag.
> > > > >             */
> > > > > -        if (try_iova_contig_mempool)
> > > > > -            flags |= RTE_MEMZONE_IOVA_CONTIG;
> > > > > +        if (min_chunk_size == (size_t)mem_size)
> > > > > +            mz_flags |= RTE_MEMZONE_IOVA_CONTIG;
> > > > >            mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > > > > -                mp->socket_id, flags, align);
> > > > > -
> > > > > -        /* if we were trying to allocate contiguous memory, failed and
> > > > > -         * minimum required contiguous chunk fits minimum page, adjust
> > > > > -         * memzone size to the page size, and try again.
> > > > > -         */
> > > > > -        if (mz == NULL && try_iova_contig_mempool &&
> > > > > -                min_chunk_size <= pg_sz) {
> > > > > -            try_iova_contig_mempool = false;
> > > > > -            flags &= ~RTE_MEMZONE_IOVA_CONTIG;
> > > > > -
> > > > > -            mem_size = rte_mempool_ops_calc_mem_size(mp, n,
> > > > > -                    pg_shift, &min_chunk_size, &align);
> > > > > -            if (mem_size < 0) {
> > > > > -                ret = mem_size;
> > > > > -                goto fail;
> > > > > -            }
> > > > > +                mp->socket_id, mz_flags, align);
> > > > > -            mz = rte_memzone_reserve_aligned(mz_name, mem_size,
> > > > > -                mp->socket_id, flags, align);
> > > > > -        }
> > > > >            /* don't try reserving with 0 size if we were asked to reserve
> > > > >             * IOVA-contiguous memory.
> > > > >             */
> > > > [snip]
> > > > 
> > > > > @@ -587,7 +556,7 @@ rte_mempool_populate_default(struct rte_mempool *mp)
> > > > >            else
> > > > >                iova = RTE_BAD_IOVA;
> > > > > -        if (try_iova_contig_mempool || pg_sz == 0)
> > > > > +        if (pg_sz == 0)
> > > > I think (mz_flags & RTE_MEMZONE_IOVA_CONTIG) is lost here.
> > > Do you mean we should do instead
> > >     if (pg_sz == 0 || (mz_flags & RTE_MEMZONE_IOVA_CONTIG))
> > >         populate_iova()
> > >     else
> > >         populate_virt()
> > > ?
> > 
> > Yes.
> > 
> > > I would say yes, but it should be removed in patch 5/5.
> > > In the end, we don't want objects to cross page boundaries, even
> > > with RTE_MEMZONE_IOVA_CONTIG.
> 
> Thinking more about it, I don't understand why. If we know that
> the memzone is IOVA-contiguous, why should we not populate
> it that way? It does not prevent patch 5/5 from doing its job.

You are right. The page-crossing check is done in the ops->populate,
so it is also done by populate_iova().
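
For clarity, the condition agreed on above could be sketched like this
(a minimal sketch with hypothetical stand-ins, not the actual DPDK
symbols; the real flag is RTE_MEMZONE_IOVA_CONTIG):

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical stand-in for the RTE_MEMZONE_IOVA_CONTIG flag bit. */
#define MEMZONE_IOVA_CONTIG 0x1u

/* Populate by IOVA when the page size is unknown (pg_sz == 0) or the
 * memzone was reserved IOVA-contiguous; otherwise populate by virtual
 * address, page by page. The page-crossing check itself is done in
 * ops->populate, so both paths honor it.
 */
static bool use_populate_iova(size_t pg_sz, unsigned int mz_flags)
{
	return pg_sz == 0 || (mz_flags & MEMZONE_IOVA_CONTIG) != 0;
}
```

With this, the populate call becomes `use_populate_iova(pg_sz, mz_flags)
? rte_mempool_populate_iova(...) : rte_mempool_populate_virt(...)`.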

> > > > >                ret = rte_mempool_populate_iova(mp, mz->addr,
> > > > >                    iova, mz->len,
> > > > >                    rte_mempool_memchunk_mz_free,
> 
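
To summarize the reserve-side change in the hunk above: with the
optimistic allocate-then-retry path removed, the IOVA-contiguous
memzone flag is requested only when the driver reports that the whole
area must be a single chunk. A minimal sketch under that assumption
(hypothetical helper name, not the actual DPDK code):

```c
#include <stdbool.h>
#include <stddef.h>

/* RTE_MEMZONE_IOVA_CONTIG is added to mz_flags only when
 * rte_mempool_ops_calc_mem_size() reported that the minimum contiguous
 * chunk equals the entire memory size, i.e. the driver genuinely needs
 * one IOVA-contiguous area.
 */
static bool want_iova_contig(size_t min_chunk_size, size_t mem_size)
{
	return min_chunk_size == mem_size;
}
```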
