Hi,

Thanks for the clarification.

Regards,
Siddarth

On Tue, Feb 4, 2020 at 4:43 PM Burakov, Anatoly <anatoly.bura...@intel.com>
wrote:

> On 04-Feb-20 10:55 AM, siddarth rai wrote:
> > Hi Anatoly,
> >
> > I don't need a secondary process.
>
> I understand that you don't; however, that doesn't change the fact that
> the code path expects that you do.
>
> >
> > I tried out Julien's suggestion and set the 'RTE_MAX_MEM_MB' parameter to
> > 8192 (the original value was over 500K). This works as a cap. The virtual
> > size dropped to less than 8G, so this seems to be working for me.
> >
> > I have a few queries/concerns though.
> > Is it safe to reduce RTE_MAX_MEM_MB to such a low value? Can I reduce it
> > further? What will be the impact of doing so? Will it limit the maximum
> > size of the mbuf pools I create?
>
> It depends on your use case. The maximum size of a mempool is limited in
> any case; the better question is where to place that limit. In my
> experience, testpmd mempools are typically around 400MB per socket, so an
> 8G upper limit should not interfere with testpmd very much. However,
> depending on what else is in there and what kinds of allocations you do,
> it may have other effects.
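>
> For a rough sense of scale (a back-of-the-envelope sketch with assumed
> sizes, not testpmd's exact defaults): a standard pktmbuf costs roughly
> 2.3 KB of pool memory (~128 B of struct rte_mbuf + 128 B headroom + 2048 B
> data room), so a pool on the order of 180k mbufs lands near the 400MB
> figure above:
>
>     #include <rte_mbuf.h>
>
>     /* hypothetical sizing, not testpmd's exact defaults */
>     struct rte_mempool *mp = rte_pktmbuf_pool_create(
>             "mb_pool_0",
>             180000,                    /* number of mbufs              */
>             256,                       /* per-lcore cache size         */
>             0,                         /* private area size            */
>             RTE_MBUF_DEFAULT_BUF_SIZE, /* 2048 B data room + headroom  */
>             SOCKET_ID_ANY);
>     /* ~180000 mbufs x ~2.3 KB each => on the order of 400 MB          */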
>
> Currently, the size of each internal per-NUMA-node, per-page-size page
> table is dictated by three constraints: the maximum amount of memory per
> page table (so that we don't attempt to reserve thousands of 1G pages),
> the maximum number of pages per page table (so that we aren't left with a
> few hundred megabytes' worth of 2M pages), and the total maximum amount
> of memory (which places an upper limit on the sum of all page tables'
> memory amounts).
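>
> For reference (a sketch of the build-time knobs as I remember them from a
> 19.08 tree; check config/common_base in your own tree for the exact
> defaults), those three constraints roughly correspond to:
>
>     CONFIG_RTE_MAX_MEM_MB_PER_LIST=32768   # max memory per page table
>     CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192    # max pages per page table
>     CONFIG_RTE_MAX_MEM_MB=524288           # total upper limit (~512G)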
>
> You have lowered the latter to 8G, which means that, depending on your
> system configuration, you will have at most 2G to 4G per page table. It
> is not possible to limit it further (for example, to skip reservation on
> certain nodes or certain page sizes). Whether it will have an effect on
> your actual workload depends on your use case.
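>
> As a concrete illustration (assuming the usual two hugepage sizes, 2M and
> 1G): with one NUMA node there are two such page tables, so 8G / 2 = 4G
> each; with two NUMA nodes there are four, so 8G / 4 = 2G each. That is
> where the 2G-to-4G range above comes from.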
>
> >
> > Regards,
> > Siddarth
> >
> > On Tue, Feb 4, 2020 at 3:53 PM Burakov, Anatoly
> > <anatoly.bura...@intel.com> wrote:
> >
> >     On 30-Jan-20 8:51 AM, David Marchand wrote:
> >      > On Thu, Jan 30, 2020 at 8:48 AM siddarth rai <sid...@gmail.com> wrote:
> >      >> I have been using DPDK 19.08 and I notice the process VSZ is huge.
> >      >>
> >      >> I tried running testpmd. It takes 64G of VSZ, and if I use the
> >      >> '--in-memory' option it takes up to 188G.
> >      >>
> >      >> Is there any way to disable allocation of such a huge VSZ in DPDK?
> >      >
> >      > *Disclaimer* I don't know the arcana of the mem subsystem.
> >      >
> >      > I suppose this is due to the memory allocator in DPDK that reserves
> >      > unused virtual space (for memory hotplug + multiprocess).
> >
> >     Yes, that's correct. In order to guarantee that memory reservation
> >     succeeds at all times, we need to reserve all possible memory in
> >     advance. Otherwise we may end up in a situation where the primary
> >     process has allocated a page, but the secondary process can't map it
> >     because that address space is already occupied by something else.
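> >
> >     To illustrate the idea (a minimal sketch of the technique, not the
> >     actual EAL code): the address space is pre-reserved with an anonymous
> >     PROT_NONE mapping, which consumes VSZ but no physical memory, and
> >     parts of it are only later backed by real hugepages:
> >
> >         #include <stddef.h>
> >         #include <sys/mman.h>
> >
> >         /* reserve 'len' bytes of virtual address space with no backing */
> >         static void *reserve_va_space(size_t len)
> >         {
> >             void *va = mmap(NULL, len, PROT_NONE,
> >                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
> >             return va == MAP_FAILED ? NULL : va;
> >         }
> >
> >     Because the same region is reserved up front, a page allocated later
> >     in the primary can be mapped at the same address in the secondaries.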
> >
> >      >
> >      > If this is the case, maybe we could do something to improve the
> >      > situation for applications that don't care about multiprocess,
> >      > like informing DPDK that the application won't use multiprocess
> >      > and skipping those reservations.
> >
> >     You're welcome to try this, but I assure you, avoiding these
> >     reservations is a lot of work, because you'd be adding yet another
> >     path to an already overly complex allocator :)
> >
> >      >
> >      > Or another idea would be to limit those reservations to what is
> >      > passed via --socket-limit.
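> >      >
> >      > (For context, --socket-limit is the existing EAL option that caps
> >      > how much memory the process may allocate per socket; the exact
> >      > invocation below is from memory, so treat it as an assumption:
> >      >
> >      >     ./testpmd --socket-mem=1024,1024 --socket-limit=2048,2048 -- -i
> >      >
> >      > It only applies in dynamic memory mode, not with --legacy-mem.)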
> >      >
> >      > Anatoly?
> >
> >     I have a patchset in the works that does this and was planning to
> >     submit it to 19.08, but things got in the way and it's still sitting
> >     there collecting bit rot. This may be reason enough to resurrect it
> >     and finish it up :)
> >
> >      >
> >      >
> >      >
> >      > --
> >      > David Marchand
> >      >
> >
> >
> >     --
> >     Thanks,
> >     Anatoly
> >
>
>
> --
> Thanks,
> Anatoly
>
