> On 3 Sep 2020, at 18:24, Solis JR, M. (Mauricio) via lists.fd.io 
> <mauricio.solisjr=tno...@lists.fd.io> wrote:
> 
> [Edited Message Follows]
> 
> Are there any resources where I can read more about the VPP buffer 
> architecture?  

As Dave likes to say, “Use the force, read the source” :)
As I am mainly guilty for the lack of such a document, ask here if you have any 
questions and I will try to answer.

> 
> @Dave
> In regards to identifying the number of total buffers, are you referring to 
> the vector size of vlib_buffer_pools_t in vlib_buffer_main?

Look at the "show buffers" output:

vpp# show buffers
Pool Name            Index NUMA  Size  Data Size  Total  Avail  Cached   Used
default-numa-0         0     0   2496     2048    16800  16800     0       0
default-numa-1         1     1   2496     2048    16800  16800     0       0

How to read this:

We have 2 buffer pools, one per NUMA node, and the total per-buffer size is 
2496 bytes, so in one 2 MB hugepage we can fit 840 buffers.
A single buffer cannot span 2 pages, so there is a bit of wasted space at the 
end of each page (512 bytes).
In total we have allocated 20 2 MB hugepages on each NUMA node, so 
840 * 20 = 16800 buffers.
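
To make the arithmetic explicit, here is a tiny stand-alone C sketch, not VPP 
code, just the numbers taken from the "show buffers" output above:

  #include <stdio.h>

  int
  main (void)
  {
    unsigned page_size = 2 * 1024 * 1024; /* one 2 MB hugepage */
    unsigned alloc_size = 2496;           /* total per-buffer size from "show buffers" */
    unsigned n_pages = 20;                /* hugepages allocated per NUMA node */

    unsigned buffers_per_page = page_size / alloc_size;                   /* 840 */
    unsigned wasted_per_page = page_size - buffers_per_page * alloc_size; /* 512 */
    unsigned total_buffers = buffers_per_page * n_pages;                  /* 16800 */

    printf ("buffers per page: %u, wasted bytes per page: %u, total per numa: %u\n",
            buffers_per_page, wasted_per_page, total_buffers);
    return 0;
  }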

A buffer can be in 3 possible states: Available, Cached and Used.
Avail buffers are free buffers in the global buffer pool.
Cached buffers are free buffers in the per-thread cache. For performance 
reasons each worker thread has its own free-buffer cache.
Used buffers are buffers allocated by different parts of VPP; mainly they sit 
on RX queues, but in some cases feature nodes can also keep them allocated, 
e.g. IP reassembly.
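
To tie this back to your question about vlib_buffer_main: the per-pool totals 
come from the pool vector hanging off it. A rough sketch of walking that 
vector from inside a plugin or CLI handler is below; the field names used 
here (buffer_main, buffer_pools, n_buffers) are taken from a recent tree, so 
treat them as an assumption and double-check src/vlib/buffer.h in your 
version:

  #include <vlib/vlib.h>

  static u32
  count_total_buffers (vlib_main_t * vm)
  {
    vlib_buffer_main_t *bm = vm->buffer_main;
    vlib_buffer_pool_t *bp;
    u32 total = 0;

    /* one vlib_buffer_pool_t per line of "show buffers" output */
    vec_foreach (bp, bm->buffer_pools)
      total += bp->n_buffers;   /* the "Total" column */

    return total;               /* 16800 + 16800 = 33600 in the example above */
  }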

— 
Damjan
   
