It is a bit of a shame that that plugin doesn’t scale. Somebody will need to
rewrite that plugin to make it right, i.e. simple use of sub-interfaces will
likely make this limitation disappear...
—
Damjan
> On Feb 4, 2019, at 5:56 AM, Abeeha Aqeel wrote:
>
>
> I am using the vpp pppoe p
Hi Hongjun,
Integrating this work with Sweetcomb would be interesting, because stats may be
"enriched" with extra information which is not exposed to the stats shared
memory segment.
Because of Chinese New Year, there won't be a weekly call on Thursday, but
maybe Yohan & Stevan could attend the next call.
Hi Yohan & Stevan,
Great work. Thanks!
Jerome
On 02/02/2019 at 23:35, « vpp-dev@lists.fd.io on behalf of Yohan Pipereau »
wrote:
Hi everyone,
Stevan and I have developed a small gRPC server to stream VPP metrics to
an analytic stack.
That's right, there is already a prog
I am using the vpp pppoe plugin and that’s how it’s working. I do see an option
in vnet/interface.c to create interfaces that do not need TX nodes, but I am
not sure how to use that.
Also, I cannot figure out where the nodes created along with the pppoe sessions
are being used, as they do n
> On 3 Feb 2019, at 20:13, Saxena, Nitin wrote:
>
> Hi Damjan,
>
> See function octeontx_fpa_bufpool_alloc() called by octeontx_fpa_dequeue().
> It's a single read instruction to get the pointer to the data.
Yeah, saw that, and today the vpp buffer manager can grab up to 16 buffer
indices with one in
Hi Damjan,
See function octeontx_fpa_bufpool_alloc() called by octeontx_fpa_dequeue(). It's
a single read instruction to get the pointer to the data.
Similarly, octeontx_fpa_bufpool_free() is also a single write instruction.
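For readers not familiar with hardware-managed pools, the following is a purely
illustrative sketch, not the actual OCTEONTx FPA driver code: a hypothetical
MMIO pool where allocation is one load and free is one store, with the free
list kept entirely in hardware. Register names and layout are assumptions.

#include <stdint.h>

/* Hypothetical MMIO registers of a hardware buffer pool (FPA-like device). */
typedef struct {
  volatile uint64_t alloc_reg;  /* read: pop a buffer address, 0 if empty */
  volatile uint64_t free_reg;   /* write: push a buffer address back      */
} hw_pool_regs_t;

/* Allocation is a single load: the device dequeues a buffer from its
 * hardware-managed free list and returns its address. */
static inline void *
hw_pool_alloc (hw_pool_regs_t *regs)
{
  return (void *) (uintptr_t) regs->alloc_reg;
}

/* Freeing is a single store: the device enqueues the buffer back onto its
 * free list; software keeps no free list at all. */
static inline void
hw_pool_free (hw_pool_regs_t *regs, void *buf)
{
  regs->free_reg = (uint64_t) (uintptr_t) buf;
}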
So, if you are able to prove with numbers that the current software solution is
> On 3 Feb 2019, at 18:38, Nitin Saxena wrote:
>
> Hi Damjan,
>
>> Which exact operation do they accelerate?
> There are many… the basic features are:
> - They accelerate fast buffer free and alloc; a single instruction is
> required for both operations.
I quickly looked into DPDK octeontx_fpavf_dequ
Hi Damjan,
Which exact operation do they accelerate?
There are many… the basic features are:
- They accelerate fast buffer free and alloc; a single instruction is required
for both operations.
- The free list is maintained by hardware, not software.
Further, other co-processors are dependent on the buffer being
Hi Damjan,
I have a few queries regarding this patch.
- DPDK mempools are not used anymore, we register custom mempool ops, and dpdk
is taking buffers from VPP
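To illustrate the custom mempool ops mechanism mentioned above, here is a
minimal sketch of what registering such ops looks like on the DPDK side.
struct rte_mempool_ops and MEMPOOL_REGISTER_OPS are real DPDK APIs (pre-21.11
naming); the vpp_* callbacks are hypothetical stubs standing in for code that
would hand buffers to and from VPP's buffer manager, and are not taken from
the patch under discussion.

#include <errno.h>
#include <rte_mempool.h>

/* Hypothetical stubs; a real implementation would talk to VPP's buffer
 * manager in enqueue/dequeue. */

static int
vpp_mempool_alloc (struct rte_mempool *mp)
{
  mp->pool_data = NULL;   /* per-pool private state would be attached here */
  return 0;
}

static void
vpp_mempool_free (struct rte_mempool *mp)
{
  (void) mp;              /* nothing to release in this stub */
}

static int
vpp_mempool_enqueue (struct rte_mempool *mp, void *const *obj_table,
                     unsigned int n)
{
  (void) mp; (void) obj_table; (void) n;
  return 0;               /* would return n buffers to the external allocator */
}

static int
vpp_mempool_dequeue (struct rte_mempool *mp, void **obj_table, unsigned int n)
{
  (void) mp; (void) obj_table; (void) n;
  return -ENOENT;         /* would fetch n buffers and fill obj_table */
}

static unsigned int
vpp_mempool_get_count (const struct rte_mempool *mp)
{
  (void) mp;
  return 0;
}

static const struct rte_mempool_ops vpp_mempool_ops = {
  .name      = "vpp",
  .alloc     = vpp_mempool_alloc,
  .free      = vpp_mempool_free,
  .enqueue   = vpp_mempool_enqueue,
  .dequeue   = vpp_mempool_dequeue,
  .get_count = vpp_mempool_get_count,
};

/* Make the ops selectable via rte_mempool_set_ops_byname(mp, "vpp", NULL). */
MEMPOOL_REGISTER_OPS (vpp_mempool_ops);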
Some of the targets use a hardware memory allocator, like the OCTEONTx family
and NXP's dpaa. Those hardware allocators are exposed as dpdk
> On 3 Feb 2019, at 16:58, Nitin Saxena wrote:
>
> Hi Damjan,
>
> I have a few queries regarding this patch.
>
> - DPDK mempools are not used anymore, we register custom mempool ops, and
> dpdk is taking buffers from VPP
> Some of the targets use a hardware memory allocator like the OCTEONTx famil
Hi there,
> Breaking this out into its own thread.
>
>
> Currently when creating a new dynamic NAT session, the source IP and source
> port are considered. If I've understood this right, the next time the user
> (source IP) sends traffic matching the previous traffic (source IP + source
> po
Hello,
Breaking this out into its own thread.
Currently when creating a new dynamic NAT session, the source IP and source
port are considered. If I've understood this right, the next time the user
(source IP) sends traffic matching the previous traffic (source IP + source
port), the same
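To make the matching behaviour described above concrete, here is a small
self-contained sketch. It is purely illustrative, not VPP's NAT44 code, and
all names and structures are hypothetical: a dynamic session is keyed on
source IP, source port and protocol, and a later packet with the same key
reuses the existing translation instead of allocating a new outside port.

#include <stdint.h>
#include <stdio.h>

typedef struct {
  uint32_t src_addr;   /* inside source IP             */
  uint16_t src_port;   /* inside source port           */
  uint8_t  proto;      /* transport protocol (6 = TCP) */
} nat_session_key_t;

typedef struct {
  nat_session_key_t key;
  uint32_t out_addr;   /* translated (outside) address */
  uint16_t out_port;   /* translated (outside) port    */
  int      in_use;
} nat_session_t;

#define MAX_SESSIONS 1024
static nat_session_t sessions[MAX_SESSIONS];
static uint16_t next_dynamic_port = 1024;

/* Return the existing session for (src IP, src port, proto), or create a new
 * one with a freshly allocated outside port on the NAT pool address. */
static nat_session_t *
nat_session_find_or_create (nat_session_key_t key, uint32_t nat_pool_addr)
{
  for (int i = 0; i < MAX_SESSIONS; i++)
    if (sessions[i].in_use
        && sessions[i].key.src_addr == key.src_addr
        && sessions[i].key.src_port == key.src_port
        && sessions[i].key.proto == key.proto)
      return &sessions[i];          /* same user + port -> same translation */

  for (int i = 0; i < MAX_SESSIONS; i++)
    if (!sessions[i].in_use)
      {
        sessions[i].key = key;
        sessions[i].out_addr = nat_pool_addr;
        sessions[i].out_port = next_dynamic_port++;
        sessions[i].in_use = 1;
        return &sessions[i];
      }
  return 0;                          /* session table full */
}

int
main (void)
{
  nat_session_key_t k = { .src_addr = 0x0a000001, .src_port = 40000, .proto = 6 };
  nat_session_t *a = nat_session_find_or_create (k, 0xc0000201);
  nat_session_t *b = nat_session_find_or_create (k, 0xc0000201);
  printf ("reused existing session: %s (outside port %u)\n",
          a == b ? "yes" : "no", a->out_port);
  return 0;
}

In a real dataplane this lookup would of course be a hash table rather than a
linear scan, and the reverse (out2in) direction needs its own key; the sketch
only shows the session-reuse behaviour being asked about.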