Hi Kusuma,
Applications that attach to the session layer (through the session API) can
read/write data, i.e., byte streams or datagrams, that are delivered from/to the
underlying transports (TCP, UDP, QUIC, TLS), either via specific session-layer
APIs or via POSIX-like APIs if they use the VCL library.
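As a rough illustration of the POSIX-like path mentioned above, a minimal VCL
TCP client might look something like the sketch below; the app name, server
address and port are placeholders and error handling is reduced to bare checks.

/* Minimal, illustrative VCL client: attaches to VPP's session layer and
 * exchanges a few bytes over TCP. Address/port below are placeholders. */
#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>
#include <vcl/vppcom.h>

int
main (void)
{
  if (vppcom_app_create ("vcl-demo-client") < 0)
    return 1;

  int sh = vppcom_session_create (VPPCOM_PROTO_TCP, 0 /* blocking */);
  if (sh < 0)
    return 1;

  struct in_addr a;
  inet_pton (AF_INET, "192.0.2.1", &a);  /* placeholder server address */
  vppcom_endpt_t ep = { .is_ip4 = 1, .ip = (uint8_t *) &a, .port = htons (5000) };

  if (vppcom_session_connect (sh, &ep) < 0)
    return 1;

  char msg[] = "hello";
  char buf[128];
  vppcom_session_write (sh, msg, sizeof (msg));
  int n = vppcom_session_read (sh, buf, sizeof (buf));
  printf ("read %d bytes\n", n);

  vppcom_session_close (sh);
  vppcom_app_destroy ();
  return 0;
}

Existing POSIX applications can also be run unmodified through the VCL
LD_PRELOAD shim (libvcl_ldpreload.so) instead of coding against vppcom directly.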
Just to add: one other option we considered was to make it an external app and
transfer the packets to the application via a memif interface, using VCL on the
return path to give them back to VPP.
But that would also add overhead on the send and receive paths.
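For reference, the VPP side of that memif option would be configured with
something along these lines (socket path, ids, addresses and the resulting
interface name are placeholders; the external app would need libmemif or its
own memif implementation on the other end):

create memif socket id 1 filename /run/vpp/memif-app.sock
create interface memif id 0 socket-id 1 master
set interface state memif1/0 up
set interface ip address memif1/0 10.10.10.1/24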
Looking forward to yo
Hi Florin,
We are trying to achieve a mechanism in VPP similar to a TAP interface.
The packets coming out of the TAP interface will be directed straight to
the application. The application will receive the packets coming via the TAP
interface, process them, and send them down via th
Elias,
I have cherry-picked the patch to stable/2001 and will merge it once it
has been verified +1.
Thanks for letting us know you're moving to 20.01.
-daw-
On 3/11/2020 9:41 AM, Elias Rudberg wrote:
Hello again,
Thanks for the help with getting this fix into the 1908 branch!
Could the same fix please be added in the stable/2001 branch also?
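As a rough sketch of what such a backport involves with Gerrit (the commit id
and local branch name below are placeholders):

git fetch origin
git checkout -b backport-2001 origin/stable/2001
git cherry-pick -x <commit-id-of-the-fix>
git push origin HEAD:refs/for/stable/2001

Gerrit's web UI also offers a cherry-pick action that achieves the same thing.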
Hi Florin,
I wanted to avoid using the session API.
By "host stack" I meant the session APIs.
Is there any method to send the data to internal apps?
Regards,
Kusuma
On Wed, 11 Mar 2020, 11:56 PM Florin Coras wrote:
> Hi Kusuma,
>
> Not sure I understand the question. You want to deliver data to inter
Hi Kusuma,
Not sure I understand the question. You want to deliver data to internal
applications (supposedly using the session layer) without going through nodes
like ip-local?
If not, what do you mean by “sending through host stack”?
Regards,
Florin
> On Mar 11, 2020, at 11:16 AM, Kusuma D
Hi,
Is there any method to send packets directly to internal apps without
sending them through the host stack?
Regards,
Kusuma
Hi,
Has anyone benchmarked the impact of VPP API invocations on forwarding
performance?
Background: most calls on the VPP API run in a stop-the-world manner. That means
all graph node worker threads are stopped at a barrier, the API call is
executed, and then the workers are released from the barri
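To make the stop-the-world point concrete, a non-thread-safe handler is
effectively bracketed by the worker barrier, roughly as in the sketch below
(the function name is made up for illustration):

#include <vlib/vlib.h>

static void
do_api_work_with_barrier (vlib_main_t *vm)
{
  /* Park all worker threads at the barrier; packet processing pauses here. */
  vlib_worker_thread_barrier_sync (vm);

  /* ... safely mutate forwarding state while the workers are stopped ... */

  /* Release the workers; forwarding resumes. */
  vlib_worker_thread_barrier_release (vm);
}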
Are you running VPP with worker threads and using interrupt mode in memif?
Can you capture “sh int rx-placement” on both sides?
—
Damjan
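For reference, the commands involved look like the following (memif0/0 is a
placeholder interface name; adjust queue and worker ids as needed). Switching
the memif rx queues to polling mode on a worker is a quick way to check
whether the latency is related to interrupt mode:

vppctl show interface rx-placement
vppctl set interface rx-mode memif0/0 polling
vppctl set interface rx-placement memif0/0 queue 0 worker 1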
> On 11 Mar 2020, at 15:44, vyshakh krishnan wrote:
>
> Hi All,
>
> When we try to ping back to back connected memif interface, its taking around
> 20 mi
Hi All,
When we try to ping over back-to-back connected memif interfaces, it's taking
around 20 milliseconds:
vpp1 (10.1.1.1) <--memif--> (10.1.1.2) vpp2
DBGvpp# ping 10.1.1.2
116 bytes from 10.1.1.2: icmp_seq=1 ttl=64 time=15.1229 ms
116 bytes from 10.1.1.2: icmp_seq=2 ttl=64 time=20.1475 ms
116 bytes from 10
Coverity run failed today.
Current number of outstanding issues is 2
Newly detected: 0
Eliminated: 0
More details can be found at
https://scan.coverity.com/projects/fd-io-vpp/view_defects
Hello again,
Thanks for the help with getting this fix into the 1908 branch!
Could the same fix please be added in the stable/2001 branch also?
That would be very helpful for us: although we have been using 19.08 until
now, we are about to move to 20.01 because we need the NAT
improvements in 20.
src/vppinfra/fheap.[ch] date from 2011, and are unused as far as I can tell.
Nothing you might discover about them would surprise me. Caveat emptor.
HTH... Dave
From: vpp-dev@lists.fd.io on behalf of
xiapengli...@gmail.com
Sent: Tuesday, March 10, 2020 11:26 A
Hi Damjan,
> It is intentionally done this way as we don’t want people to assign rx
> queues to threads which are not worker threads. I will prefer that this
> stays as it is…
>
Oh, good point, I get it. Is there any recommended way to calculate
worker_id from show_thread API output?
typedef stru
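A rough answer, based on the usual VPP convention rather than anything stated
here: thread index 0 is the main thread (vpp_main) and workers occupy indices
1..N, so the worker id expected by the rx-placement API is the thread index
minus one, e.g.:

/* Assumed convention: vpp_main is thread 0, vpp_wk_0 is thread 1, and so on. */
static inline int
thread_index_to_worker_id (int thread_index)
{
  /* Returns -1 for the main thread, which is not a valid worker. */
  return thread_index > 0 ? thread_index - 1 : -1;
}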