Hi Luca,
I'm not really sure why the kernel is slow to reply to ping. It may have to do
with scheduling, but that's just guesswork.
I've never tried hping. Let me see if I understand your scenario: while running
iperf you tried to hping the stack and got no RST back? Anything
interesting in “
On Thu, May 10, 2018 at 7:28 PM, John Lo (loj) wrote:
> Hi Jon,
>
Hi John,
> This is not the right behavior.
>
I had that suspicion... :-)
> I think it is caused by reuse of a static ARP entry in the IP4 neighbor
> pool with the static bit still set. The code merely sets the dynamic bit in the
Thanks Brian/Dave. This really helped in resolving the mystery.
I was surprised because this vlib_main() latency pops up in the v18.04
release and not in v18.01 (using the -p option). Anyway, thanks.
-Nitin
On Wednesday 09 May 2018 08:49 PM, Dave Barach wrote:
+1, that’s exactly what you’re seeing… D.
Hi Prashant,
Hope you are doing well.
Regarding your question, I am not able to see the macswap plugin in the
current master branch, but I will try to explain with respect to dpdk_plugin:
With respect to the low-level device, each VPP device driver registers for:
1) an INPUT node (for Rx) via VLIB_REGISTER_NODE (this you alr
The underlying [c-code] vpp client API library supports one client connection.
It’s not conceptually difficult to support multiple connections, but it would
take a lot of typing and testing.
You can raise it as a feature request, but I wouldn’t plan on seeing it any
time soon.
D.
From: Peter
Hello,
Thank you for the pointers. It seems to be working, although with a few notes:
1) It is not possible to keep both connections open:
vpp1 = VPP(jsonfiles)
r1 = vpp1.connect('vpp1', chroot_prefix='vpp1')
print('VPP1 version', vpp1.api.show_version().version.decode().rstrip('\x00'))
vpp2 = VPP(jsonfiles)
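A minimal sketch (untested) of the workaround this implies, given Dave's note
that the underlying C client library supports one connection at a time:
disconnect from one instance before connecting to the next. The instance
names, chroot prefixes ('vpp1', 'vpp2') and JSON file path below are
illustrative assumptions, not anything from Peter's setup:

import glob
from vpp_papi import VPP

# Assumed default install location of the API definitions; adjust as needed.
jsonfiles = glob.glob('/usr/share/vpp/api/*.api.json')

def show_version(instance):
    # One connection at a time: connect, query, then disconnect
    # before talking to the next instance.
    vpp = VPP(jsonfiles)
    vpp.connect(instance, chroot_prefix=instance)
    try:
        return vpp.api.show_version().version.decode().rstrip('\x00')
    finally:
        vpp.disconnect()

print('VPP1 version', show_version('vpp1'))
print('VPP2 version', show_version('vpp2'))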
Florin,
A few more comments about latency.
Some numbers (in ms) in the table below. These are ping and iperf3 running
concurrently; in the case of VPP it is vppctl ping.

        Kernel w/ load   Kernel w/o load   VPP w/ load   VPP w/o load
Min.    0.1920           0.0610            0.0573        0.03480
1s
Peter,
> …however, are there any other options to fully control 2+ instances of VPP via
> the API (not vppctl)? The Python API, for example [1].
Ole’s answer to the same question:
> r = vpp.connect('vpp1', chroot_prefix='name of shared address segment')
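For what it's worth, the 'name of shared address segment' there corresponds to
the api-segment prefix each VPP instance is started with, so per-instance
control looks roughly like the sketch below (the 'vpp1' prefix, client name,
and JSON path are illustrative assumptions):

# In each instance's startup.conf, give the API segment a unique prefix,
# e.g. for the first instance:
#   api-segment { prefix vpp1 }
import glob
from vpp_papi import VPP

jsonfiles = glob.glob('/usr/share/vpp/api/*.api.json')  # assumed default path

vpp = VPP(jsonfiles)
vpp.connect('client1', chroot_prefix='vpp1')  # prefix must match startup.conf
print(vpp.api.show_version().version.decode().rstrip('\x00'))
vpp.disconnect()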
Cheers,
Justin