Hi Amaresh,
Since I was the author of the change I should probably chime in :)
The change that you mention was aimed at simplifying vppctl and removing its
dependency on the VPP libraries - there are a few use cases where vppctl
runs in a different container, so not having to install extra VPP libraries
there is a plus.
Hi Ben,
Thank you for your response.
Actually, in one of the use cases I was trying to execute a few VPP commands
from the host using vppctl, and did not really want to go into the VPP terminal
with telnet to execute the commands. I see it was working earlier and the
check below changed the behavior. I am not sure
The memory traces track the memory allocated per backtrace, i.e. the function
call chain that leads to the allocation.
Traceback is the backtrace, similar to what you get in a debugger.
Bytes is the total memory currently allocated by this specific backtrace.
Count is the number of active allocations.
Below is the logic I am now using to solve this problem.
Run a Python script to convert all the CPU core ranges into a comma-separated
list. For example, take the below startup.conf:
apiVersion: v1
kind: ConfigMap
metadata:
  name: vpp-startup-conf
data:
  startup.conf: |
    unix
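The conversion script itself is not shown in the thread; below is a minimal sketch of what such a script could look like, assuming the input is a core spec string such as "0,2-4" taken from the startup.conf corelist (the function name and input format are my assumptions, not from the original post):

```python
def expand_cores(spec: str) -> str:
    """Expand a CPU core spec like "0,2-4" into "0,2,3,4".

    Accepts comma-separated entries where each entry is either a
    single core number or an inclusive range "lo-hi".
    """
    cores = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cores.extend(range(int(lo), int(hi) + 1))
        else:
            cores.append(int(part))
    return ",".join(str(c) for c in cores)


print(expand_cores("0,2-4"))  # -> 0,2,3,4
```

The expanded string can then be substituted back into the generated startup.conf before the ConfigMap is applied.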
>
> That's a broad question... Which kind of info are you looking for?
>
I think you kind of answered one of the gaps I had by referencing Linux
macvlans and how the packets are forwarded. However, when I run `ip -br a` I
don't see another interface created for the RDMA (maybe I shouldn't be seeing
one).
Not sure if that is what you are looking for, but in that case, I just use
plain telnet to connect to VPP, as "telnet localhost 5002".
No need for vppctl in that case.
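For the telnet approach to work, VPP has to be configured to listen on a TCP port; a sketch of the relevant startup.conf stanza (the port 5002 here just matches the telnet example above):

```
unix {
  cli-listen localhost:5002
}
```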
ben
> -----Original Message-----
> From: vpp-dev@lists.fd.io On Behalf Of Amaresh Parida
> Sent: Saturday, July 16, 2022 4:32
> Thanks for the info. I am not too familiar with rdma interfaces, well at
> least the way vpp is utilizing them. It seems it's polling the interfaces
> just like dpdk.
Polling is the default for both dpdk and rdma interfaces. You can set them to
interrupt mode or adaptive mode (automatically switching between interrupt and
polling depending on the load).
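The rx-mode can be changed per interface from the VPP CLI; a sketch, where the interface name is just an example and will differ on your system:

```
vppctl set interface rx-mode eth0 adaptive
```

The same command accepts `polling` and `interrupt` in place of `adaptive`.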