Hello,

From what I see in the "show run" output, you are not processing any traffic. Only a few packets got through, and the vector size for each node is 1 (as only a handful of packets were processed, arriving one by one at a low rate). In that case it is expected that the cost per packet (Clocks) will be high, as VPP cannot amortize the per-node cost over several packets.

The CPU utilization (100% by each worker) is expected too, as you are running in poll-mode: the memif driver uses all of the CPU watching for new packets.

Check the same stats with traffic; I'd expect the cost per packet (Clocks) to decrease sharply (and the vector size to increase). If you want to avoid burning cycles for nothing in the absence of traffic, I'd suggest you try the adaptive mode:

~# vppctl set int rx-mode <interface> adaptive
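As a concrete sketch ("set int" is the usual abbreviation of "set interface", and the interface name memif0/0 here is only a placeholder for your own):

~# vppctl set interface rx-mode memif0/0 adaptive
~# vppctl clear run
(generate traffic for a while)
~# vppctl show run

"clear run" resets the runtime counters, so the following "show run" reflects only the period under load. With a full vector of 256 packets, the fixed per-node overhead is spread over 256 packets instead of being charged to a single one, which is why Clocks should drop sharply.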
Best
ben

> -----Original Message-----
> From: vpp-dev@lists.fd.io <vpp-dev@lists.fd.io> On Behalf Of mojtaba.eshghi
> Sent: Thursday, August 1, 2019 10:49
> To: vpp-dev@lists.fd.io
> Subject: [vpp-dev] VPP (running inside LXC containers) showing very high clock-cycle numbers #vpp
>
> Hi guys,
>
> I am running an experiment using VPP inside LXC containers. The Clocks column in the output of VPP's "show run" is very high! When I measure the clock cycles a node consumes, it confirms the high clock-cycle usage. My current setup is a server with 12 isolated cores (1-12), and the VPP instances inside the containers are pinned to the isolated cores, meaning that no other process can run on those cores. The rx-mode of all interfaces is set to polling, and a sufficient number of worker (isolated) cores is allocated to each VPP instance.
>
> * I am using a vpp v19.08-rc0~554-g1671d3b build on my machine. I tried the same thing with VPP v18 and also installed different VPP versions via apt.
> * The CPU scaling governor is set to "performance". I also tried the whole experiment with CPU scaling disabled and a constant CPU frequency.
>
> The output of the "show run" command is:
>
> [screenshot not included]
>
> The output of htop is:
>
> [screenshot not included]
>
> As you see, the isolated cores 2-12 are pinned to the VPP threads. Can anyone tell me why I see such big numbers? Any hint may be useful.
>
> Thanks!
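(For reference, pinning like the one described above is typically configured in VPP's startup.conf; a minimal sketch, with the core numbers assumed from the description in the question:

cpu {
  main-core 1
  corelist-workers 2-12
}

This reserves core 1 for the main thread and cores 2-12 for the workers; combined with kernel-level core isolation, no other process runs on those cores.)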