One more thing… we are running in a K8s environment. We see this running VPP in a docker container on baremetal, and also in a pod running in a VM image.
And the interfaces are SR-IOV, not sure if that is implied.

Sent from Mail for Windows 10

From: Jeremy Brown
Sent: Friday, July 10, 2020 1:58 PM
To: Damjan Marion
Cc: vpp-dev@lists.fd.io; Dany Gregoire
Subject: RE: [vpp-dev] Vectors/node and packet size

Absolutely… please see inline

From: Damjan Marion
Sent: Friday, July 10, 2020 1:11 PM
Subject: Re: [vpp-dev] Vectors/node and packet size

Can you provide a bit more details? pps rate? nic type? config?

[Jeremy]
1. This happens at any PPS rate; we can see this from 100k to 5M pps.
2. We see this on Niantic, Fortville, and Mellanox NICs.
3. What type of config are you looking for? If you are referring to IO scaling, we see this with and without multi-threading, with and without multi-queue, and with any number of IO cores or workers.

also, as Dave suggested, please clear counters after traffic is started, so stats don't include startup time…

[Jeremy] Our test scripts initially restart VPP, so all counters are zeroed. All tests are identical runs, so any dead time should be similar for each run. We can see this looking at VTune regardless: not only do we see the vectors/node difference, but we have also implemented CPU monitoring (via clock cycles) for the DPDK threads, and we can see the difference in utilization within VPP just by altering the byte size. If we increase the byte size, the CPU utilization increases. This is independent of the VPP stats, and corroborates the issue. However, profiling the running VPP with VTune does not show any difference in hotspots, meaning that regardless of the byte size, the same work at the same scale seems to be taking place. It's a little strange.

Thanks,
Damjan

On 29 Jun 2020, at 18:22, Jeremy Brown via lists.fd.io <bjeremy32=yahoo....@lists.fd.io> wrote:

Greetings,

This is my first post to the forum, so if this is not the right place for it please let me know. I had a question on VPP performance.
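For context on why a low vectors/call reading matters: VPP's efficiency comes from batching, since each node call pays a fixed dispatch/i-cache cost that gets amortized over every packet in the vector. A minimal sketch of that amortization (the two cost constants below are illustrative assumptions, not measured VPP numbers):

```python
# Sketch: per-packet cost vs. vectors/call in a vector packet processor.
# FIXED_OVERHEAD and PER_PACKET are hypothetical clock counts chosen only
# to illustrate the shape of the curve, not taken from any real profile.

FIXED_OVERHEAD = 200   # clocks paid once per node call (assumed)
PER_PACKET = 30        # clocks of real work per packet (assumed)

def clocks_per_packet(vectors_per_call: float) -> float:
    """Average cost per packet when a node handles this many packets per call."""
    return PER_PACKET + FIXED_OVERHEAD / vectors_per_call

# At ~80 vectors/call the per-call overhead is mostly amortized;
# at ~2 vectors/call it dominates the per-packet cost.
print(clocks_per_packet(80))  # 32.5
print(clocks_per_packet(2))   # 130.0
```

This is consistent with the runtime output in the thread, where the per-node Clocks figures are markedly higher in the low-vectors/call run.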
We are running two testcases. We limit VPP to a single thread and a single core in order to reduce as many variables as we can. The only thing that changes between the two testcases is the size of the packet arriving at VPP. Using a 64-byte packet, we see a vectors/node of ~80. Simply changing the packet size to 1400 bytes, the same vectors/node falls to ~2. This is regardless of pps; there seems to be a non-linear decrease in vectors/node with increasing packet size. I was wondering if anyone had noticed similar behavior.

64-byte packets

Thread 1 vpp_wk_0 (lcore 2)
Time 98.9, average vectors/node 80.35, last 128 main loops 0.00 per node 0.00
  vector rates in 1.2643e5, out 1.2643e5, drop 0.0000e0, punt 2.0228e-2
             Name                 State       Calls     Vectors  Suspends  Clocks  Vectors/Call
VirtualFuncEthernet88/10/4-out   active       90915     6249981      0     1.06e1     68.75
VirtualFuncEthernet88/10/4-tx    active       90915     6249981      0     4.06e1     68.75
VirtualFuncEthernet88/11/5-out   active       73270     6249981      0     9.27e0     85.30
VirtualFuncEthernet88/11/5-tx    active       73270     6249981      0     4.05e1     85.30
arp-input                        active           2           2      0     3.51e4      1.00
dpdk-input                       polling 1166129337    12499964      0     1.38e4       .01
error-punt                       active           2           2      0     5.56e3      1.00
ethernet-input                   active           2           2      0     1.47e4      1.00
gtpu4-encap                      active       90914     6249980      0     1.01e2     68.75
gtpu4-input                      active       73270     6249981      0     7.29e1     85.30
interface-output                 active           2           2      0     2.20e3      1.00
ip4-input-no-checksum            active      145570    12499962      0     2.22e1     85.87
ip4-load-balance                 active       90914     6249980      0     1.77e1     68.75
ip4-local                        active       73272     6249983      0     2.45e1     85.29
ip4-lookup                       active      218840    18749943      0     3.79e1     85.68
ip4-punt                         active           2           2      0     1.27e3      1.00
ip4-rewrite                      active      236482    18749940      0     2.75e1     79.29
ip4-udp-lookup                   active       73270     6249981      0     2.44e1     85.30

1400-byte packets

Thread 1 vpp_wk_0 (lcore 2)
Time 102.1, average vectors/node 2.37, last 128 main loops 0.00 per node 0.00
  vector rates in 1.1841e5, out 1.1438e5, drop 4.0334e3, punt 1.9588e-2
             Name                 State       Calls     Vectors  Suspends  Clocks  Vectors/Call
VirtualFuncEthernet88/10/4-out   active     2815250     5838981      0     8.18e1      2.07
VirtualFuncEthernet88/10/4-tx    active     2815250     5838981      0     1.25e2      2.07
VirtualFuncEthernet88/11/5-out   active     2765634     5839804      0     8.42e1      2.11
VirtualFuncEthernet88/11/5-tx    active     2765634     5839804      0     2.32e2      2.11
arp-input                        active           9         825      0     2.25e3     91.67
dpdk-input                       polling 1136982388    12089787      0     1.44e4       .01
error-drop                       active      397116      411823      0     1.37e2      1.04
error-punt                       active           2           2      0     5.58e3      1.00
ethernet-input                   active           9         825      0     7.42e1     91.67
gtpu4-encap                      active     2815249     5838980      0     2.21e2      2.07
gtpu4-input                      active     3161920     6249981      0     2.10e2      1.98
interface-output                 active           2           2      0     2.42e3      1.00
ip4-glean                        active      397109      411000      0     1.58e2      1.03
ip4-input-no-checksum            active     3733176    12088962      0     1.09e2      3.24
ip4-load-balance                 active     2815249     5838980      0     1.07e2      2.07
ip4-local                        active     3161922     6249983      0     1.12e2      1.98
ip4-lookup                       active     6895096    18338943      0     1.52e2      2.66
ip4-punt                         active           2           2      0     2.03e3      1.00
ip4-rewrite                      active     6151314    17516940      0     9.56e1      2.85
ip4-udp-lookup                   active     3161920     6249981      0     8.69e1      1.98
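The Vectors/Call column in the output above is simply the Vectors counter divided by the Calls counter for each node, so the reported ratios can be cross-checked directly from the quoted counters. A quick sketch using the gtpu4-encap node from both runs:

```python
# Recompute Vectors/Call from the Calls and Vectors counters quoted in the
# two "show runtime" outputs above.
runs = {
    # node/run: (calls, vectors), copied from the tables
    "gtpu4-encap @ 64B":   (90914, 6249980),
    "gtpu4-encap @ 1400B": (2815249, 5838980),
}

for label, (calls, vectors) in runs.items():
    print(f"{label}: {vectors / calls:.2f} vectors/call")
# gtpu4-encap @ 64B: 68.75 vectors/call
# gtpu4-encap @ 1400B: 2.07 vectors/call
```

Both results match the Vectors/Call column in the corresponding tables, confirming the counters and the reported ratios are internally consistent.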