Assuming yes, that it should be run while the test is in progress, here are the results. I reinstalled VPP on the DUT today because perf was not available for my kernel (4.4.0-64-generic), and performance is now up from 1.7 Mpps to 2.1 Mpps.
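As an aside, on Ubuntu perf is shipped per kernel version in the linux-tools packages, which is why a kernel without a matching package leaves perf unavailable. A quick sketch of how to derive the package name for the running kernel (the kernel string below is the one from this thread and is only an example; the snippet prints the suggested install command rather than running it):

```python
# Sketch: build the apt install command for the per-kernel perf binary.
# On Ubuntu, perf lives in linux-tools-<kernel release>; the release
# string here is taken from this thread and is illustrative only.
def perf_install_cmd(kernel_release: str) -> str:
    """Return the apt command that installs perf for a given kernel."""
    return f"sudo apt-get install linux-tools-{kernel_release} linux-tools-common"

print(perf_install_cmd("4.4.0-64-generic"))
# In practice you would pass the output of `uname -r` instead.
```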
tc01-64B-1t1c-eth-l2bdbasemaclrn-eth-1vhost-1vm-ndrdisc :: [Cfg] D... | PASS |
FINAL_RATE: 2140272.45312 pps (2x 1070136.22656 pps)
FINAL_BANDWIDTH: 1.4382630885 Gbps (untagged)
LATENCY usec [min/avg/max]
  LAT_100%NDR: ['2/31/336', '2/30/327']
  LAT_50%NDR:  ['2/16/278', '2/15/276']
  LAT_10%NDR:  ['2/10/518', '2/10/496']

root@hp-att-2:/home/testuser/csit# sudo perf stat -e 'kvm:*' -a sleep 1h
^Csleep: Interrupt
^C
 Performance counter stats for 'system wide':

     1,411,858      kvm:kvm_entry                     (100.00%)
             0      kvm:kvm_hypercall                 (100.00%)
             0      kvm:kvm_hv_hypercall              (100.00%)
       209,439      kvm:kvm_pio                       (100.00%)
             0      kvm:kvm_fast_mmio                 (100.00%)
        24,200      kvm:kvm_cpuid                     (100.00%)
         2,934      kvm:kvm_apic                      (100.00%)
     1,411,394      kvm:kvm_exit                      (100.00%)
            84      kvm:kvm_inj_virq                  (100.00%)
             6      kvm:kvm_inj_exception             (100.00%)
           875      kvm:kvm_page_fault                (100.00%)
       503,938      kvm:kvm_msr                       (100.00%)
         7,841      kvm:kvm_cr                        (100.00%)
        75,645      kvm:kvm_pic_set_irq               (100.00%)
         1,262      kvm:kvm_apic_ipi                  (100.00%)
       501,412      kvm:kvm_apic_accept_irq           (100.00%)
           231      kvm:kvm_eoi                       (100.00%)
             0      kvm:kvm_pv_eoi                    (100.00%)
             0      kvm:kvm_nested_vmrun              (100.00%)
             0      kvm:kvm_nested_intercepts         (100.00%)
             0      kvm:kvm_nested_vmexit             (100.00%)
             0      kvm:kvm_nested_vmexit_inject      (100.00%)
             0      kvm:kvm_nested_intr_vmexit        (100.00%)
             0      kvm:kvm_invlpga                   (100.00%)
             0      kvm:kvm_skinit                    (100.00%)
       209,593      kvm:kvm_emulate_insn              (100.00%)
        36,560      kvm:vcpu_match_mmio               (100.00%)
             0      kvm:kvm_write_tsc_offset          (100.00%)
             0      kvm:kvm_update_master_clock       (100.00%)
             0      kvm:kvm_track_tsc                 (100.00%)
             0      kvm:kvm_pml_full                  (100.00%)
         7,113      kvm:kvm_ple_window                (100.00%)
             0      kvm:kvm_pvclock_update            (100.00%)
       498,826      kvm:kvm_wait_lapic_expire         (100.00%)
             0      kvm:kvm_enter_smm                 (100.00%)
             0      kvm:kvm_pi_irte_update            (100.00%)
       225,871      kvm:kvm_userspace_exit            (100.00%)
         4,539      kvm:kvm_vcpu_wakeup               (100.00%)
        75,652      kvm:kvm_set_irq                   (100.00%)
        75,645      kvm:kvm_ioapic_set_irq            (100.00%)
             0      kvm:kvm_ioapic_delayed_eoi_inj    (100.00%)
             0      kvm:kvm_msi_set_irq               (100.00%)
           276      kvm:kvm_ack_irq                   (100.00%)
        64,174      kvm:kvm_mmio                      (100.00%)
       458,785      kvm:kvm_fpu                       (100.00%)
             0      kvm:kvm_age_page                  (100.00%)
             0      kvm:kvm_try_async_get_page        (100.00%)
             0      kvm:kvm_async_pf_doublefault      (100.00%)
             0      kvm:kvm_async_pf_not_present      (100.00%)
             0      kvm:kvm_async_pf_ready            (100.00%)
             0      kvm:kvm_async_pf_completed        (100.00%)
           885      kvm:kvm_halt_poll_ns

   272.194267416 seconds time elapsed

On 3/6/17, 12:50 PM, "Sean Chandler (sechandl)" <secha...@cisco.com> wrote:

    Hi Ray,

    The VM name is csit-nested-1.6.img, but "nested VMs" to me usually means a VM in a VM. In this case a single VM is running testpmd only. The command below should be run while the test is running, I assume?

    -s

    On 3/6/17, 5:51 AM, "Kinsella, Ray" <ray.kinse...@intel.com> wrote:

        Hi Sean,

        Since I guess all this is running a VM (right?), it would be good to understand what the KVM stat dump looks like, for the HP+Dell compared to UCS/SuperMicro.

        sudo ./perf stat -e 'kvm:*' -a sleep 1h

        Are we using nested virtualization?

        Ray K

        On 03/03/2017 15:14, Sean Chandler (sechandl) wrote:
        > Hi Folks,
        >
        > For several months we have been fighting performance issues on both HP and Dell systems. The scenario is l2bd and one vhost, so a traffic generator and a single DUT running VPP.
        >
        > Various CPU jitter mitigation methods have been tried and integrated into CSIT, but the performance for one core, one thread, one queue is half of what automated testing has shown on UCS or SuperMicro.
        >
        > Who would be the best person or persons to help investigate the situation? What info can I provide to help debug? I've been working with the CSIT folks, but we have run dry of ideas.
        >
        > Thanks in advance!
        > -s
        > _______________________________________________
        > vpp-dev mailing list
        > vpp-dev@lists.fd.io
        > https://lists.fd.io/mailman/listinfo/vpp-dev
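For comparison across the HP/Dell and UCS/SuperMicro boxes, the raw counts in a dump like the one above are easier to reason about as per-second rates. A quick sketch (the counts and elapsed time are copied from the perf output earlier in this thread; the event selection is just the largest counters and is illustrative):

```python
# Sketch: convert the largest counters from the perf stat dump above
# into per-second rates. All values are copied from the output in this
# thread; which events to include is an illustrative choice only.
elapsed_s = 272.194267416  # "seconds time elapsed" reported by perf

counts = {
    "kvm:kvm_entry": 1_411_858,
    "kvm:kvm_exit": 1_411_394,
    "kvm:kvm_msr": 503_938,
    "kvm:kvm_apic_accept_irq": 501_412,
    "kvm:kvm_wait_lapic_expire": 498_826,
    "kvm:kvm_fpu": 458_785,
    "kvm:kvm_userspace_exit": 225_871,
}

for event, count in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{event:28s} {count / elapsed_s:10.0f} / sec")
```

At roughly 272 s elapsed, kvm_exit works out to about 5,200 exits per second; running the same arithmetic on a dump from the UCS/SuperMicro systems would make any difference in exit behavior between the two setups directly comparable.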