On Fri, Jul 14, 2017 at 12:34 AM, Nagarajan, Padhu (HPE Storage) <pa...@hpe.com> wrote:
> Thanks Stefan. Couldn't get to this earlier. Did another run and took a diff
> of /proc/interrupts before and after the run. It shows all the interrupts for
> 'virtio7-req.0' going to CPU1. I guess that explains the "CPU1/KVM" vcpu
> utilization on the host.
>
>  34:        147     666085          0          0   PCI-MSI-edge   virtio7-req.0
>
> The only remaining question is the high CPU utilization of the vCPU threads
> for this workload. Even when I run a light fio workload (queue depth of 1,
> which gives 8K IOPS), the vCPU threads are close to 100% utilization. Why is
> it high, and does it have an impact on guest code that could be executing on
> the same CPU?
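(For reference, a minimal sketch of taking such a before/after diff; the file
names are placeholders, and the middle step stands in for the actual fio run:)

    # In the guest: snapshot interrupt counters around the workload
    cat /proc/interrupts > /tmp/irq.before
    # ... run the fio workload ...
    cat /proc/interrupts > /tmp/irq.after
    diff /tmp/irq.before /tmp/irq.after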
100% is high for 8K IOPS. I wonder what "perf top" shows on the host while
the fio benchmark is running inside the guest.

The cross-CPU interrupts you saw suggest you can get better performance by
pinning vcpus and iothreads to host CPUs, so that physical storage interrupts
are handled on the same host CPU as the vcpu and iothread that consume them.
See the libvirt documentation for pinning vcpus and iothreads.
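For example (a sketch; the QEMU binary name varies by distro, and pidof
assumes a single QEMU process is running):

    # On the host, while fio runs in the guest:
    perf top -p "$(pidof qemu-system-x86_64)"
    # Or profile a single vcpu thread by TID (find TIDs with: top -H -p <pid>):
    # perf top -t <tid>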
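A minimal sketch of the pinning with virsh (the domain name, iothread id, and
CPU numbers are placeholders; the persistent equivalents are <vcpupin> and
<iothreadpin> under <cputune> in the domain XML):

    # Pin vcpu 0 and iothread 1 of domain "guest1" to host CPU 1
    # (assumes the domain was defined with at least one iothread):
    virsh vcpupin guest1 0 1
    virsh iothreadpin guest1 1 1
    # Optionally steer the host storage controller's IRQ to the same CPU.
    # STORAGE_IRQ is a placeholder for the host-side IRQ number; 2 is the
    # affinity bitmask for CPU1:
    echo 2 > "/proc/irq/$STORAGE_IRQ/smp_affinity"

Stefan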