On Wed, Aug 27, 2025 at 12:24:39PM +0100, Daniel P. Berrangé wrote:
[...]
> > +static void *mshv_vcpu_thread(void *arg)
> > +{
> > +    CPUState *cpu = arg;
> > +    int ret;
> > +
> > +    rcu_register_thread();
> > +
> > +    bql_lock();
> > +    qemu_thread_get_self(cpu->thread);
> > +    cpu->thread_id = qemu_get_thread_id();
>
> So every MSHV vCPU has a corresponding Linux thread, similar
> to the model with KVM. In libvirt we rely on the vCPU thread
> being controllable with all the normal Linux process related
> APIs. For example, setting thread CPU affinity, setting NUMA
> memory policy, setting scheduler priorities, putting threads
> into cgroups and applying a wide variety of cgroup controls.
>
> Will there be any significant "gotchas" with the threads for
> MSHV vCPUs, that would mean the above libvirt controls would
> either raise errors, or silently not have any effect ?
>
It depends on the scheduling model of the host. MSHV supports two
scheduling models: hypervisor-based and root partition. "Root partition"
is the term we use for the host VM -- think of it like the Dom0 VM in
Xen.

In the hypervisor-based scheduling model, the VCPUs are scheduled by the
hypervisor. The root partition merely tells the hypervisor "this VCPU is
ready to run", and the hypervisor decides when and where to actually run
it. In this model, the VCPU threads appear blocked in the root while the
hypervisor runs the VCPUs. Libvirt controls applied to those threads
won't fail, but they have no effect.

In the root partition scheduling model, the root (Linux) decides where
and when to run the VCPUs. Everything you mentioned should work as
expected. For the upcoming project, we are going to use the root
scheduling model.

Thanks,
Wei

P.S. In the hypervisor-based scheduling model, the hypervisor does allow
us to set CPU affinity for VCPUs, or to group them (similar to cgroups,
but not the same), via hypercalls. We've been thinking about mapping
those into libvirt controls, but haven't made much progress on that
front. It deserves its own discussion.