On 21/02/17 17:20, Jan Beulich wrote:
>
>>>> The final 8 bits are the initial legacy APIC ID.  For HVM guests, this was
>>>> overridden to vcpu_id * 2.  The same logic is now applied to PV guests, so
>>>> guests don't observe a constant number on all vcpus via their emulated or
>>>> faulted view.
>>> They won't be the same everywhere, but every 128th CPU will
>>> share values. I'm therefore not sure it wouldn't be better to hand
>>> out zero or all ones here.
>> There is no case where 128 cpus work sensibly under Xen ATM.
> For HVM you mean. I'm sure I've seen > 128 vCPU PV guests
> (namely Dom0-s).
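For context, the "every 128th CPU" figure falls straight out of the field
width: the legacy APIC ID is 8 bits, so with the vcpu_id * 2 scheme the
value wraps once vcpu_id reaches 128.  A minimal standalone sketch of that
arithmetic (illustrative only, not Xen code):

#include <stdio.h>
#include <stdint.h>

/* Only the low 8 bits survive in the legacy APIC ID field. */
static uint8_t legacy_apic_id(unsigned int vcpu_id)
{
    return (uint8_t)(vcpu_id * 2);
}

int main(void)
{
    unsigned int vcpu_ids[] = { 0, 1, 127, 128, 129, 255, 256 };

    for (unsigned int i = 0; i < sizeof(vcpu_ids) / sizeof(vcpu_ids[0]); i++)
        printf("vcpu %3u -> legacy APIC ID %3u\n",
               vcpu_ids[i], (unsigned int)legacy_apic_id(vcpu_ids[i]));

    /* vcpu 0 and vcpu 128 both report ID 0, vcpu 1 and 129 both report 2, ... */
    return 0;
}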
You can physically create PV domains with up to 8192 vcpus.  I tried this
once.  The NMI watchdog (even set to 10s) is unforgiving of some of the
for_each_vcpu() loops during domain destruction.

I can also still create workloads in a 64-vcpu HVM guest which will cause a
5 second watchdog timeout, which is why XenServer's upper supported vcpu
limit is still 32.

~Andrew