Hm, interesting. So on a 64-bit PV Lucid guest, I could use "xm vcpu-
set 1" to change the number of vcpus from 2 to 1. This produced some
worrying lines in dmesg, though:

[   48.208923] CPU0 attaching NULL sched-domain.
[   48.208940] CPU1 attaching NULL sched-domain.
[   48.270196] CPU0 attaching NULL sched-domain.
[   48.271423] Cannot set affinity for irq 6
[   48.271433] Broke affinity for irq 7
[   48.271439] Broke affinity for irq 8
[   48.271444] Broke affinity for irq 9
[   48.271450] Broke affinity for irq 10
[   48.271455] Broke affinity for irq 11
[   48.370172] SMP alternatives: switching to UP code

But at least the guest kept running. An "xm vcpu-set 2" afterwards had
no visible effect on the guest: the vcpu count remained 1, but the guest
did not hang either. I was still able to online the second cpu from
within the guest. The kernel used was 2.6.32-41-server. That said, the
recommended non-ec2 kernel flavours for running in PV guests would be
server for 64-bit and generic-pae for 32-bit (mostly because those have
the drivers for virtual environments built in instead of as modules).
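Onlining the second cpu from inside the guest goes through the standard
Linux sysfs CPU hotplug interface. A minimal sketch of what that looks
like (the cpu1 path is just an example; writing to the "online" file
needs root, and the boot CPU usually has no such file):

```shell
# List each CPU the guest kernel knows about and its hotplug state.
for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
  if [ -f "$cpu/online" ]; then
    echo "$cpu: online=$(cat "$cpu/online")"
  else
    echo "$cpu: boot CPU (always online)"
  fi
done

# To bring a hot-added vcpu online from inside the guest (as root):
#   echo 1 > /sys/devices/system/cpu/cpu1/online
```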

@Frediano, would using the server flavour make a difference for you? And
when does the hang occur, when adding or when removing a vcpu? And if
you run tail -f on the syslog while reproducing it, does that show any
other hints?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1007002

Title:
  Xen VCPUs hotplug does not work

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1007002/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs