Hello,

We have ACS 4.2.1 running on XenServer 6.2.0, with a zone containing a pool of
3 hosts. Yesterday one host crashed and its OS was corrupted. I think we have
lost that host and will have to reinstall it, but the problem is that several
VMs and VRs were running on it. The failing host was the pool master, so when
it failed the whole pool was disconnected. We changed the master role and
recovered pool management from XenCenter and ACS; once we did that, one VM
moved to the remaining hosts, but all the VRs and one VM remained stuck on the
failing host.

In the DB we still see the VRs and the VM as running, even though the host is
marked as down and in maintenance. We changed the VR state to 'Stopped' and
changed the "last host ID" and "host ID" to a working host. Once we did that,
we were able to destroy the VRs and recreate them successfully; they came up
on a working host. If we changed only the state, the VRs could not be
destroyed. With this workaround we resolved about 70% of the outage, BUT one
VM remains stuck on the failed host: we changed its state and last host ID,
but once we press start it "runs" on the failing host, and the VM appears as
running even though it is not. Any suggestion on how to force the VM to start
on a different host and remove it from the failing host? This is a critical
VM, and we hope somebody can give us a hand.
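For reference, the manual DB changes described above were roughly the
following (a sketch only, against the `cloud` database: `vm_instance` and its
`state`/`host_id`/`last_host_id` columns are from the 4.2.1 schema as we
understand it, and `<VM_ID>`/`<WORKING_HOST_ID>` are placeholders; please
verify against your schema and back up the DB before touching it):

```sql
-- Sketch of the manual fix described above (cloud database, ACS 4.2.1).
-- <VM_ID> and <WORKING_HOST_ID> are placeholders; take a DB backup first.
UPDATE vm_instance
   SET state        = 'Stopped',
       host_id      = <WORKING_HOST_ID>,
       last_host_id = <WORKING_HOST_ID>
 WHERE id = <VM_ID>;
```

Stopping the management server before the update and restarting it afterwards
is probably safer, so cached VM state is re-read from the DB.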

Regards / Cordialmente,

Jaime O. Rojas S.
Technology Manager
[email protected]
Mobile: +57 301-3382382
Office: +57-1-8766767 x215
