Hello everyone,

We ran into an unexpected problem on our infrastructure while
upgrading CloudStack from 2.2.13 to 4.1, which forced us to stop
CloudStack before we could finalize the upgrade. We still have the
vrouters to update, but the production platform is now a bit messy.
We are trying to figure out how to finish the upgrade of the vrouters
and restart CloudStack, given that the VMware cluster has lived its
own life in the meantime and some VMs are now in a different state
than when CloudStack was stopped.

Here is the procedure we are testing in a preproduction environment;
any advice or comment would be greatly appreciated, as this is a risky
operation to perform on our production environment:

* Removing all vRouters from vCenter
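This could be scripted, for example with the govc CLI (assuming it is
available and that the routers still follow the default r-<id>-VM
naming; the id 42 below is only an illustration):
$ govc find / -type m -name 'r-*-VM'
$ govc vm.destroy r-42-VM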

* Updating vRouter states to 'Stopped' in the CloudStack database:
mysql> update vm_instance set state = 'Stopped' where state <>
'Expunging' and type = 'DomainRouter';

* Getting the list of all VMs:
mysql> select id, instance_name, state from vm_instance where state <>
'Expunging' and type = 'User';

* Updating the vm_instance table to set the correct state and host of
each VM. Example of SQL query:
mysql> update vm_instance set state = 'Running', host_id = 17,
last_host_id = 17 where id = 43;
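To find the right host_id for each VM, we match the ESXi host name
seen in vCenter against the host table (assuming the standard schema,
where hypervisor hosts have type 'Routing'):
mysql> select id, name from host where type = 'Routing' and removed is null;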

* Restarting CloudStack
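Example, assuming the standard 4.1 packages and service name:
$ service cloudstack-management restart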

* Restarting all networks with the 'cleanup' option via the CloudStack
API. Example of HTTP request:
$ curl \
http://cloudstack:8096/?command=restartNetwork\&cleanup=true\&response=json\&id=227
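To enumerate the network ids to restart, we pull them from the same
unauthenticated integration port (assuming it is enabled, as above):
$ curl http://cloudstack:8096/?command=listNetworks\&response=json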

This procedure seems to be working, except for one detail: we are
using more management IP addresses than expected, as if they were not
de-allocated. Could this be a side effect of the procedure?
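For reference, this is the query we use to see which management-range
IPs are still marked as allocated; the table and column names
(op_dc_ip_address_alloc, taken) come from our reading of the schema
and should be checked against your version:
mysql> select pod_id, ip_address, taken from op_dc_ip_address_alloc where taken is not null;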

If you see any problem with this procedure, or any problem it might
lead to, your help would be appreciated.

Best regards,
Guillaume
