Yes, I already tried that and it works. But I was thinking of a scheme where all nodes have the network and scheduler services installed, and I can switch them via a custom heartbeat resource. Everything worked except for that. Maybe I can update the controller hostname on switch.
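Something like this is what I have in mind for the switch hook, just a sketch, assuming a MySQL nova database; OLD_HOST and NEW_HOST are placeholders, and I'm not sure whether tables other than networks need the rewrite too:

    -- run on the node taking over, as part of the heartbeat failover
    -- action; rewrites the host reference that nova-compute uses to
    -- build the "network.<host>" RPC topic
    UPDATE networks SET host = 'NEW_HOST' WHERE host = 'OLD_HOST';

    -- possibly also needed (my assumption, not confirmed): the
    -- services table carries a host column as well
    UPDATE services SET host = 'NEW_HOST' WHERE host = 'OLD_HOST';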
Will using a VIP-like DNS entry and pointing nova.conf at it change that behavior, or will the compute nodes still resolve the controller via its hostname? I guess not.

Regards

On Jul 22, 2011 7:52 PM, "Vishvananda Ishaya" <vishvana...@gmail.com> wrote:
> Generally the easiest is to give the new machine the same hostname.
>
> You can also update the references to the host in the db:
>
> update networks set host=newhostname where host=oldhostname
>
> Vish
>
> On Jul 22, 2011, at 12:39 PM, Leandro Reox wrote:
>
>> Hi all,
>>
>> I've been working on controller/network node HA for a while now, but at this point I'm hitting an issue that maybe someone has faced before.
>> When I switch the controller to a "spare" one, the compute nodes keep looking for "network.$oldcontrollerhostname". Is there a place where the controller's hostname is stored? Maybe a field in the database? The instances get stuck in "networking" status.
>>
>> The entry in nova-compute.log from the compute node that is trying to spawn the instance is clear:
>>
>> DEBUG nova.rpc [-] Making asynchronous call on network.controller1 ... from (pid=4440) call /usr/lib/pymodules/python2.6/nova/rpc.py:350
>>
>> Where "controller1" is the OLD controller/network node.
>>
>> Any clues?
>>
>> Regards
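PS: judging from the log line above, the "network.controller1" topic comes from the host value stored in the database (which is why your db update works), not from DNS resolution, so a VIP alone wouldn't redirect the calls. A quick way to check what the computes will target, assuming direct MySQL access to the nova DB:

    -- show the host each network record points at; this is the value
    -- nova-compute uses for the "network.<host>" call, regardless of DNS
    SELECT id, host FROM networks;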