Hello,

I am trying to build a two-node cluster to provide an HA Xen domU
environment. Part of the service should be support for live migration
(used mainly during routine maintenance, not for HA failover).

I am using Debian Etch with backports (heartbeat 2 package, version 2.1.3).

I have created a standalone resource (called win2) using the Xen OCF resource agent:

 <primitive id="win2" class="ocf" type="Xen" provider="heartbeat">
   <operations>
     <op name="monitor" interval="10s" timeout="60s" id="xen-op-01" 
start_delay="0" disabled="false" role="Started"/>
     <op name="stop" timeout="60s" id="xen-op-02" start_delay="0" 
disabled="false" role="Started"/>
   </operations>
   <instance_attributes id="win2_instance">
     <attributes>
       <nvpair id="xen-01" name="xmfile" value="/etc/xen/win2.cfg"/>
     </attributes>
   </instance_attributes>
   <meta_attributes id="win2_meta">
     <attributes>
       <nvpair id="win2_metaattr_target_role" name="target_role" 
value="stopped"/>
       <nvpair id="win2_metaattr_allow_migrate" name="allow_migrate" 
value="true"/>
     </attributes>
   </meta_attributes>
 </primitive>

This WORKS nicely - live migration is working:

Jan 15 18:39:45 s-70 Xen[24715]: [24740]: INFO: win2: Starting xm migrate to 
s-71
Jan 15 18:40:00 s-70 Xen[24715]: [24910]: ERROR: win2: xm migrate to s-71 
succeeded.
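
For reference, the live migration the resource agent performs boils down to an `xm migrate` call; here is a dry-run sketch (domain and target names are taken from the log above; the exact `--live` flag the agent passes is my assumption, not verified against the 2.1.3 agent source):

```shell
#!/bin/sh
# Dry-run sketch only: print the xm command a live migration amounts to.
# "win2" and "s-71" come from the log above; --live is an assumption
# about the flag the Xen RA uses for live (as opposed to stop-and-copy)
# migration.
DOMU=win2
TARGET=s-71
echo "xm migrate --live $DOMU $TARGET"
```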

I want to provide a virtual IP for VNC access as well, so I have created
a resource group that simply combines IPaddr/ocf with Xen/ocf:

 <group id="group_win2">
   <meta_attributes id="group_win2_meta_attrs">
     <attributes>
       <nvpair id="group_win2_metaattr_target_role" name="target_role" 
value="stopped"/>
       <nvpair id="group_win2_metaattr_ordered" name="ordered" value="true"/>
       <nvpair id="group_win2_metaattr_collocated" name="collocated" 
value="true"/>
     </attributes>
   </meta_attributes>
   <primitive id="win2_vncip" class="ocf" type="IPaddr" provider="heartbeat">
     <instance_attributes id="win2_vncip_instance_attrs">
       <attributes>
         <nvpair id="win2_vncip_attr_ip" name="ip" value="192.168.20.74"/>
       </attributes>
     </instance_attributes>
   </primitive>

 <primitive id="win2" class="ocf" type="Xen" provider="heartbeat">
   <operations>
     <op name="monitor" interval="10s" timeout="60s" id="xen-op-01" 
start_delay="0" disabled="false" role="Started"/>
     <op name="stop" timeout="60s" id="xen-op-02" start_delay="0" 
disabled="false" role="Started"/>
   </operations>
   <instance_attributes id="win2_instance">
     <attributes>
       <nvpair id="xen-01" name="xmfile" value="/etc/xen/win2.cfg"/>
     </attributes>
   </instance_attributes>
   <meta_attributes id="win2_meta">
     <attributes>
       <nvpair id="win2_metaattr_target_role" name="target_role" 
value="stopped"/>
       <nvpair id="win2_metaattr_allow_migrate" name="allow_migrate" 
value="true"/>
     </attributes>
   </meta_attributes>
 </primitive>
 </group>

Unfortunately this setup doesn't work: the Xen domain is always destroyed
and then created from scratch on the other node.

The problem probably lies in the fact that the domU is running on both
nodes during migration (which is expected). Is there any way to
temporarily run two IPaddr instances and stop the old one only after the
migration has finished? (Xen needs to be able to bind the IP before the
domU starts.)
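
One direction I have been considering (untested; attribute names follow my reading of the heartbeat 2.1 CIB DTD, which I have not verified against 2.1.3): drop the group and express the same relationship with explicit constraints, so that win2 stays a standalone resource as in the working case:

```xml
<constraints>
  <!-- keep the VNC IP on the same node as the domU -->
  <rsc_colocation id="vncip_with_win2" from="win2_vncip" to="win2"
      score="INFINITY"/>
  <!-- start the domU only after the IP is up, as the group ordering did -->
  <rsc_order id="win2_after_vncip" from="win2" to="win2_vncip" type="after"/>
</constraints>
```

Whether this actually lets the migration proceed instead of forcing a stop/start, I don't know; it merely reproduces the group semantics without the group wrapper.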

Thanks for any ideas,

                Antonin
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems