On Nov 15, 2010, at 2:08 AM, Andrew Beekhof wrote:

> Don't use init.d/drbd, use the ocf script that comes with the drbd packages
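For context, that suggestion normally translates into an ocf:linbit:drbd primitive wrapped in a master/slave resource, roughly like the crm shell sketch below (the names r0, p_drbd_r0 and ms_drbd_r0 are placeholders, not taken from any configuration in this thread):

    primitive p_drbd_r0 ocf:linbit:drbd \
            params drbd_resource="r0" \
            op monitor interval="29s" role="Master" \
            op monitor interval="31s" role="Slave"
    ms ms_drbd_r0 p_drbd_r0 \
            meta master-max="1" master-node-max="1" \
                 clone-max="2" clone-node-max="1" notify="true"

A Filesystem or Xen primitive is then colocated with, and ordered after, the Master role of ms_drbd_r0.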
Well, that doesn't help with live migration, unfortunately. This is a quote from /etc/xen/scripts/block-drbd:

# This script will not load the DRBD kernel module for you, nor will
# it attach, detach, connect, or disconnect your resource.  The init
# script distributed with DRBD will do that for you.  Make sure it is
# started before attempting to start a DRBD-backed domU.

> On Thu, Nov 11, 2010 at 2:19 PM, Vadym Chepkov <vchep...@gmail.com> wrote:
>> Hi,
>>
>> I posted a less elaborate version of this question to the drbd mailing list
>> but, unfortunately, didn't get a reply; maybe the audience of this list has
>> more experience.
>>
>> I am trying to make Xen live migration work reliably, but haven't been
>> successful so far. Here is the problem.
>>
>> In the cluster configuration I have two types of resources: file systems on
>> drbd, with explicit drbd resource configuration, and Xen resources with
>> implicit configuration, using the drbd-xen block device helper. The former
>> works great, but the latter doesn't work quite as well.
>>
>> For the helper script to work, the drbd module has to be loaded and the
>> underlying resources brought up, so I have to start the init.d/drbd script.
>> I can't make it an lsb cluster resource, because a stop would be disastrous
>> for the file system resources. Enabling it in the startup sequence breaks
>> /usr/lib/drbd/crm-unfence-peer.sh, because the cluster stack is not
>> completely up by the time the drbd script finishes, and there is no way to
>> configure only the specific resources that need to be initialized.
>>
>> Also, I can't find a way to fence the Xen resource. I tried fence-peer
>> "/usr/lib/drbd/crm-fence-peer.sh -i xen_svn", where xen_svn is the name of
>> the Xen primitive, but it doesn't work, so there is a danger of starting the
>> Xen VM on an out-of-date node. There is also no way to monitor the
>> underlying drbd resources.
>>
>> I thought of adding the underlying drbd resource explicitly to the cluster,
>> but I can't figure out what the configuration would be for "this resource
>> can be master on both nodes, but if it's master on just one, that's fine
>> too". allow-two-primaries has to be enabled for live migration, and at the
>> time of migration the resource is primary on both nodes, but when the
>> migration finishes it is primary/secondary again. If I configure the drbd
>> resource in the cluster with meta master-max="2" master-node-max="1", the
>> cluster insists on having it primary on both nodes all the time.
>> Hope I didn't bore you to death and there is an elegant solution for
>> this conundrum :)
>>
>> Thank you,
>> Vadym
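As a rough reference for the fence-peer and allow-two-primaries points above: with DRBD 8.3 these are normally set on the DRBD side rather than per Pacemaker primitive. A minimal sketch, with the resource name r0 as a placeholder and the device, disk, meta-disk and on <host> statements omitted:

    resource r0 {
      disk {
        fencing resource-only;
      }
      handlers {
        fence-peer          "/usr/lib/drbd/crm-fence-peer.sh";
        after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
      }
      net {
        allow-two-primaries;   # needed while a live migration is in flight
      }
      # device, disk, meta-disk and on <host> sections omitted
    }

The block-drbd helper then lets the domU reference the resource directly in its Xen config, e.g.:

    disk = [ 'drbd:r0,xvda,w' ]

The handler wiring shown here, without extra arguments, is the form the DRBD documentation uses; whether crm-fence-peer.sh can be pointed at a plain Xen primitive via -i, as attempted above, I can't say.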