On 2010-09-13 01:56, Jai wrote:
> Hi
>
> I'm new to pacemaker configurations and am trying to replace my old
> heartbeat two-node cluster setup using haresources with
> pacemaker/corosync. I have mostly followed the instructions from
> http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf.
>
> My configuration is for two DRBD devices which are used as the
> filesystems for a Xen DomU, so they both need to be on the same node
> and in the Master role before the DomU can be started. But I think I
> still have something wrong with my config (configs are below), as I
> had some trouble getting my colocation and order commands to work
> with master/slave resources.
>
> Could you please have a look at my configuration and tell me what's
> wrong with it?
>
> The problem: during testing most actions worked as expected (shutting
> down the active node, and failover of the resources in the correct
> order, succeeded). However, pulling the power from the active node did
> not produce the expected result, which was that node alpha would take
> over the resources.
>
> [...]
>
> Failed actions:
>     testDomU_start_0 (node=alpha, call=17, rc=1, status=complete): unknown error
>     drbd0:1_monitor_10000 (node=alpha, call=31, rc=8, status=complete): master
There should be error messages in the Pacemaker logs for both of these.

> [...]
>
> I expected DRBD fencing to add both of these rules, but only one was
> added, the one for drbdTmp.
>
> location drbd-fence-by-handler-drbdRoot drbdRoot \
>         rule $id="drbd-fence-by-handler-rule-drbdRoot" $role="Master" -inf: #uname ne alpha
> location drbd-fence-by-handler-drbdTmp drbdTmp \
>         rule $id="drbd-fence-by-handler-rule-drbdTmp" $role="Master" -inf: #uname ne alpha

Well, I see none of those in your "crm configure show" output. Are you
sure that even one of them is being set?

> colocation testDomU-with-drbdRoot inf: testDomU drbdRoot:Master
> colocation testDomU-with-drbdTmp inf: testDomU drbdTmp:Master
> colocation drbdTmp-with-drbdRoot inf: drbdRoot:Master drbdTmp:Master

The third colocation is redundant, is it not?

> order order-drbdRoot-before-testDomU inf: drbdRoot:promote testDomU:start
> order order-drbdTmp-and-drbdRoot inf: drbdTmp:promote drbdRoot:promote
>
> # cat /etc/xen/test.sxp
> name = "testDomU"
> memory = 3000
> bootloader = "/usr/bin/pygrub"
> on_poweroff = "destroy"
> on_reboot = "restart"
> on_crash = "restart"
> vfb = [ "type=vnc,vncunused=1" ]
> disk = [ "phy:/dev/drbd0,xvda1,w",
>          "phy:/dev/drbd1,xvda2,w",
>          "phy:/dev/VGsys/LVswap1,xvda3,w", ]

Eeeeek. That's ugly. But I guess if you never want to be able to
migrate, it might actually work.

Take a closer look at the kernel logs (for DRBD) and /var/log/messages
(for Pacemaker). There should be something useful in there that will
help you troubleshoot this issue.

Cheers,
Florian
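[Editorial note: for reference, the constraint set above minus the redundant colocation might look like the following sketch (crm shell syntax; resource names taken from the configuration quoted above, so verify against your actual setup):]

```
# testDomU must run where both master/slave sets are promoted, so the
# colocation of drbdTmp's master with drbdRoot's master follows
# transitively and can be dropped.
colocation testDomU-with-drbdRoot inf: testDomU drbdRoot:Master
colocation testDomU-with-drbdTmp inf: testDomU drbdTmp:Master
order order-drbdRoot-before-testDomU inf: drbdRoot:promote testDomU:start
order order-drbdTmp-and-drbdRoot inf: drbdTmp:promote drbdRoot:promote
```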
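[Editorial note: for the DRBD fence handler to insert those drbd-fence-by-handler location constraints at all, each resource needs resource-level fencing and the crm-fence-peer handler configured in drbd.conf. A sketch of what that typically looks like with DRBD 8.3; the resource name matches the thread, but the script paths are assumptions to check against your installation:]

```
resource drbdRoot {
  disk {
    # freeze I/O and invoke the fence-peer handler when the
    # peer's disk becomes unreachable
    fencing resource-only;
  }
  handlers {
    # adds the drbd-fence-by-handler location constraint to the CIB
    fence-peer "/usr/lib/drbd/crm-fence-peer.sh";
    # removes the constraint again once resync has completed
    after-resync-target "/usr/lib/drbd/crm-unfence-peer.sh";
  }
}
```

If only one of the two resources (here, drbdTmp) carries this stanza, only one constraint will appear.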
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker