On Sat, Jun 06, 2009 at 04:33:48PM +0200, Lars Marowsky-Bree wrote:
> On 2009-06-06T10:59:44, Lars Ellenberg <lars.ellenb...@linbit.com> wrote:
>
> > > On join of a drbd_<UUID> group, you could see who else is there and
> > > connect to them, and also figure out if people try to start on more than
> > > 2 nodes etc.
> > now since when do you want a dopd bypassing the crm?
> > to ensure that would be the crm's job, no?
>
> I don't think of this as a bypass. OCFS2/DLM use similar mechanisms to
> ensure their internal integrity as well.
>
> With drbd supporting active/active or active/passive, for example, the
> CRM/RA can't reliably tell whether the number of activated nodes is
> correct (and this will get worse if >2 nodes ever are supported),
> without resorting to parsing drbd's configuration file, which is icky (and
> relies on the config file being identical on all nodes).
this was about "dopd replacement":
how do we reduce the risk of going online with out-of-date, stale data.

so apart from being "off topic", it is not the RA's job to force a
certain configuration.
you configure master-max=2, and forget to allow two primaries in
drbd.conf: promoting the second one will fail. so what.
the other way around: allow two primaries in drbd.conf, but master-max=1
in the cib: no harm done.
and that has nothing to do with "outdate peer".

> And also this would reduce the amount of configuration necessary - ie,
> if the IP addresses were inherited from the OpenAIS configuration. (By
> default; of course this could be overridden.)
>
> Actually, how about storing the configuration of each drbd instance in
> the instance's meta-data?
>
> With internal meta-data, for example, one could then simply say: "start
> device XXX".

I do not follow. what problem is it you are trying to solve?

> If the meta-data then was distributed using OpenAIS (say, in a
> checkpoint, quite easy to do I'm told ;-), on the second node, the
> initial setup would be reduced to "drbdadm clone <drbd-id>
> <local-device>"

to distribute drbd.conf, use csync2.
to get rid of drbd.conf, rewrite a drbd RA to only use drbdsetup, and
pass all configuration in as instance parameters.
I see absolutely no need to write yet another distributed (based on
openAIS or whatever) configuration and meta-data and whatnot daemon,
if we already have the cib.

> There could be a start-or-clone command too (maybe even the default?)
> which would do the right thing (either resync if a copy already existed
> or do a full clone), easing recovery of failed nodes.

drbd does that now.
but maybe I again don't see which problem you are trying to solve?

> And if the configuration is distributed using OpenAIS, doing a "drbdadm
> configure change syncer-speed 10M" would immediately affect all nodes
> w/o needing to manually modify drbd.conf everywhere.

see above: http://oss.linbit.com/csync2
or use the cib.

> > what we actually are doing right now is placing location constraints on
> > the master role into the cib from the "fence-peer" handler, and removing
> > them again from the "after-sync-target" handler. sort of works.
>
> Here I think we need a more extensive discussion. Why doesn't your
> handler modify the master score instead, but add additional
> constraints?

generic question about master scores and globally-unique=false.
I don't think it can even work.
but if they can work: how, and why, are master scores supposed to work?

if _by definition_ the instances are not distinguishable,
why would placing a master preference on drbd-xy:1 prevent drbd-xy:1
from being allocated on the "wrong" node, accessing the "wrong" data?

if we are "globally-unique=false" (and I really think drbd would fall
into that category), then there is no difference whether I place the
better score on drbd-xy:0 or drbd-xy:1, apart from _accidentally_
colocating drbd-xy:0 with the same (group of) hosts "most of the time".

what am I missing?

--
: Lars Ellenberg
: LINBIT | Your Way to High Availability
: DRBD/HA support and consulting    http://www.linbit.com

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
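
A rough sketch of the fence-peer / after-sync-target handler pair mentioned
above, assuming a master/slave resource named ms-drbd0; the resource name,
constraint ids, and the "local node is the UpToDate one" shortcut are purely
illustrative, the real handlers have to determine those themselves:

    # fence-peer handler: while the peer's data may be stale, forbid the
    # Master role everywhere except on the node that still has good data
    # (here simply assumed to be the local node, $(uname -n)).
    cibadmin -C -o constraints -X "
      <rsc_location id=\"drbd-fence-by-handler-ms-drbd0\" rsc=\"ms-drbd0\">
        <rule id=\"drbd-fence-by-handler-ms-drbd0-rule\"
              role=\"Master\" score=\"-INFINITY\">
          <expression id=\"drbd-fence-by-handler-ms-drbd0-expr\"
                      attribute=\"#uname\" operation=\"ne\"
                      value=\"$(uname -n)\"/>
        </rule>
      </rsc_location>"

    # after-sync-target handler: the peer has finished resync, its data
    # is up to date again, so drop the constraint.
    cibadmin -D -X '<rsc_location id="drbd-fence-by-handler-ms-drbd0"/>'

as long as that constraint is in the cib, the node with stale data cannot be
promoted, which is what makes this usable as a dopd replacement; removing it
only from the after-sync-target handler ensures promotion is allowed again
only once the data is good.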