On Tue, Jan 24, 2012 at 03:11:31PM +1100, Andrew Beekhof wrote:
> On Tue, Jan 24, 2012 at 7:20 AM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
> > On Mon, Jan 23, 2012 at 08:55:02AM +1100, Andrew Beekhof wrote:
> >> On Sat, Jan 21, 2012 at 12:18 AM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
> >> > On Fri, Jan 20, 2012 at 01:09:56PM +1100, Andrew Beekhof wrote:
> >> >> On Thu, Jan 19, 2012 at 12:15 AM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
> >> >> > On Wed, Jan 18, 2012 at 06:58:20PM +1100, Andrew Beekhof wrote:
> >> >> >> On Wed, Jan 18, 2012 at 6:00 AM, Dejan Muhamedagic <deja...@fastmail.fm> wrote:
> >> >> >> > Hello,
> >> >> >> >
> >> >> >> > On Tue, Jan 03, 2012 at 05:19:14PM +1100, Andrew Beekhof wrote:
> >> >> >> >> Does anyone have an opinion on the following schema and example?
> >> >> >> >> I'm not a huge fan of the index field, but nor am I of making it
> >> >> >> >> sensitive to order (like groups).
> >> >> >> >
> >> >> >> > What is wrong with order in XML elements? It seems like a very
> >> >> >> > clear way to express order to me.
> >> >> >>
> >> >> >> Because we end up with the same update issues as for groups.
> >> >> >
> >> >> > OK.
> >> >> >
> >> >> > [...]
> >> >> >
> >> >> >> > Is there a possibility to express
> >> >> >> > fencing nodes simultaneously?
> >> >> >>
> >> >> >> No. It's regular boolean shortcut semantics.
> >> >> >
> >> >> > As digimer mentioned, it is one common use case, i.e. for hosts
> >> >> > with multiple power supplies. So far, we have recommended
> >> >> > lights-out devices for such hardware configurations, and if those
> >> >> > are monitored and more or less reliable, such a setup should be
> >> >> > fine. It would still be good to have a way to express it if some
> >> >> > day somebody actually implements it. I guess that the schema can
> >> >> > be easily extended by adding a "simultaneous" attribute to the
> >> >> > "fencing-rule" element.
> >> >>
> >> >> So in the example below, you'd want the ability to not just trigger
> >> >> the 'disk' and 'network' devices, but the ability to trigger them at
> >> >> the same time?
> >> >
> >> > Right.
> >>
> >> For any particular reason? Or just in case?
> >
> > For nodes with multiple PSUs and without a (supported) management
> > board.
>
> That still doesn't explain why the 'off' commands would need to be
> simultaneous, though.
> To turn the node off, both devices just need to turn the port off...
> there's no requirement that this happens simultaneously.

OK, right. What I had in mind was actually the default reset action.
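To make this concrete, here is roughly what I have in mind for the
two-PSU case. This is only a sketch: the "fencing-rule" element and
"index" field are from your proposal, the "simultaneous" attribute is
my suggested extension, and the wrapper element and the device names
(apc1/apc2 standing for the two PDUs feeding the power supplies) are
made up:

    <fencing-topology>
      <fencing-rule id="f-node1-power" target="node1" index="1"
                    devices="apc1,apc2" simultaneous="true"/>
    </fencing-topology>

For a reset, both outlets would then have to be off at (more or less)
the same time before power is restored; otherwise the node never
actually loses power.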
> > I think that one of our APC stonith agents can turn more
> > than one port off simultaneously.
>
> If they're for the same host and device, then you don't even need this.
> Just specify two ports in the host_map.

Cool. I didn't look into it. How would that work with, say,
external/rackpdu (which uses snmpset(8) to manage the ports)? That
agent can either use names_oid to fetch the ports by itself (in which
case the outlets must be named after the nodes) or this:

    outlet_config (string): Configuration file. Another way to
        recognize the outlet number by node name. The file contains
        node_name=outlet_number strings. Example:

            server1=1
            server2=2

Now, how does stonithd know which parameter to use to pass the outlet
(port) number from the host_map list to the agent? I assume that the
agent should have a matching API. Does this work only with the RH
fence agents?
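For reference, this is how I read the documented host_map syntax for
the RH agents; a sketch only, with made-up device parameters (the
resource name, address and credentials are placeholders), assuming
the node:port,port form from the documentation:

    primitive st-pdu stonith:fence_apc_snmp \
        params ipaddr="pdu1" login="apc" passwd="apc" \
               pcmk_host_map="node1:1,2;node2:3,4"

That is, outlets 1 and 2 would both be switched when fencing node1.
What I don't see is how such a map would be handed to an agent like
external/rackpdu, which expects the mapping in its own outlet_config
file.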