On Mon, 2011-04-04 at 21:31 +0200, Holger Teutsch wrote:
> On Mon, 2011-04-04 at 15:24 +0200, Andrew Beekhof wrote:
> > On Mon, Apr 4, 2011 at 2:43 PM, Holger Teutsch <holger.teut...@web.de> wrote:
> > > On Mon, 2011-04-04 at 11:05 +0200, Andrew Beekhof wrote:
> > >> On Sat, Mar 19, 2011 at 11:55 AM, Holger Teutsch <holger.teut...@web.de> wrote:
> > >> > Hi Dejan,
> > >> >
> > >> > On Fri, 2011-03-18 at 14:24 +0100, Dejan Muhamedagic wrote:
> > >> >> Hi,
> > >> >>
> > >> >> On Fri, Mar 18, 2011 at 12:21:40PM +0100, Holger Teutsch wrote:
> > >> >> > Hi,
> > >> >> > I would like to submit 2 patches of an initial implementation for
> > >> >> > discussion.
> > >> > ..
> > >> >> > To recall:
> > >> >> >
> > >> >> > crm_resource --move resource
> > >> >> > creates a "standby" rule that moves the resource off the currently
> > >> >> > active node
> > >> >> >
> > >> >> > while
> > >> >> >
> > >> >> > crm_resource --move resource --node newnode
> > >> >> > creates a "prefer" rule that moves the resource to the new node.
> > >> >> >
> > >> >> > When dealing with clones and masters the behavior was random as the
> > >> >> > code only considers the node where the first instance of the clone
> > >> >> > was started.
> > >> >> >
> > >> >> > The new code behaves consistently for the master role of an m/s
> > >> >> > resource. The options "--master" and "rsc:master" are somewhat
> > >> >> > redundant as a "slave" move is not supported. Currently it's more an
> > >> >> > acknowledgement of the user.
> > >> >> >
> > >> >> > On the other hand it is desirable (and was requested several times
> > >> >> > on the ML) to stop a single resource instance of a clone or master
> > >> >> > on a specific node.
> > >> >> >
> > >> >> > Should that be implemented by something like
> > >> >> >
> > >> >> > "crm_resource --move-off --resource myresource --node devel2" ?
> > >> >> >
> > >> >> > or should
> > >> >> >
> > >> >> > crm_resource refuse to work on clones
> > >> >> >
> > >> >> > and/or should moving the master role be the default for m/s
> > >> >> > resources and the "--master" option discarded ?
> > >> >>
> > >> >> I think that we also need to consider the case when clone-max is
> > >> >> less than the number of nodes. If I understood correctly what you
> > >> >> were saying. So, all of move slave and move master and move clone
> > >> >> should be possible.
> > >> >>
> > >> >
> > >> > I think the following use cases cover what can be done with such a
> > >> > kind of interface:
> > >> >
> > >> > crm_resource --moveoff --resource myresource --node mynode
> > >> > -> all resource variants: check whether active on mynode, then
> > >> >    create standby constraint
> > >> >
> > >> > crm_resource --move --resource myresource
> > >> > -> primitive/group: convert to --moveoff --node `current_node`
> > >> > -> clone/master: refused
> > >> >
> > >> > crm_resource --move --resource myresource --node mynode
> > >> > -> primitive/group: create prefer constraint
> > >> > -> clone/master: refused
> > >>
> > >> Not sure this needs to be refused.
> > >
> > > I see the problem that the node where the resource instance should be
> > > moved off had to be specified as well to get predictable behavior.
> > >
> > > Consider a 2-way clone on a 3 node cluster.
> > > If the clone is active on A and B, what should
> > >
> > > crm_resource --move --resource myClone --node C
> > >
> > > do ?
> >
> > I would expect it to create the +inf constraint for C but no
> > constraint(s) for the current location(s)
>
> You are right. These are different and valid use cases.
>
> crm_resource --move --resource myClone --node C
> -> I want an instance on C, regardless of where it is moved off
>
> crm_resource --move-off --resource myClone --node C
> -> I want the instance moved off C, regardless of where it is moved on
>
> I tried them out with a reimplementation of the patch on a 3 node
> cluster with a resource with clone-max=2. The behavior appears logical
> (at least to me 8-) ).
>
> > > This would require an additional --from-node or similar.
> > >
> > >> Other than that the proposal looks sane.
> > >>
> > >> My first thought was to make --move behave like --move-off if the
> > >> resource is a clone or m/s, but since the semantics are the exact
> > >> opposite, that might introduce more problems than it solves.
> > >
> > > That was my perception as well.
> > >
> > >>
> > >> Does the original crm_resource patch implement this?
> > >
> > > No, I will submit an updated version later this week.
> > >
> > > - holger
Hi,

I submit revised patches for review. Summarizing the preceding
discussion, the following functionality is implemented:

crm_resource --move-off --resource myresource --node mynode
-> all resource variants: check whether active on mynode, then
   create standby constraint

crm_resource --move --resource myresource
-> primitive/group: convert to --move-off --node `current_node`
-> clone/master: refused

crm_resource --move --resource myresource --node mynode
-> all resource variants: create prefer constraint

crm_resource --move --resource myresource --master --node mynode
-> master: check whether active as slave on mynode, then create
   prefer constraint for the master role
-> others: refused

The patch shell_migrate.diff supports this in the shell. This stuff is
agnostic of what crm_migrate really does.

Regards
Holger
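For illustration, the two kinds of constraints land in the CIB roughly as
sketched below. The IDs, the resource name and the node name are invented
for this example; the rule attributes (score, role, boolean-op) follow what
the patch writes, but this is not the literal output of the patched tool:

```xml
<!-- sketch of the "prefer" constraint created by
     crm_resource --move --resource myresource --master --node mynode
     (all ids are illustrative) -->
<rsc_location id="cli-prefer-myresource" rsc="myresource">
  <rule id="cli-prefer-rule-myresource" role="Master" score="INFINITY" boolean-op="and">
    <expression id="cli-prefer-expr-myresource"
                attribute="#uname" operation="eq" value="mynode"/>
  </rule>
</rsc_location>

<!-- sketch of the "standby" constraint created by
     crm_resource --move-off --resource myresource --node mynode -->
<rsc_location id="cli-standby-myresource" rsc="myresource">
  <rule id="cli-standby-rule-myresource" score="-INFINITY" boolean-op="and">
    <expression id="cli-standby-expr-myresource"
                attribute="#uname" operation="eq" value="mynode"/>
  </rule>
</rsc_location>
```

Removing either constraint (crm_resource --un-move) lets the cluster place
the resource freely again.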
diff -r b4f456380f60 doc/crm_cli.txt
--- a/doc/crm_cli.txt	Thu Mar 17 09:41:25 2011 +0100
+++ b/doc/crm_cli.txt	Mon Apr 04 14:19:31 2011 +0200
@@ -818,10 +818,25 @@
 running on the current node. Additionally, you may specify a
 lifetime for the constraint---once it expires, the location
 constraint will no longer be active.
+For a master resource specify <rsc>:master to move the master role.
 
 Usage:
 ...............
-        migrate <rsc> [<node>] [<lifetime>] [force]
+        migrate <rsc>[:master] [<node>] [<lifetime>] [force]
+...............
+
+[[cmdhelp_resource_migrateoff,migrate a resource off the specified node]]
+==== `migrateoff` (`moveoff`)
+
+Migrate a resource (-instance for clones/masters) off the specified node.
+The resource is migrated by creating a constraint which prevents it from
+running on the specified node. Additionally, you may specify a
+lifetime for the constraint---once it expires, the location
+constraint will no longer be active.
+
+Usage:
+...............
+        migrateoff <rsc> <node> [<lifetime>] [force]
 ...............
 
 [[cmdhelp_resource_unmigrate,unmigrate a resource to another node]]
diff -r b4f456380f60 shell/modules/ui.py.in
--- a/shell/modules/ui.py.in	Thu Mar 17 09:41:25 2011 +0100
+++ b/shell/modules/ui.py.in	Mon Apr 04 14:19:31 2011 +0200
@@ -739,6 +739,7 @@
     rsc_showxml = "crm_resource -q -r '%s'"
     rsc_setrole = "crm_resource --meta -r '%s' -p target-role -v '%s'"
     rsc_migrate = "crm_resource -M -r '%s' %s"
+    rsc_migrateoff = "crm_resource --move-off -r '%s' %s"
     rsc_unmigrate = "crm_resource -U -r '%s'"
     rsc_cleanup = "crm_resource -C -r '%s' -H '%s'"
     rsc_cleanup_all = "crm_resource -C -r '%s'"
@@ -777,6 +778,7 @@
         self.cmd_table["manage"] = (self.manage,(1,1),0)
         self.cmd_table["unmanage"] = (self.unmanage,(1,1),0)
         self.cmd_table["migrate"] = (self.migrate,(1,4),0)
+        self.cmd_table["migrateoff"] = (self.migrateoff,(2,3),0)
         self.cmd_table["unmigrate"] = (self.unmigrate,(1,1),0)
         self.cmd_table["param"] = (self.param,(3,4),1)
         self.cmd_table["meta"] = (self.meta,(3,4),1)
@@ -788,6 +790,7 @@
         self.cmd_aliases.update({
             "status": ("show","list",),
             "migrate": ("move",),
+            "migrateoff": ("moveoff",),
             "unmigrate": ("unmove",),
         })
         setup_aliases(self)
@@ -847,8 +850,16 @@
             return False
         return set_deep_meta_attr("is-managed","false",rsc)
     def migrate(self,cmd,*args):
-        """usage: migrate <rsc> [<node>] [<lifetime>] [force]"""
-        rsc = args[0]
+        """usage: migrate <rsc>[:master] [<node>] [<lifetime>] [force]"""
+        elem = args[0].split(':')
+        rsc = elem[0]
+        master = False
+        if len(elem) > 1:
+            master = elem[1]
+            if master != "master":
+                common_error("%s is invalid, specify 'master'" % master)
+                return False
+            master = True
         if not is_name_sane(rsc):
             return False
         node = None
@@ -888,7 +899,22 @@
             opts = "%s --lifetime='%s'" % (opts,lifetime)
         if force or user_prefs.get_force():
             opts = "%s --force" % opts
+        if master:
+            opts = "%s --master" % opts
         return ext_cmd(self.rsc_migrate % (rsc,opts)) == 0
+    def migrateoff(self,cmd,*args):
+        """usage: migrateoff <rsc> <node> [<lifetime>]"""
+        rsc = args[0]
+        if not is_name_sane(rsc):
+            return False
+        node = args[1]
+        lifetime = None
+        if len(args) == 3:
+            lifetime = args[2]
+        opts = "--node='%s'" % node
+        if lifetime:
+            opts = "%s --lifetime='%s'" % (opts,lifetime)
+        return ext_cmd(self.rsc_migrateoff % (rsc,opts)) == 0
     def unmigrate(self,cmd,rsc):
         "usage: unmigrate <rsc>"
         if not is_name_sane(rsc):
diff -r b4f456380f60 tools/crm_resource.c
--- a/tools/crm_resource.c	Thu Mar 17 09:41:25 2011 +0100
+++ b/tools/crm_resource.c	Tue Apr 05 09:43:06 2011 +0200
@@ -52,7 +52,8 @@
 const char *prop_id = NULL;
 const char *prop_set = NULL;
 char *move_lifetime = NULL;
-char rsc_cmd = 'L';
+int move_master = 0;
+int rsc_cmd = 'L';
 char *our_pid = NULL;
 IPC_Channel *crmd_channel = NULL;
 char *xml_file = NULL;
@@ -192,6 +193,33 @@
     return 0;
 }
 
+/* return role of resource on node */
+static int
+role_on_node(resource_t *rsc, const char *node_uname)
+{
+    GListPtr lpc = NULL;
+
+    if(rsc->variant > pe_native) {
+        /* recursively call down */
+        GListPtr gIter = rsc->children;
+        for(; gIter != NULL; gIter = gIter->next) {
+            int role;
+            if((role = role_on_node(gIter->data, node_uname)) != RSC_ROLE_UNKNOWN)
+                return role;
+        }
+        return RSC_ROLE_UNKNOWN;
+    }
+
+    for(lpc = rsc->running_on; lpc != NULL; lpc = lpc->next) {
+        node_t *node = (node_t*)lpc->data;
+        if(rsc->variant == pe_native
+           && safe_str_eq(node->details->uname, node_uname)) {
+            return rsc->role;
+        }
+    }
+    return RSC_ROLE_UNKNOWN;
+}
+
 #define cons_string(x) x?x:"NA"
 static void
 print_cts_constraints(pe_working_set_t *data_set)
@@ -795,8 +823,9 @@
 }
 
 static int
-move_resource(
+make_move_constraint(
     const char *rsc_id,
+    int move_master,
     const char *existing_node, const char *preferred_node,
     cib_t * cib_conn)
 {
@@ -935,6 +964,10 @@
     crm_xml_add(rule, XML_ATTR_ID, id);
     crm_free(id);
 
+    if(move_master) {
+        crm_xml_add(rule, XML_RULE_ATTR_ROLE, "Master");
+    }
+
     crm_xml_add(rule, XML_RULE_ATTR_SCORE, INFINITY_S);
     crm_xml_add(rule, XML_RULE_ATTR_BOOLEAN_OP, "and");
 
@@ -975,6 +1008,67 @@
 }
 
 static int
+move_off_resource(resource_t *rsc, const char *off_uname, cib_t *cib_conn)
+{
+    int rc = 0;
+
+    if (off_uname == NULL) {
+        CMD_ERR("Error performing operation: "
+                "must specify a host name (--node) for move off of a resource %s\n",
+                rsc_id);
+
+    } else if (role_on_node(rsc, off_uname) < RSC_ROLE_STARTED) {
+        CMD_ERR("Resource %s not moved:"
+                " not active on %s\n", rsc_id, off_uname);
+    } else if (move_master) {
+        CMD_ERR("Resource %s not moved:"
+                " specifying --master is not supported for move-off\n", rsc_id);
+    } else {
+        rc = make_move_constraint(rsc_id, 0, off_uname, NULL, cib_conn);
+    }
+
+    return rc;
+}
+
+static int
+move_on_resource(resource_t *rsc, const char *on_uname, cib_t *cib_conn)
+{
+    int rc = 0;
+    int role = role_on_node(rsc, on_uname);
+
+    if(rsc->variant < pe_master) {
+        if(move_master) {
+            CMD_ERR("Error performing operation: "
+                    "--master is only supported for m/s resources\n");
+        } else if(role >= RSC_ROLE_STARTED) {
+            CMD_ERR("Error performing operation: "
+                    "%s is already active on %s\n",
+                    rsc_id, on_uname);
+        } else {
+            rc = make_move_constraint(rsc_id, 0, NULL, on_uname, cib_conn);
+        }
+
+    } else { /* pe_master */
+        if(move_master && role == RSC_ROLE_MASTER) {
+            CMD_ERR("Error performing operation: "
                    "%s is already active on %s with role Master\n",
+                    rsc_id, on_uname);
+        } else if(move_master && role != RSC_ROLE_SLAVE) {
+            CMD_ERR("Error performing operation: "
+                    "%s is not active on %s as Slave, can't move Master role\n",
+                    rsc_id, on_uname);
+
+        } else {
+            /* zap a possibly existing master rule */
+            make_move_constraint(rsc_id, 0, NULL, NULL, cib_conn);
+            rc = make_move_constraint(rsc_id, move_master, NULL, on_uname, cib_conn);
+        }
+    }
+
+    return rc;
+}
+
+static int
 list_resource_operations(
     const char *rsc_id, const char *host_uname, gboolean active,
     pe_working_set_t *data_set)
 {
@@ -1093,6 +1187,10 @@
     crm_free(prefix);
 }
 
+/* (partially) out of single letter options */
+#define CMD_MOVE_ON  ('M') /* must match corresponding short option */
+#define CMD_MOVE_OFF (256 + 'o')
+#define OPT_MASTER   (256 + 'm')
 static struct crm_option long_options[] = {
     /* Top-level Options */
     {"help",    0, 0, '?', "\t\tThis text"},
@@ -1119,11 +1217,12 @@
     {"delete-parameter",1, 0, 'd', "Delete the named parameter for a resource. See also -m, --meta"},
     {"get-property",    1, 0, 'G', "Display the 'class', 'type' or 'provider' of a resource", 1},
     {"set-property",    1, 0, 'S', "(Advanced) Set the class, type or provider of a resource", 1},
-    {"move",    0, 0, 'M',
-     "\t\tMove a resource from its current location, optionally specifying a destination (-N) and/or a period for which it should take effect (-u)"
+    {"move",    0, 0, CMD_MOVE_ON,
+     "\t\tMove a resource from its current location, optionally specifying a role (--master), a destination (-N) and/or a period for which it should take effect (-u)"
      "\n\t\t\t\tIf -N is not specified, the cluster will force the resource to move by creating a rule for the current location and a score of -INFINITY"
      "\n\t\t\t\tNOTE: This will prevent the resource from running on this node until the constraint is removed with -U"},
-    {"un-move", 0, 0, 'U', "\tRemove all constraints created by a move command"},
+    {"move-off", 0, 0, CMD_MOVE_OFF, "\t\tMove a resource off the node specified by -N"},
+    {"un-move", 0, 0, 'U', "\t\tRemove all constraints created by a move command"},
 
     {"-spacer-", 1, 0, '-', "\nAdvanced Commands:"},
     {"delete",  0, 0, 'D', "\t\tDelete a resource from the CIB"},
@@ -1137,6 +1236,7 @@
     {"resource-type",   1, 0, 't', "Resource type (primitive, clone, group, ...)"},
     {"parameter-value", 1, 0, 'v', "Value to use with -p, -g or -d"},
     {"lifetime",        1, 0, 'u', "\tLifespan of migration constraints\n"},
+    {"master",          0, 0, OPT_MASTER, "\t\tMaster role for migration constraints\n"},
     {"meta",            0, 0, 'm', "\t\tModify a resource's configuration option rather than one which is passed to the resource agent script. For use with -p, -g, -d"},
     {"utilization",     0, 0, 'z', "\tModify a resource's utilization attribute. For use with -p, -g, -d"},
     {"set-name",        1, 0, 's', "\t(Advanced) ID of the instance_attributes object to change"},
@@ -1158,10 +1258,13 @@
     {"-spacer-", 1, 0, '-', " crm_resource --list", pcmk_option_example},
     {"-spacer-", 1, 0, '-', "Display the current location of 'myResource':", pcmk_option_paragraph},
     {"-spacer-", 1, 0, '-', " crm_resource --resource myResource --locate", pcmk_option_example},
-    {"-spacer-", 1, 0, '-', "Move 'myResource' to another machine:", pcmk_option_paragraph},
+    {"-spacer-", 1, 0, '-', "Move primitive or group 'myResource' off the active node:", pcmk_option_paragraph},
     {"-spacer-", 1, 0, '-', " crm_resource --resource myResource --move", pcmk_option_example},
+    {"-spacer-", 1, 0, '-', "Move 'myResource' off the specified node:", pcmk_option_paragraph},
+    {"-spacer-", 1, 0, '-', " crm_resource --resource myResource --move-off --node actNode", pcmk_option_example},
     {"-spacer-", 1, 0, '-', "Move 'myResource' to a specific machine:", pcmk_option_paragraph},
     {"-spacer-", 1, 0, '-', " crm_resource --resource myResource --move --node altNode", pcmk_option_example},
+    {"-spacer-", 1, 0, '-', " crm_resource --resource myMsResource --master --move --node altNode", pcmk_option_example},
     {"-spacer-", 1, 0, '-', "Allow (but not force) 'myResource' to move back to its original location:", pcmk_option_paragraph},
     {"-spacer-", 1, 0, '-', " crm_resource --resource myResource --un-move", pcmk_option_example},
     {"-spacer-", 1, 0, '-', "Tell the cluster that 'myResource' failed:", pcmk_option_paragraph},
@@ -1262,14 +1365,18 @@
         case 'F':
         case 'C':
         case 'W':
-        case 'M':
+        case CMD_MOVE_ON:
+        case CMD_MOVE_OFF:
         case 'U':
         case 'O':
         case 'o':
        case 'A':
         case 'a':
             rsc_cmd = flag;
             break;
+        case OPT_MASTER:
+            move_master = 1;
+            break;
         case 'p':
         case 'g':
         case 'd':
@@ -1482,61 +1589,60 @@
             rc = cib_NOTEXISTS;
             goto bail;
         }
-        rc = move_resource(rsc_id, NULL, NULL, cib_conn);
+        rc = make_move_constraint(rsc_id, 0, NULL, NULL, cib_conn);
 
-    } else if(rsc_cmd == 'M') {
-        node_t *dest = NULL;
-        node_t *current = NULL;
-        const char *current_uname = NULL;
+    } else if(rsc_cmd == CMD_MOVE_ON || rsc_cmd == CMD_MOVE_OFF) {
         resource_t *rsc = pe_find_resource(data_set.resources, rsc_id);
-        if(rsc != NULL && rsc->running_on != NULL) {
-            current = rsc->running_on->data;
-            if(current != NULL) {
-                current_uname = current->details->uname;
-            }
-        }
-
-        if(host_uname != NULL) {
-            dest = pe_find_node(data_set.nodes, host_uname);
-        }
-
+        /* some common error checking */
         if(rsc == NULL) {
             CMD_ERR("Resource %s not moved:"
                    " not found\n", rsc_id);
+            goto bail;
+        }
 
-        } else if(rsc->variant == pe_native
-                  && g_list_length(rsc->running_on) > 1) {
-            CMD_ERR("Resource %s not moved:"
-                    " active on multiple nodes\n", rsc_id);
-
-        } else if(host_uname != NULL && dest == NULL) {
-            CMD_ERR("Error performing operation: "
-                    "%s is not a known node\n", host_uname);
-            rc = cib_NOTEXISTS;
+        if(host_uname != NULL) {
+            if(pe_find_node(data_set.nodes, host_uname) == NULL) {
+                CMD_ERR("Error performing operation: "
+                        "%s is not a known node\n", host_uname);
+                rc = cib_NOTEXISTS;
+                goto bail;
+            }
+        }
 
-        } else if(host_uname != NULL
-                  && safe_str_eq(current_uname, host_uname)) {
-            CMD_ERR("Error performing operation: "
-                    "%s is already active on %s\n",
-                    rsc_id, host_uname);
+        switch(rsc_cmd) {
+            case CMD_MOVE_ON:
+                /* special case: move with no hostname is actually a move-off */
+                if(host_uname == NULL) {
+                    node_t *current = NULL;
+                    if (rsc->variant > pe_group) {
+                        CMD_ERR("Resource %s not moved:"
+                                " move without hostname not supported for clones and masters, use --move-off\n", rsc_id);
+                        goto bail;
+                    }
 
-        } else if(current_uname != NULL
-                  && (do_force || host_uname == NULL)) {
-            rc = move_resource(rsc_id, current_uname,
-                               host_uname, cib_conn);
+                    if(rsc->running_on != NULL)
+                        current = rsc->running_on->data;
 
-
-        } else if(host_uname != NULL) {
-            rc = move_resource(
-                rsc_id, NULL, host_uname, cib_conn);
+                    if(current == NULL) {
+                        CMD_ERR("Resource %s not moved:"
+                                " not active on any node\n", rsc_id);
+                    } else {
+                        rc = move_off_resource(rsc, current->details->uname, cib_conn);
+                    }
 
-        } else {
-            CMD_ERR("Resource %s not moved: "
-                    "not-active and no preferred location"
-                    " specified.\n", rsc_id);
-            rc = cib_missing;
-        }
-
+                } else {
+                    rc = move_on_resource(rsc, host_uname, cib_conn);
+                }
+                break;
+
+            case CMD_MOVE_OFF:
+                rc = move_off_resource(rsc, host_uname, cib_conn);
+                break;
+
+        }
+
     } else if(rsc_cmd == 'G') {
         if(rsc_id == NULL) {
             CMD_ERR("Must supply a resource id with -r\n");
_______________________________________________
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://developerbugs.linux-foundation.org/enter_bug.cgi?product=Pacemaker