Thanks for the review, Fabian!

Sorry for the wait, I was more focused on testing other patch series which were already ready to merge for PVE 8.4 ;). But I'm going to be working on this again now, so that it will be ready for the next release or even before that :)

Thanks for the suggestions, I'll implement them shortly. I still have a few questions/discussion points below.

On 4/3/25 14:16, Fabian Grünbichler wrote:
On March 25, 2025 4:12 pm, Daniel Kral wrote:
Add the colocation rule plugin to allow users to specify inter-service
affinity constraints.

These colocation rules can either be positive (keeping services
together) or negative (keeping services separate). Their strictness can
also be specified as either a MUST or a SHOULD, where the former
specifies that any service for which the constraint cannot be applied
stays in recovery, while the latter specifies that any such service is
lifted from the constraint.

The initial implementation also provides four basic transformations,
where colocation rules with too few services are dropped, transitive
positive colocation rules are merged, and inter-colocation rule
inconsistencies as well as colocation rule inconsistencies with respect
to the location constraints specified in HA groups are dropped.

a high-level question: there's a lot of loops and sorts over rules,
services and groups here - granted, that is all in memory, so it should
be reasonably fast. do we have concerns here/should we look for further
optimization potential?

e.g. right now I count (coming in via canonicalize):

- check_services_count
-- sort of ruleids (foreach_colocation_rule)
-- loop over rules (foreach_colocation_rule)
--- keys on services of each rule
- loop over the results (should be empty)
- check_positive_intransitivity
-- sort of ruleids, 1x loop over rules (foreach_colocation_rule via split_colocation_rules)
-- loop over each unique pair of ruleids
--- is_disjoint on services of each pair (loop over service keys)
- loop over resulting ruleids (might be many!)
-- loop over mergeable rules for each merge target
--- loop over services of each mergeable rule
- check_inner_consistency
-- sort of ruleids, 1x loop over rules (foreach_colocation_rule via split_colocation_rules)
-- loop over positive rules
--- for every positive rule, loop over negative rules
---- for each pair of positive+negative rule, check service
intersections
- loop over resulting conflicts (should be empty)
- check_consistency_with_groups
-- sort of ruleids, 1x loop over rules (foreach_colocation_rule via split_colocation_rules)
-- loop over positive rules
--- loop over services
---- loop over nodes of service's group
-- loop over negative rules
--- loop over services
---- loop over nodes of service's group
- loop over resulting conflicts (should be empty)

possibly, by splitting the rules (instead of just the IDs) once and
keeping a list of sorted rule IDs, we could save some overhead?

might not be worth it (yet) though, but something to keep in mind if the
rules are getting more complicated over time..

Thanks for the nice call graph!

I think it would be reasonable to do this already, especially to reduce the code duplication between canonicalize() and are_satisfiable() which you already mentioned below.

I was thinking about something like $cmddef or another registry-type structure, which has an entry for each checking subroutine and also a handler for what to print/do for both canonicalize() as well as are_satisfiable(). Then those would only have to iterate over the list and call the subroutines.

For every checking subroutine, we could pass the whole of $rules, and a rule type-specific variable, e.g. [$positive_ids, $negative_ids] here, or as you already suggested below [$positive_rules, $negative_rules].
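
Roughly, I'm imagining something along these lines - all names and handler signatures here are just placeholders to illustrate the idea, not necessarily what will end up in the v1:

my $colocation_checks = [
    {
        # returns the list of affected rule ids
        check => \&check_services_count,
        # used by canonicalize(): fix up $rules in-place
        fix => sub {
            my ($rules, $ruleids) = @_;
            for my $ruleid (@$ruleids) {
                print "Drop colocation rule '$ruleid', because it does not have enough services defined.\n";
                delete $rules->{ids}->{$ruleid};
            }
        },
        # used by are_satisfiable(): only report, don't modify anything
        report => sub {
            my ($rules, $ruleids) = @_;
            print "Colocation rule '$_' does not have enough services defined.\n" for @$ruleids;
        },
    },
    # ... one entry per check (inner consistency, group consistency, ...),
    # kept in a fixed order, since later checks rely on the earlier fixes ...
];

sub canonicalize {
    my ($class, $rules, $groups, $services) = @_;

    for my $entry (@$colocation_checks) {
        my $result = $entry->{check}->($rules, $groups, $services);
        $entry->{fix}->($rules, $result);
    }
}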

One small thing I haven't explicitly mentioned here before: at least the check for mergeable positive colocation rules (`check_positive_intransitivity`) and the check for inconsistency between positive and negative colocation rules (`check_inner_consistency`) depend on each other somewhat, so the order of these checks stays important, and modifications to $rules must be written back before the next check handler is called.

I've written up an example of why this is necessary in a comment below `check_positive_intransitivity` and will document this more clearly in the v1.

The only semi-blocker here is that check_consistency_with_groups(...) also needs access to $groups and $services, but for the time being we could just pass those two to every subroutine and ignore them where they aren't needed.

Another approach could be to write any service group membership into $rules internally already and just work with the data from there, so that transitioning from "HA Groups" to "HA Location Rules" could go more smoothly in a future major version, if we want to do this in the end. Or we could already allow creating location rules explicitly, which are kept synchronized with HA groups.



Signed-off-by: Daniel Kral <d.k...@proxmox.com>
---
  debian/pve-ha-manager.install  |   1 +
  src/PVE/HA/Makefile            |   1 +
  src/PVE/HA/Rules/Colocation.pm | 391 +++++++++++++++++++++++++++++++++
  src/PVE/HA/Rules/Makefile      |   6 +
  src/PVE/HA/Tools.pm            |   6 +
  5 files changed, 405 insertions(+)
  create mode 100644 src/PVE/HA/Rules/Colocation.pm
  create mode 100644 src/PVE/HA/Rules/Makefile

diff --git a/debian/pve-ha-manager.install b/debian/pve-ha-manager.install
index 9bbd375..89f9144 100644
--- a/debian/pve-ha-manager.install
+++ b/debian/pve-ha-manager.install
@@ -33,6 +33,7 @@
  /usr/share/perl5/PVE/HA/Resources/PVECT.pm
  /usr/share/perl5/PVE/HA/Resources/PVEVM.pm
  /usr/share/perl5/PVE/HA/Rules.pm
+/usr/share/perl5/PVE/HA/Rules/Colocation.pm
  /usr/share/perl5/PVE/HA/Tools.pm
  /usr/share/perl5/PVE/HA/Usage.pm
  /usr/share/perl5/PVE/HA/Usage/Basic.pm
diff --git a/src/PVE/HA/Makefile b/src/PVE/HA/Makefile
index 489cbc0..e386cbf 100644
--- a/src/PVE/HA/Makefile
+++ b/src/PVE/HA/Makefile
@@ -8,6 +8,7 @@ install:
        install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/HA
        for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/HA/$$i; done
        make -C Resources install
+       make -C Rules install
        make -C Usage install
        make -C Env install
diff --git a/src/PVE/HA/Rules/Colocation.pm b/src/PVE/HA/Rules/Colocation.pm
new file mode 100644
index 0000000..808d48e
--- /dev/null
+++ b/src/PVE/HA/Rules/Colocation.pm
@@ -0,0 +1,391 @@
+package PVE::HA::Rules::Colocation;
+
+use strict;
+use warnings;
+
+use Data::Dumper;

leftover dumper ;)

+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::HA::Tools;
+
+use base qw(PVE::HA::Rules);
+
+sub type {
+    return 'colocation';
+}
+
+sub properties {
+    return {
+       services => get_standard_option('pve-ha-resource-id-list'),
+       affinity => {
+           description => "Describes whether the services are supposed to be kept 
on separate"
+               . " nodes, or are supposed to be kept together on the same 
node.",
+           type => 'string',
+           enum => ['separate', 'together'],
+           optional => 0,
+       },
+       strict => {
+           description => "Describes whether the colocation rule is mandatory or 
optional.",
+           type => 'boolean',
+           optional => 0,
+       },
+    }
+}
+
+sub options {
+    return {
+       services => { optional => 0 },
+       strict => { optional => 0 },
+       affinity => { optional => 0 },
+       comment => { optional => 1 },
+    };
+};
+
+sub decode_value {
+    my ($class, $type, $key, $value) = @_;
+
+    if ($key eq 'services') {
+       my $res = {};
+
+       for my $service (PVE::Tools::split_list($value)) {
+           if (PVE::HA::Tools::pve_verify_ha_resource_id($service)) {
+               $res->{$service} = 1;
+           }
+       }
+
+       return $res;
+    }
+
+    return $value;
+}
+
+sub encode_value {
+    my ($class, $type, $key, $value) = @_;
+
+    if ($key eq 'services') {
+       PVE::HA::Tools::pve_verify_ha_resource_id($_) for (keys %$value);
+
+       return join(',', keys %$value);
+    }
+
+    return $value;
+}
+
+sub foreach_colocation_rule {
+    my ($rules, $func, $opts) = @_;
+
+    my $my_opts = { map { $_ => $opts->{$_} } keys %$opts };

why? if the caller doesn't want $opts to be modified, they could just
pass in a copy (or you could require it to be passed by value instead of
by reference?).

there's only a single caller that does (introduced by a later patch) and
that one constructs the hash reference right at the call site, so unless
I am missing something this seems a bit overkill..

Right, I didn't think this through clearly enough - this could very well just write directly to the passed hash here.

Will change that in the v1!


+    $my_opts->{type} = 'colocation';
+
+    PVE::HA::Rules::foreach_service_rule($rules, $func, $my_opts);
+}
+
+sub split_colocation_rules {
+    my ($rules) = @_;
+
+    my $positive_ruleids = [];
+    my $negative_ruleids = [];
+
+    foreach_colocation_rule($rules, sub {
+       my ($rule, $ruleid) = @_;
+
+       my $ruleid_set = $rule->{affinity} eq 'together' ? $positive_ruleids : $negative_ruleids;
+       push @$ruleid_set, $ruleid;
+    });
+
+    return ($positive_ruleids, $negative_ruleids);
+}
+
+=head3 check_services_count($rules)
+
+Returns a list of conflicts caused by colocation rules, which do not have
+enough services in them, defined in C<$rules>.
+
+If there are no conflicts, the returned list is empty.
+
+=cut
+
+sub check_services_count {
+    my ($rules) = @_;
+
+    my $conflicts = [];
+
+    foreach_colocation_rule($rules, sub {
+       my ($rule, $ruleid) = @_;
+
+       push @$conflicts, $ruleid if (scalar(keys %{$rule->{services}}) < 2);
+    });
+
+    return $conflicts;
+}

is this really an issue? a colocation rule with a single service is just
a nop? there's currently no cleanup AFAICT if a resource is removed, but

You're right, AFAICS those are a noop when selecting the service node. I guess I was a little pedantic / overprotective here about which rules make sense in general instead of what the algorithm does in the end.

And good point about handling the case when resources are removed - adding that to delete_service_from_config goes right on my TODO list for the v1!
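
Something like the following is roughly what I have in mind for that cleanup - just a sketch, the helper name and where exactly it gets hooked into delete_service_from_config are still open:

sub remove_service_from_colocation_rules {
    my ($rules, $sid) = @_;

    for my $ruleid (keys %{$rules->{ids}}) {
        my $rule_services = $rules->{ids}->{$ruleid}->{services};
        next if !defined($rule_services->{$sid});

        delete $rule_services->{$sid};

        # a rule with fewer than two services left is a no-op anyway
        delete $rules->{ids}->{$ruleid} if scalar(keys %$rule_services) < 2;
    }
}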

if we add that part (we maybe should?) then one can easily end up in a
situation where a rule temporarily contains a single or no service?

Hm, yes, especially if we add pools/tags at a later point to select services for the rule, then this could happen very easily. But as you already mentioned, those two cases would be noops too.

Nevertheless, should we drop this check? I think it could help users identify that some rules might not do what they intended and give them a reason why, i.e. that there's only one service in there, but at the same time it could be a little noisy if there are a lot of affected rules.


+
+=head3 check_positive_intransitivity($rules)
+
+Returns a list of conflicts caused by transitive positive colocation rules
+defined in C<$rules>.
+
+Transitive positive colocation rules exist if there are at least two positive
+colocation rules with the same strictness, which put at least the same two
+services in relation. This means that these rules can be merged together.
+
+If there are no conflicts, the returned list is empty.

The terminology here is quite confusing - conflict meaning that two rules
are "transitive" and thus mergeable (which is good, because it makes
things easier to handle?) is quite weird, as "conflict" is a rather
negative term..

there's only a single call site in the same module, maybe we could just
rename this into "find_mergeable_positive_ruleids", similar to the
variable where the result is stored?

Yeah, I was probably too keen on the `$conflict = check_something(...)` pattern here, but it would be much more readable with a simpler name - I'll change that for the v1!

-----

Ad why: I'll also add some documentation about the rationale why this is needed in the first place.

The main reason is that the later rule check 'check_inner_consistency' depends on the positive colocation rules having been merged already, as it assumes that each positive colocation rule contains all of the services that are positively colocated with each other. If that weren't so, it wouldn't detect that the following three rules are inconsistent with each other:

colocation: stick-together1
    services vm:101,vm:104
    affinity together
    strict 1

colocation: stick-together2
    services vm:104,vm:102
    affinity together
    strict 1

colocation: keep-apart
    services vm:101,vm:102,vm:103
    affinity separate
    strict 1

This also reduces the complexity of the logic in 'check_inner_consistency' a little, as it doesn't have to handle this special case: 'stick-together1' and 'stick-together2' are already merged into one rule, and it is easily apparent that vm 101 and vm 102 cannot be colocated and non-colocated at the same time.

-----

Also, I was curious how that would work out for the case where a negative colocation relation between three services is split into three pairwise rules (essentially a cyclic dependency). This should in theory have the same semantics as the above rule set:

colocation: stick-together1
    services vm:101,vm:104
    affinity together
    strict 1

colocation: stick-together2
    services vm:104,vm:102
    affinity together
    strict 1

colocation: very-lonely-services1
    services vm:101,vm:102
    affinity separate
    strict 1

colocation: very-lonely-services2
    services vm:102,vm:103
    affinity separate
    strict 1

colocation: very-lonely-services3
    services vm:101,vm:103
    affinity separate
    strict 1

Without the merge of positive rules, 'check_inner_consistency' would again not detect the inconsistency here. But with the merge correctly applied before checking the consistency, this would be resolved and the effective rule set would be:

colocation: very-lonely-services2
    services vm:102,vm:103
    affinity separate
    strict 1

colocation: very-lonely-services3
    services vm:101,vm:103
    affinity separate
    strict 1

It could be argued that the negative colocation rules should be merged in a similar manner here, as there's now an "effective" difference in the semantics of the two rule sets above: the negative colocation rules between vm 101 and vm 103 and between vm 102 and vm 103 remain.

What do you think?


+
+=cut
+
+sub check_positive_intransitivity {
+    my ($rules) = @_;
+
+    my $conflicts = {};
+    my ($positive_ruleids) = split_colocation_rules($rules);
+
+    while (my $outerid = shift(@$positive_ruleids)) {
+       my $outer = $rules->{ids}->{$outerid};
+
+       for my $innerid (@$positive_ruleids) {

so this is in practice a sort of "optimized" loop over all pairs of
rules - iterating over the positive rules twice, but skipping pairs that
were already visited by virtue of the shift on the outer loop..

might be worth a short note - together with the $inner and $outer
terminology, I was a bit confused at first..

Sorry, I'll make that clearer in a comment above or with better naming of the variables!

The `while(shift ...)` was motivated by not having to prune duplicates afterwards and of course not having to check the same rules again, but lacks a little in readability here.
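
For comparison, the same unique-pair iteration spelled out with explicit indices, which might read a bit clearer (just a sketch, I haven't decided yet which variant to keep):

for my $i (0 .. $#$positive_ruleids) {
    for my $j ($i + 1 .. $#$positive_ruleids) {
        my $outerid = $positive_ruleids->[$i];
        my $innerid = $positive_ruleids->[$j];
        # ... same strictness and disjointness checks as below ...
    }
}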


+           my $inner = $rules->{ids}->{$innerid};
+
+           next if $outerid eq $innerid;
+           next if $outer->{strict} != $inner->{strict};
+           next if PVE::HA::Tools::is_disjoint($outer->{services}, $inner->{services});
+
+           push @{$conflicts->{$outerid}}, $innerid;
+       }
+    }
+
+    return $conflicts;
+}
+
+=head3 check_inner_consistency($rules)
+
+Returns a list of conflicts caused by inconsistencies between positive and
+negative colocation rules defined in C<$rules>.
+
+Inner inconsistent colocation rules exist if there are at least the same two
+services in a positive and a negative colocation relation, which is an
+impossible constraint as they are opposites of each other.
+
+If there are no conflicts, the returned list is empty.

here the conflicts and check terminology makes sense - we are checking
an invariant that must be satisfied after all :)

+
+=cut
+
+sub check_inner_consistency {

but 'inner' is a weird term since this is consistency between rules?

it basically checks that no pair of services should both be colocated
and not be colocated at the same time, but not sure how to encode that
concisely..

Hm right, 'intra' wouldn't make this any simpler. I'll come up with a better name for the next revision!


+    my ($rules) = @_;
+
+    my $conflicts = [];
+    my ($positive_ruleids, $negative_ruleids) = split_colocation_rules($rules);
+
+    for my $outerid (@$positive_ruleids) {
+       my $outer = $rules->{ids}->{$outerid}->{services};

s/outer/positive ?

ACK for this and all the following instances ;)


+
+       for my $innerid (@$negative_ruleids) {
+           my $inner = $rules->{ids}->{$innerid}->{services};

s/inner/negative ?

+
+           my $intersection = PVE::HA::Tools::intersect($outer, $inner);
+           next if scalar(keys %$intersection < 2);

the keys there is not needed, but the parentheses are in the wrong place
instead ;) it does work by accident though, because the result of keys
will be coerced to a scalar anyway, so you get the result of your
comparison wrapped by another call to scalar, so you end up with either
1 or '' depending on whether the check was true or false..

Oh, what a lucky coincidence ;)! Thanks for catching that, I'll fix it!
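
For the record, the check was meant to be:

next if scalar(keys %$intersection) < 2;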


+
+           push @$conflicts, [$outerid, $innerid];
+       }
+    }
+
+    return $conflicts;
+}
+
+=head3 check_positive_group_consistency(...)
+
+Returns a list of conflicts caused by inconsistencies between positive
+colocation rules defined in C<$rules> and node restrictions defined in
+C<$groups> and C<$service>.

services?

ACK.


+
+A positive colocation rule inconsistency with groups exists, if at least two
+services in a positive colocation rule are restricted to disjoint sets of
+nodes, i.e. they are in restricted HA groups, which have a disjoint set of
+nodes.
+
+If there are no conflicts, the returned list is empty.
+
+=cut
+
+sub check_positive_group_consistency {
+    my ($rules, $groups, $services, $positive_ruleids, $conflicts) = @_;

this could just get $positive_rules (filtered via grep) instead?

+
+    for my $ruleid (@$positive_ruleids) {
+       my $rule_services = $rules->{ids}->{$ruleid}->{services};

and this could be

while (my ($ruleid, $rule) = each %$positive_rules) {
   my $nodes;
   ..
}

Thanks for the suggestion here and above, will use that for the v1!


+       my $nodes;
+
+       for my $sid (keys %$rule_services) {
+           my $groupid = $services->{$sid}->{group};
+           return if !$groupid;

should this really be a return?

Oops, no, that shouldn't be a return but a next, obviously. I forgot to change them back after I moved the code back out of a handler (since there's minimal duplicated code with the next subroutine). I'll change this and all the other instances for the v1.


+
+           my $group = $groups->{ids}->{$groupid};
+           return if !$group;
+           return if !$group->{restricted};

same here?

+
+           $nodes = { map { $_ => 1 } keys %{$group->{nodes}} } if !defined($nodes);

isn't $group->{nodes} already a hash set of the desired format? so this could be

$nodes = { $group->{nodes}->%* };

?

Right, yes it is!

I was still somewhat confused about what the ->%* operation does exactly - or rather didn't really know it existed before - but now I've finally read up on postfix dereferencing ;).
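
So the shallow copy of the group's node hash set then simply becomes:

# copy the group's node hash set (postfix dereference of the hashref)
$nodes = { $group->{nodes}->%* } if !defined($nodes);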


+           $nodes = PVE::HA::Tools::intersect($nodes, $group->{nodes});

could add a break here with the same condition as below?

Right - for this and the same comment on `check_negative_group_consistency`, I'll add the early break and definitely also a comment above it to make clear why. Thanks for the suggestion!
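
I.e. something along these lines right after the intersection, if I understand you correctly (sketch):

$nodes = PVE::HA::Tools::intersect($nodes, $group->{nodes});

# the intersection can only shrink, so the rule is already unsatisfiable as
# soon as it becomes empty - no need to look at the remaining services
last if scalar(keys %$nodes) < 1;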


+       }
+
+       if (defined($nodes) && scalar keys %$nodes < 1) {
+           push @$conflicts, ['positive', $ruleid];
+       }
+    }
+}
+
+=head3 check_negative_group_consistency(...)
+
+Returns a list of conflicts caused by inconsistencies between negative
+colocation rules defined in C<$rules> and node restrictions defined in
+C<$groups> and C<$service>.
+
+A negative colocation rule inconsistency with groups exists, if at least two
+services in a negative colocation rule are restricted to fewer nodes in total
+than there are services in the rule, i.e. they are in restricted HA groups, where
+the union of all restricted node sets has fewer elements than restricted services.
+
+If there are no conflicts, the returned list is empty.
+
+=cut
+
+sub check_negative_group_consistency {
+    my ($rules, $groups, $services, $negative_ruleids, $conflicts) = @_;

same question here

+
+    for my $ruleid (@$negative_ruleids) {
+       my $rule_services = $rules->{ids}->{$ruleid}->{services};
+       my $restricted_services = 0;
+       my $restricted_nodes;
+
+       for my $sid (keys %$rule_services) {
+           my $groupid = $services->{$sid}->{group};
+           return if !$groupid;

same question as above ;)

+
+           my $group = $groups->{ids}->{$groupid};
+           return if !$group;
+           return if !$group->{restricted};

same here

+
+           $restricted_services++;
+
+           $restricted_nodes = {} if !defined($restricted_nodes);
+           $restricted_nodes = PVE::HA::Tools::union($restricted_nodes, $group->{nodes});

here as well - if restricted_services > restricted_nodes, haven't we
already found a violation of the invariant and should break, even if
another service would then be added in the next iteration that can run
on 5 more new nodes..

Thanks for catching this, will do that as already said in the above comment!
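
Sketch of what I'd add in the loop body here, analogous to the positive case:

$restricted_nodes = PVE::HA::Tools::union($restricted_nodes, $group->{nodes});

# more restricted services than nodes in their union is already a violation,
# adding further services cannot fix that for the ones seen so far
last if scalar(keys %$restricted_nodes) < $restricted_services;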


+       }
+
+       if (defined($restricted_nodes)
+           && scalar keys %$restricted_nodes < $restricted_services) {
+           push @$conflicts, ['negative', $ruleid];
+       }
+    }
+}
+
+sub check_consistency_with_groups {
+    my ($rules, $groups, $services) = @_;
+
+    my $conflicts = [];
+    my ($positive_ruleids, $negative_ruleids) = split_colocation_rules($rules);
+
+    check_positive_group_consistency($rules, $groups, $services, $positive_ruleids, $conflicts);
+    check_negative_group_consistency($rules, $groups, $services, $negative_ruleids, $conflicts);
+
+    return $conflicts;
+}
+
+sub canonicalize {
+    my ($class, $rules, $groups, $services) = @_;

should this note that it will modify $rules in-place? this is only
called by PVE::HA::Rules::checked_config which also does not note that
and could be interpreted as "config is checked now" ;)

Yes, it should really be pointed out by checked_config, but it doesn't hurt at all to document it for both. checked_config could also have a better name.


+
+    my $illdefined_ruleids = check_services_count($rules);
+
+    for my $ruleid (@$illdefined_ruleids) {
+       print "Drop colocation rule '$ruleid', because it does not have enough 
services defined.\n";
+
+       delete $rules->{ids}->{$ruleid};
+    }
+
+    my $mergeable_positive_ruleids = check_positive_intransitivity($rules);
+
+    for my $outerid (sort keys %$mergeable_positive_ruleids) {
+       my $outer = $rules->{ids}->{$outerid};
+       my $innerids = $mergeable_positive_ruleids->{$outerid};
+
+       for my $innerid (@$innerids) {
+           my $inner = $rules->{ids}->{$innerid};
+
+           $outer->{services}->{$_} = 1 for (keys %{$inner->{services}});
+
+           print "Merge services of positive colocation rule '$innerid' into 
positive colocation"
+               . " rule '$outerid', because they share at least one 
service.\n";

this is a bit confusing because it modifies the rule while continuing to
refer to it using the old name afterwards.. should we merge them and
give them a new name?

Good call, I would go for just appending the name, but depending on how many rules are affected this could get rather long... We could also just use some temporary name at each new merge, but this could be harder to follow if there are more than two merge actions.

I think I'll prefer just appending it, since it seems that Perl can handle hash keys of up to 2**31 characters "fine" anyway :P, and hope for the best that there won't be too many affected rules for users so that the key doesn't grow that long. Or what do you think?
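
E.g. roughly like this - ignoring any section id length/character limits for the moment, and the naming is only an illustration:

# after merging the services into $outer, re-key the rule under a combined id
my $mergedid = join('_', $outerid, @$innerids);
$rules->{ids}->{$mergedid} = delete $rules->{ids}->{$outerid};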


+
+           delete $rules->{ids}->{$innerid};
+       }
+    }
+
+    my $inner_conflicts = check_inner_consistency($rules);
+
+    for my $conflict (@$inner_conflicts) {
+       my ($positiveid, $negativeid) = @$conflict;
+
+       print "Drop positive colocation rule '$positiveid' and negative colocation 
rule"
+           . " '$negativeid', because they share two or more services.\n";
+
+       delete $rules->{ids}->{$positiveid};
+       delete $rules->{ids}->{$negativeid};
+    }
+
+    my $group_conflicts = check_consistency_with_groups($rules, $groups, $services);
+
+    for my $conflict (@$group_conflicts) {
+       my ($type, $ruleid) = @$conflict;
+
+       if ($type eq 'positive') {
+           print "Drop positive colocation rule '$ruleid', because two or more 
services are"
+               . " restricted to different nodes.\n";
+       } elsif ($type eq 'negative') {
+           print "Drop negative colocation rule '$ruleid', because two or more 
services are"
+               . " restricted to less nodes than services.\n";
+       } else {
+           die "Invalid group conflict type $type\n";
+       }
+
+       delete $rules->{ids}->{$ruleid};
+    }
+}
+
+# TODO This will be used to verify modifications to the rules config over the API
+sub are_satisfiable {

this is basically canonicalize, but
- without deleting rules
- without the transitivity check
- with slightly adapted messages

should they be combined so that we have roughly the same logic when
doing changes via the API and when loading the rules for operations?

That would be much better, yes, as it's easy to miss adding a new check to both, and it could become cumbersome if more checks are needed in the future.

If there's nothing speaking against it, I would go for the structure I mentioned in the first inline comment above, so that each check routine and its handlers for canonicalize() and are_satisfiable() live closer together.


+    my ($class, $rules, $groups, $services) = @_;
+
+    my $illdefined_ruleids = check_services_count($rules);
+
+    for my $ruleid (@$illdefined_ruleids) {
+       print "Colocation rule '$ruleid' does not have enough services 
defined.\n";
+    }
+
+    my $inner_conflicts = check_inner_consistency($rules);
+
+    for my $conflict (@$inner_conflicts) {
+       my ($positiveid, $negativeid) = @$conflict;
+
+       print "Positive colocation rule '$positiveid' is inconsistent with negative 
colocation rule"
+           . " '$negativeid', because they share two or more services between 
them.\n";
+    }
+
+    my $group_conflicts = check_consistency_with_groups($rules, $groups, $services);
+
+    for my $conflict (@$group_conflicts) {
+       my ($type, $ruleid) = @$conflict;
+
+       if ($type eq 'positive') {
+           print "Positive colocation rule '$ruleid' is unapplicable, because two 
or more services"
+               . " are restricted to different nodes.\n";
+       } elsif ($type eq 'negative') {
+           print "Negative colocation rule '$ruleid' is unapplicable, because two 
or more services"
+               . " are restricted to less nodes than services.\n";
+       } else {
+           die "Invalid group conflict type $type\n";
+       }
+    }
+
+    if (scalar(@$inner_conflicts) || scalar(@$group_conflicts)) {
+       return 0;
+    }
+
+    return 1;
+}
+
+1;
diff --git a/src/PVE/HA/Rules/Makefile b/src/PVE/HA/Rules/Makefile
new file mode 100644
index 0000000..8cb91ac
--- /dev/null
+++ b/src/PVE/HA/Rules/Makefile
@@ -0,0 +1,6 @@
+SOURCES=Colocation.pm
+
+.PHONY: install
+install:
+       install -d -m 0755 ${DESTDIR}${PERLDIR}/PVE/HA/Rules
+       for i in ${SOURCES}; do install -D -m 0644 $$i ${DESTDIR}${PERLDIR}/PVE/HA/Rules/$$i; done
diff --git a/src/PVE/HA/Tools.pm b/src/PVE/HA/Tools.pm
index 35107c9..52251d7 100644
--- a/src/PVE/HA/Tools.pm
+++ b/src/PVE/HA/Tools.pm
@@ -46,6 +46,12 @@ PVE::JSONSchema::register_standard_option('pve-ha-resource-id', {
      type => 'string', format => 'pve-ha-resource-id',
  });
+PVE::JSONSchema::register_standard_option('pve-ha-resource-id-list', {
+    description => "List of HA resource IDs.",
+    typetext => "<type>:<name>{,<type>:<name>}*",
+    type => 'string', format => 'pve-ha-resource-id-list',
+});
+
  PVE::JSONSchema::register_format('pve-ha-resource-or-vm-id', \&pve_verify_ha_resource_or_vm_id);
  sub pve_verify_ha_resource_or_vm_id {
      my ($sid, $noerr) = @_;
--
2.39.5



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel