On 11.04.25 at 13:04, Daniel Kral wrote:
> On 4/3/25 14:16, Fabian Grünbichler wrote:
>> On March 25, 2025 4:12 pm, Daniel Kral wrote:
>>> +sub check_services_count {
>>> +    my ($rules) = @_;
>>> +
>>> +    my $conflicts = [];
>>> +
>>> +    foreach_colocation_rule($rules, sub {
>>> +        my ($rule, $ruleid) = @_;
>>> +
>>> +        push @$conflicts, $ruleid if (scalar(keys %{$rule->{services}}) < 2);
>>> +    });
>>> +
>>> +    return $conflicts;
>>> +}
>>
>> is this really an issue? a colocation rule with a single service is just
>> a nop? there's currently no cleanup AFAICT if a resource is removed, but
>
> You're right, AFAICS those are noops when selecting the service node. I
> guess I was a little pedantic / overprotective here about which rules
> make sense in general instead of what the algorithm does in the end.
>
> And good point about handling when resources are removed, adding that to
> delete_service_from_config goes right on my TODO list for the v1!
>
>> if we add that part (we maybe should?) then one can easily end up in a
>> situation where a rule temporarily contains a single or no service?
>
> Hm, yes, especially if we add pools/tags at a later point to select
> services for the rule, then this could happen very easily. But as you
> already mentioned, those two cases would be noops too.
>
> Nevertheless, should we drop this? I think it could benefit users in
> identifying that some rules might not do what they wanted and give
> them a reason why, i.e. there's only one service in there, but at the
> same time it could be a little noisy if there are a lot of affected rules.

I'd still keep rules that end up with only one service around, but maybe
(temporarily) disable them. And/or we could also add a special "no-effect"
marker like the "conflict" one proposed in my other answer? Then a user
could enable/make the rule effective again by adding a new service to it.
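Roughly what I have in mind, as an untested sketch - the 'no-effect' state
and where it gets stored are made up here, only foreach_colocation_rule()
and the services hash are taken from your patch:

sub mark_ineffective_rules {
    my ($rules) = @_;

    foreach_colocation_rule($rules, sub {
        my ($rule, $ruleid) = @_;

        # a rule with fewer than two services cannot influence placement,
        # so flag it instead of treating it as a conflict
        $rule->{state} = 'no-effect' if scalar(keys %{$rule->{services}}) < 2;
    });
}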
>>> +
>>> +=head3 check_positive_intransitivity($rules)
>>> +
>>> +Returns a list of conflicts caused by transitive positive colocation rules
>>> +defined in C<$rules>.
>>> +
>>> +Transitive positive colocation rules exist, if there are at least two positive
>>> +colocation rules with the same strictness, which put at least the same two
>>> +services in relation. This means, that these rules can be merged together.
>>> +
>>> +If there are no conflicts, the returned list is empty.
>>
>> The terminology here is quite confusing - conflict meaning that two rules
>> are "transitive" and thus mergeable (which is good, cause it makes
>> things easier to handle?) is quite weird, as "conflict" is a rather
>> negative term..
>>
>> there's only a single call site in the same module, maybe we could just
>> rename this into "find_mergeable_positive_ruleids", similar to the
>> variable where the result is stored?
>
> Yeah, I was probably too keen on the `$conflict = check_something(...)`
> pattern here, but it would be much more readable with a simpler name,
> I'll change that for the v1!
>
> -----
>
> Ad why: I'll also add some documentation about the rationale why this is
> needed in the first place.
>
> The main reason is that the later rule check 'check_inner_consistency'
> depends on the positive colocation rules having been merged already, as
> it assumes that each positive colocation rule contains all of the
> services which are positively colocated with each other. If it weren't
> so, it wouldn't detect that the following three rules are inconsistent
> with each other:
>
> colocation: stick-together1
>     services vm:101,vm:104
>     affinity together
>     strict 1
>
> colocation: stick-together2
>     services vm:104,vm:102
>     affinity together
>     strict 1
>
> colocation: keep-apart
>     services vm:101,vm:102,vm:103
>     affinity separate
>     strict 1
>
> This also reduces the complexity of the logic in 'check_inner_consistency'
> a little, as it doesn't have to handle this special case: 'stick-together1'
> and 'stick-together2' are already merged into one rule, and it is then
> easily apparent that vm 101 and vm 102 cannot be colocated and
> non-colocated at the same time.
>
> -----
>
> Also, I was curious how that would work out for the case where a negative
> colocation rule between three services is split into three pairwise rules
> (essentially a dependency cycle). This should in theory have the same
> semantics as the above rule set:
>
> colocation: stick-together1
>     services vm:101,vm:104
>     affinity together
>     strict 1
>
> colocation: stick-together2
>     services vm:104,vm:102
>     affinity together
>     strict 1
>
> colocation: very-lonely-services1
>     services vm:101,vm:102
>     affinity separate
>     strict 1
>
> colocation: very-lonely-services2
>     services vm:102,vm:103
>     affinity separate
>     strict 1
>
> colocation: very-lonely-services3
>     services vm:101,vm:103
>     affinity separate
>     strict 1
>
> Without the merge of positive rules, 'check_inner_consistency' would
> again not detect the inconsistency here. But with the merge correctly
> applied before checking the consistency, this would be resolved and the
> effective rule set would be:

I suppose the effective rule set would still also contain the two
'together' rules, or?
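I.e. I'd have pictured the merge as an internal step for the consistency
check only, producing something like the following (the rule name is made
up, the service set is just the union of the two 'together' rules), while
stick-together1 and stick-together2 themselves stay in the config:

colocation: stick-together-merged
    services vm:101,vm:102,vm:104
    affinity together
    strict 1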
>
> colocation: very-lonely-services2
>     services vm:102,vm:103
>     affinity separate
>     strict 1
>
> colocation: very-lonely-services3
>     services vm:101,vm:103
>     affinity separate
>     strict 1
>
> It could be argued that the negative colocation rules should be merged
> in a similar manner here, as there's now an "effective" difference in
> the semantics of the above rule sets: the negative colocation rules
> between vm 101 and vm 103 and between vm 102 and vm 103 remain.
>
> What do you think?

I don't think there's a particular need to also merge negative rules
between services (when they form a complete graph). It won't make a
difference if there are no conflicts with positive rules, and in the edge
cases where there are conflicts (which usually get caught while editing
the rules), it's better to drop fewer rules, so not merging is an
advantage. Or do you have a particular advantage in favor of merging in
mind?
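If we ever do want to merge them, detecting that a set of pairwise
'separate' rules really covers all pairs would need something along the
lines of the sketch below (untested, helper name made up, only the
services hash matches your patch) - which to me is another argument for
keeping things simple and not merging:

sub negative_rules_form_complete_graph {
    my (@rules) = @_;

    my %services;
    my %covered_pairs;

    for my $rule (@rules) {
        my @sids = sort keys %{$rule->{services}};
        $services{$_} = 1 for @sids;

        # record every pair of services the rule keeps apart
        for my $i (0 .. $#sids) {
            for my $j ($i + 1 .. $#sids) {
                $covered_pairs{"$sids[$i]/$sids[$j]"} = 1;
            }
        }
    }

    # a complete graph over n services has n * (n - 1) / 2 edges
    my $n = scalar(keys %services);

    return scalar(keys %covered_pairs) == $n * ($n - 1) / 2;
}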