>>I currently implemented the colocation rules to put a constraint on
>>which nodes the manager can select from for the to-be-migrated
>>service.

>>So if users use the static load scheduler (and the basic / service
>>count scheduler, for that matter), the colocation rules just make sure
>>that no recovery node is selected which contradicts the colocation
>>rules. So the TOPSIS algorithm isn't changed at all.

Ah OK, got it, so it's a hard constraint (MUST) filtering the target
nodes.
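
So in other words, roughly this, I guess (a toy Python sketch with a
hypothetical data model, just to illustrate; the real manager is Perl):

def filter_candidates(online_nodes, service, placements, rules):
    """Drop nodes that would violate a hard (MUST) colocation rule."""
    allowed = set(online_nodes)
    for rule in rules:
        if service not in rule["services"] or not rule["strict"]:
            continue  # only hard rules filter the target nodes
        peers = {placements[s] for s in rule["services"]
                 if s != service and s in placements}
        if rule["affinity"] == "together" and peers:
            allowed &= peers  # must land next to its peers
        elif rule["affinity"] == "separate":
            allowed -= peers  # must avoid its peers' nodes
    return allowed

# vm:101 must stay apart from vm:102, which runs on node2:
rules = [{"services": {"vm:101", "vm:102"},
          "affinity": "separate", "strict": True}]
print(filter_candidates(["node1", "node2", "node3"], "vm:101",
                        {"vm:102": "node2"}, rules))
# -> {'node1', 'node3'}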


>>There are two things that should/could be changed in the future
>>(besides the many future ideas that I pointed out already), which are

>>- (1) the schedulers will still consider all online nodes, i.e. even
>>though HA groups and/or colocation rules restrict the allowed nodes in
>>the end, the calculation is done for all nodes, which could be
>>significant for larger clusters, and

>>- (2) the services are (generally) currently recovered one-by-one in
>>a best-fit fashion, i.e. there's no ordering by the services' needed
>>resources, etc. There could be some edge cases (e.g. think about a
>>failing node with a bunch of services to be kept together; these
>>should now be migrated to the same node, if possible, or put on the
>>minimum number of nodes), where the algorithm could find better
>>solutions if it either orders the to-be-recovered services, and/or the
>>utilization scheduler has knowledge about the 'keep together'
>>colocations and considers these (and all subsets) as a single service.
>>
>>For the latter, the complexity explodes a bit and is harder to test
>>for, which is why I've gone for the current implementation, as it also
>>reduces the burden on users to think about what could happen with a
>>specific set of rules and already allows the notion of MUST/SHOULD.
>>This gives enough flexibility to improve the decision making of the
>>scheduler in the future.
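
Just to illustrate idea (2): something roughly like this, I'd imagine
(toy Python sketch, hypothetical data model, not what the series does):

def recovery_order(services, demand, together_sets):
    """Treat keep-together sets as one unit, place biggest demand first."""
    groups, seen = [], set()
    for keep in together_sets:  # assumed disjoint, for simplicity
        group = keep & set(services)
        if group:
            groups.append(group)
            seen |= group
    groups += [{s} for s in services if s not in seen]
    return sorted(groups, key=lambda g: sum(demand[s] for s in g),
                  reverse=True)

failed = ["vm:101", "vm:102", "vm:103"]
demand = {"vm:101": 8, "vm:102": 2, "vm:103": 4}  # e.g. GiB of memory
print(recovery_order(failed, demand, [{"vm:101", "vm:102"}]))
# -> [{'vm:101', 'vm:102'}, {'vm:103'}]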

Yes, a soft constraint (SHOULD) is indeed not so easy.
I remember having done some tests, putting the number of conflicting
constraints per VM for each host into the TOPSIS matrix, and migrating
the VMs with the most constraints first.
The results were not too bad, but this needs to be tested at scale.
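
Roughly the shape of it, from memory (toy Python sketch, hypothetical
data model; this is only the ordering part, the per-host conflict
counts then went in as an extra TOPSIS criterion):

def conflicts(vm, node, soft_rules, placements):
    """Count soft (SHOULD) rules broken by placing vm on node."""
    broken = 0
    for rule in soft_rules:
        if vm not in rule["services"]:
            continue
        peers = {placements[s] for s in rule["services"]
                 if s != vm and s in placements}
        if rule["affinity"] == "together" and peers and node not in peers:
            broken += 1
        elif rule["affinity"] == "separate" and node in peers:
            broken += 1
    return broken

def migration_order(vms, nodes, soft_rules, placements):
    """Most constrained VM first: fewest conflict-free target nodes."""
    options = {vm: sum(conflicts(vm, n, soft_rules, placements) == 0
                       for n in nodes)
               for vm in vms}
    return sorted(vms, key=lambda vm: options[vm])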

Hard constraints are already a good step. (They should work for 90% of
people, who don't have 10000 constraints mixed together.)


On 4/1/25 03:50, DERUMIER, Alexandre wrote:
> Small feature request from students && customers: a lot of them are
> asking to be able to use vm tags in the colocation/affinity rules

>>Good idea! We were thinking about this too and I forgot to add it to
>>the list, thanks for bringing it up again!

>>Yes, the idea would be to make pools and tags available as selectors
>>for rules here, so that the changes can be made rather dynamic by just
>>adding a tag to a service.

could be perfect :)
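
Something as simple as this could already do it, I think (toy Python
sketch; the 'tag:' selector syntax is just made up):

def resolve_selector(selector, service_tags):
    """Expand a 'tag:<name>' selector to the services carrying it."""
    if selector.startswith("tag:"):
        wanted = selector[len("tag:"):]
        return {sid for sid, tags in service_tags.items()
                if wanted in tags}
    return {selector}  # plain service IDs pass through unchanged

service_tags = {"vm:101": {"db"}, "vm:102": {"db"}, "vm:103": {"web"}}
print(resolve_selector("tag:db", service_tags))
# -> {'vm:101', 'vm:102'}; tagging vm:103 with 'db' would add it to
# the rule without touching the rule itself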

>>The only thing we have to consider here is that HA rules have some
>>verification phase and invalid rules will be dropped or modified to
>>make them applicable. Also, these external changes must be identified
>>somehow in the HA stack, as I want to keep the number of runs through
>>the verification code to a minimum, i.e. only when the configuration
>>is changed by the user. But that will be a discussion for another
>>series ;).

yes sure!
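
For what it's worth, a digest check on the rules config is usually
enough to skip the re-verification (toy Python sketch, hypothetical
names):

import hashlib

class RuleCache:
    """Re-run the rule verification only when the config changed."""
    def __init__(self, verify):
        self._verify = verify  # the pass that drops/adjusts invalid rules
        self._digest = None
        self._rules = []

    def get(self, raw_config):
        digest = hashlib.sha1(raw_config.encode()).hexdigest()
        if digest != self._digest:  # user edited the configuration
            self._rules = self._verify(raw_config)
            self._digest = digest
        return self._rules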


BTW, another improvement could be a hard constraint on storage
availability, as currently the HA stack moves the VM blindly, tries to
start it, then moves the VM to another node where the storage is
available. The only workaround is to create an HA server group, but
this could be an improvement.

Same for the number of cores available on the host (the host's number
of cores must be greater than the VM's cores).
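
i.e. something like this before any scoring happens (toy Python sketch,
hypothetical data model):

def node_can_run(node, vm):
    """Hard filter: all needed storages present and enough cores."""
    return (vm["storages"] <= node["storages"]  # subset check
            and node["cores"] > vm["cores"])

nodes = [{"name": "node1", "storages": {"ceph"},        "cores": 16},
         {"name": "node2", "storages": {"ceph", "nfs"}, "cores": 4}]
vm = {"storages": {"ceph"}, "cores": 8}
print([n["name"] for n in nodes if node_can_run(n, vm)])
# -> ['node1']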


I'll try to take time to follow && test your patches!

Alexandre


