Hi Janne

On 20.11.24 at 11:30, Janne Johansson wrote:

This post seems to show that, except they have their root named "nvme"
and they split on rack and not dc, but that is not important.

https://unix.stackexchange.com/questions/781250/ceph-crush-rules-explanation-for-multiroom-racks-setup

This is indeed a good example, thanks.

Let me put some thoughts/questions here:


step choose firstn 2 type rack

This chooses 2 racks out of all available racks. As there are only 2 racks available, both are chosen.


step chooseleaf firstn 2 type host

For each rack selected in the previous step, 2 hosts are chosen. But because the action is "chooseleaf", it is not the hosts themselves that are picked, but one (pseudo-randomly selected?) OSD in each of the 2 chosen hosts.

In the end we have 4 OSDs in 4 different hosts, 2 in each rack.

Is this understanding correct?
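
For reference, this is roughly how I picture the complete rule in the decompiled CRUSH map. The rule name, the id, and the root bucket name "default" are my own placeholders, not taken from that post:

        rule replicated_2racks {
                id 1
                type replicated
                min_size 4                              # see my question below
                max_size 4                              # see my question below
                step take default                       # or whatever the root bucket is called
                step choose firstn 2 type rack          # pick 2 racks
                step chooseleaf firstn 2 type host      # 2 hosts per rack, one OSD each
                step emit
        }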


Shouldn't we note this one additionally:

        min_size 4
        max_size 4

Reason: if we wanted to place more or fewer than 4 replicas, the rule wouldn't work. Or what would happen if we didn't specify min/max_size? That should lead to an error if the pool is e.g. size=5, shouldn't it?
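
I suppose one way to check this myself would be to run the rule through crushtool against the compiled map and ask for 5 replicas (rule number 1 and the file name are just placeholders):

        ceph osd getcrushmap -o crushmap.bin
        crushtool -i crushmap.bin --test --rule 1 --num-rep 5 --show-mappings
        # only list the inputs where the rule returns fewer OSDs than requested:
        crushtool -i crushmap.bin --test --rule 1 --num-rep 5 --show-bad-mappings

That should at least show whether the rule can produce 5 OSDs at all with this hierarchy.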


One last question: what happens if we edit a CRUSH map after a pool has been created on it? In my understanding, this leads to massive data shifting so that the placements comply with the new rules. Is that right?
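
In case it matters, this is the workflow I would use to edit the map and dry-run a rule before injecting it (file names and the rule/replica numbers are placeholders):

        ceph osd getcrushmap -o crushmap.bin            # grab the current map
        crushtool -d crushmap.bin -o crushmap.txt       # decompile to text
        # ... edit crushmap.txt ...
        crushtool -c crushmap.txt -o crushmap-new.bin   # recompile
        crushtool -i crushmap-new.bin --test --rule 1 --num-rep 4 --show-mappings
        ceph osd setcrushmap -i crushmap-new.bin        # inject; this is the step that can trigger data movement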

Thanks again

--
Andre Tann