Yep, just change the CRUSH rule:

  ceph osd pool set my_cephfs_metadata_pool crush_rule replicated_nvme

If you have a CRUSH rule named replicated_nvme, that'll set it on the pool named my_cephfs_metadata_pool.
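
You can check what the pool is using before and after with:

  ceph osd pool get my_cephfs_metadata_pool crush_rule
  ceph osd crush rule ls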

Of course, this will cause significant data movement.
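
You can keep an eye on the backfill while it runs:

  ceph -s
  ceph osd pool stats my_cephfs_metadata_pool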

If you need to create the rule first, and want it to use host as the failure domain:

  ceph osd crush rule create-replicated replicated_nvme default host nvme
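
That assumes your NVMe OSDs already carry the nvme device class. You can verify the class exists and sanity-check the new rule with:

  ceph osd crush class ls
  ceph osd crush rule dump replicated_nvme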

On 2022-02-18 15:26, Vladimir Brik wrote:
> Hello
>
> Is it possible to change which device class a replicated pool is using?
>
> For example, if my cephfs metadata pool was configured on creation to
> use ssd device class, can I later change it to use nvme device class?
>
>
> Thanks,
>
> Vlad