Since the new NVMes are meant to replace the existing SSDs, why don't you 
assign class "ssd" to the new NVMe OSDs? That way you don't need to change 
either the existing OSDs or the existing crush rule. And the new NVMe OSDs 
won't lose any performance; "ssd" or "nvme" is just a label.

When you deploy the new NVMes, you can put this under [osd] in their local 
ceph.conf: `osd_class_update_on_start = false`. They should then come up with 
a blank class, and you can set the class to "ssd" afterwards.
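A minimal sketch of that workflow (the OSD IDs below are placeholders for whatever IDs your new NVMe OSDs get):

```shell
# In the new hosts' local ceph.conf, BEFORE deploying the NVMe OSDs,
# stop the OSDs from (re)setting their device class on startup:
#
#   [osd]
#   osd_class_update_on_start = false

# The new OSDs then come up with no device class; assign "ssd" explicitly
# (osd.10 and osd.11 are placeholder IDs):
ceph osd crush set-device-class ssd osd.10 osd.11

# Verify the class assignment and the per-class shadow trees:
ceph osd crush tree --show-shadow
```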


________________________________
From: ceph-users <ceph-users-boun...@lists.ceph.com> on behalf of Oliver 
Freyermuth <freyerm...@physik.uni-bonn.de>
Sent: Thursday, 19 July 2018 6:13:25 AM
To: ceph-users@lists.ceph.com
Cc: Peter Wienemann
Subject: [ceph-users] Crush Rules with multiple Device Classes

Dear Cephalopodians,

we use an SSD-only pool to store the metadata of our CephFS.
In the future, we will add a few NVMes and, in the long term, replace the 
existing SSDs with NVMes, too.

Thinking this through, I came up with three questions which I have not (yet) 
found answered in the docs.

Currently, we use the following crush-rule:
--------------------------------------------
rule cephfs_metadata {
        id 1
        type replicated
        min_size 1
        max_size 10
        step take default class ssd
        step choose firstn 0 type osd
        step emit
}
--------------------------------------------
As you can see, this uses "class ssd".

Now my first question is:
1) Is there a way to specify "take default class (ssd or nvme)"?
   Then we could just do this for the migration period, and at some point 
remove "ssd".

If multiple device classes in a single crush rule are not supported yet, the 
only workaround that comes to my mind right now is to issue:
  $ ceph osd crush set-device-class nvme <old_ssd_osd>
for all our old SSD-backed OSDs, and modify the crush rule to refer to class 
"nvme" straightaway.
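One caveat worth noting (as I understand it; best verified on a test cluster first): `ceph osd crush set-device-class` refuses to overwrite an existing class, so the old class has to be removed first. Roughly, with placeholder OSD IDs:

```shell
# osd.0 .. osd.5 stand in for the existing SSD-backed OSD IDs
for id in 0 1 2 3 4 5; do
    # the old class must be removed before a new one can be set
    ceph osd crush rm-device-class "osd.$id"
    ceph osd crush set-device-class nvme "osd.$id"
done
```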

This leads to my second question:
2) Since the OSD IDs do not change, changing both the device classes of the 
OSDs and the device class in the crush rule should not cause Ceph to move any 
data around - correct?
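One way to check this offline, without touching live data, would be to compare the mappings computed from the current crush map and an edited copy using crushtool, e.g.:

```shell
# Export and decompile the current crush map
ceph osd getcrushmap -o crushmap.bin
crushtool -d crushmap.bin -o crushmap.txt

# Edit crushmap.txt (device classes / rule), then recompile it
crushtool -c crushmap.txt -o crushmap-new.bin

# Compute the mappings each map would produce and diff them;
# rule id 1 and 3 replicas match the cephfs_metadata rule above
crushtool -i crushmap.bin     --test --rule 1 --num-rep 3 --show-mappings > before.txt
crushtool -i crushmap-new.bin --test --rule 1 --num-rep 3 --show-mappings > after.txt
diff before.txt after.txt && echo "no mappings changed"
```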

After this operation, adding NVMEs to our cluster should let them automatically 
join this crush rule, and once all SSDs are replaced with NVMEs,
the workaround is automatically gone.

As long as the SSDs are still there, though, some tunables might not fit well 
out of the box anymore, e.g. the "sleep" values for scrub and repair.

Here my third question:
3) Are the tunables used for NVME devices the same as for SSD devices?
   I do not find any NVME tunables here:
   http://docs.ceph.com/docs/master/rados/configuration/osd-config-ref/
   Only SSD, HDD and Hybrid are shown.

Cheers,
        Oliver

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
