Hi,

I haven't done this myself yet, but depending on the actual setup you should be able to simply move the (virtual) disk to the new host and start the OSD there. If those are stand-alone OSDs (no separate DB/WAL) it shouldn't be too difficult [1]. If you're using ceph-volume you could run 'ceph-volume lvm trigger' in case the OSD doesn't start on its own. This would not require reconstructing any data, but it would kick off a rebalance: the crushmap changes automatically once the OSD registers under the new host, so objects are considered misplaced until they have been moved.
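Roughly, something like the following on the new host, once the LUN is visible there (a sketch only, untested; the OSD id "3" and the fsid placeholder are hypothetical, take the real values from 'ceph-volume lvm list'):

```shell
# Confirm the OSD's logical volume and metadata are visible on this host:
ceph-volume lvm list

# Activate all OSDs found on this host (mounts tmpfs, enables systemd units):
ceph-volume lvm activate --all

# Or activate a single OSD by id and fsid as reported by 'lvm list':
# ceph-volume lvm activate 3 <osd-fsid>

# If the daemon doesn't come up by itself, start it manually:
systemctl start ceph-osd@3
```

'ceph-volume lvm trigger' is what the systemd units call internally (it takes the "<id>-<fsid>" pair), so 'activate' is usually the more convenient entry point when doing this by hand.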

Regards,
Eugen


[1] http://lists.ceph.com/pipermail/ceph-users-ceph.com/2018-March/025624.html

Quoting huxia...@horebdata.cn:

Dear ceph folks,

I encountered an interesting situation as follows: an old FC SAN is connected to two ceph OSD nodes, and its LUNs are used as virtual OSDs. When one node fails, its LUN can be taken over by the other node. My question is, how do I start up the OSD on the new node without reconstructing its data? In other words, is there a simple way to move an OSD from one node to another?

many thanks,

samuel





huxia...@horebdata.cn
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io


