Hello,

We have a cluster with 26 nodes, and 15 of them got a bad batch of 2 NVMe 
drives each, carrying 6 LVs apiece for DB/WAL. We have to replace them, 
because they are failing one by one...
The defective NVMe drives are Samsung enterprise M.2 models.
When they fail we get sense errors and the NVMe disappears. If we power the 
server off and back on, it comes back... If we just do a soft reboot, the 
NVMe doesn't come back...
So we have decided to replace all of them with Intel PCIe SSDPEDME016T4S 
drives. The original drives are 1 TB, and the new ones are 1.6 TB.

What is the best method to do that?

Put the node in maintenance mode, do a pvmove of each PV, and afterwards an 
lvresize of each LV?
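
To make that concrete, here is roughly what I have in mind for one of the two 
NVMe per node. Only a sketch with placeholder names: ceph-db for the DB/WAL 
volume group, /dev/nvme0n1 for the failing drive, /dev/nvme2n1 for the new 
Intel one (with 1.6 TB over 6 LVs that is ~266 GB per LV instead of ~166 GB):

ceph osd set noout                     # maintenance mode for the node

# move the LVM extents online from the old NVMe to the new one
pvcreate /dev/nvme2n1
vgextend ceph-db /dev/nvme2n1
pvmove /dev/nvme0n1 /dev/nvme2n1
vgreduce ceph-db /dev/nvme0n1

# then per OSD: grow the LV and let BlueFS pick up the new size
systemctl stop ceph-osd@${OSD}
lvextend -L +90G /dev/ceph-db/db-osd${OSD}    # ~0.6 TB of new space over 6 LVs
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} bluefs-bdev-expand
systemctl start ceph-osd@${OSD}

ceph osd unset noout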

Or is there an easier way to do it? For example, what I found in the mailing 
list archives:
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
    bluefs-bdev-new-db --dev-target /dev/bluefs_db/db-osd${OSD}
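
(For that command the target LV has to exist first; something like this, with 
bluefs_db as a placeholder VG name on the new Intel drive:

pvcreate /dev/nvme2n1
vgcreate bluefs_db /dev/nvme2n1
lvcreate -L 240G -n db-osd${OSD} bluefs_db    # one LV per OSD, 6 x 240 GiB fits in 1.6 TB

Though if I read the docs right, bluefs-bdev-new-db only applies to an OSD 
that has no separate DB device yet, and fails if one already exists.)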

and

ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
    --devs-source /var/lib/ceph/osd/ceph-${OSD}/block \
    --dev-target /var/lib/ceph/osd/ceph-${OSD}/block.db \
    bluefs-bdev-migrate
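
For our case (an existing external DB that must move to a new drive), I think 
it combines into a single migrate per OSD, roughly like this (again only a 
sketch, with bluefs_db being the placeholder VG created above on the Intel 
drive):

OSD=0
systemctl stop ceph-osd@${OSD}

# move the existing BlueFS DB off the failing NVMe onto the new LV
ceph-bluestore-tool --path /var/lib/ceph/osd/ceph-${OSD} \
    --devs-source /var/lib/ceph/osd/ceph-${OSD}/block.db \
    --dev-target /dev/bluefs_db/db-osd${OSD} \
    bluefs-bdev-migrate

systemctl start ceph-osd@${OSD}

On Quincy there is also ceph-volume lvm migrate, which wraps 
bluefs-bdev-migrate and updates the LVM tags as well, so that may be the 
safer entry point.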

For your understanding, we are on the latest Quincy, and each node's hardware 
is:
12 x 18 TB SAS
2 x NVMe for DB/WAL

Thanks in advance for your insights.