One last question/clarification: when it becomes necessary to replace that SSD,
I assume the process of moving the DB/WAL to the new SSD is the same, except that
in the 'ceph-volume migrate' command I would now specify '--from db', correct?
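
For my own notes, this is roughly what I am picturing for that step (an untested
sketch; osd.10 is just the example from this thread, the OSD FSID and the new
SSD's VG/LV names are placeholders I would fill in, and it assumes the old SSD
is still readable):

    # stop the OSD before migrating, as before
    ceph orch daemon stop osd.10

    # enter the OSD's container shell (cephadm)
    cephadm shell --name osd.10

    # inside the shell: move the DB off the old SSD onto the new SSD's LV;
    # the only change from the original HDD->SSD migration is --from db
    ceph-volume lvm migrate --osd-id 10 --osd-fsid <osd-fsid> \
        --from db --target <new_vg>/<new_db_lv>

    # exit the shell, start the OSD back up, and re-check the metadata
    ceph orch daemon start osd.10
    ceph osd metadata 10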

Sent from my mobile device.  Please excuse brevity and typos.

On Feb 4, 2025 00:18, Eugen Block <ebl...@nde.ag> wrote:

I'm glad it worked. I thought those steps had already been added to
the docs (cephadm specific), but I couldn't find them either. I'll
ping Zac about it.

Quoting Alan Murrell <a...@t-net.ca>:

> OK, I think I am good now.  I have completed the rest of the steps
> (exited the shell, started the osd.10 daemon back up, waited for it to
> be marked as "Up") and then I ran:
>
> ceph osd metadata 10
>
> and all the bluestore_db items are pointing to the SSD now.
>
> OK, so now to note these steps/commands into "Ceph Notes" and then
> repeat the process for the other HDDs on this node and my other four 😊
>
> Thank you *so* much for your help (and patience!)


_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
