Hi Eugen,

It is still failing with exactly the same error, which seems to indicate it 
cannot find a Logical Volume for osd.10.

> If you want to migrate the db to a new device, you need to specify an 
> existing VG and LV, in this case it is not created for you.

I had actually already done that with the vgcreate and lvcreate commands.  Here 
are the exact commands I ran:

vgcreate cephdb03 /dev/nvme5n1
lvcreate -L 232G -n ceph-osd-db1 cephdb03
lvcreate -L 232G -n ceph-osd-db2 cephdb03
lvcreate -L 232G -n ceph-osd-db3 cephdb03
lvcreate -L 232G -n ceph-osd-db4 cephdb03

So I did already have the LVM volumes created.
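
To double-check they exist, something like this should list them (just a quick 
sketch, using the VG/LV names from above):

# the VG and its four LVs should all show up here
vgs cephdb03
lvs cephdb03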

> And I'm not really sure why you run 'ceph-volume lvm activate --all 
> --no-systemd', that's not necessary

That is what was in the article I linked in my first post: 
https://docs.clyso.com/blog/ceph-volume-create-wal-db-on-separate-device-for-existing-osd/

I followed the rest of your steps exactly:

From the general cephadm shell (which I was logged in to from another node - 
'cephadm shell --', though I am not sure if I could have done this from the 
OSD 10 shell also?):

ceph orch daemon stop osd.10
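
(As an aside, the daemon state should also be visible from the CLI - something 
like this should work, if 'ceph osd tree down' behaves as I expect:

# lists only OSDs currently marked down, so osd.10 should appear here
ceph osd tree down

though I used the web admin, as below.)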

I confirmed OSD 10 was marked "Down" in the web admin.  Then, from the host that 
the OSD and the NVMe WAL/DB drive are on:

root@cephnode03:~# cephadm shell --name osd.10
root@cephnode03:/# ceph-volume lvm new-db --osd-id 10 --osd-fsid 
474264fe-b00e-11ee-b586-ac1f6b0ff21a --target cephdb03/ceph-osd-db1

and then that is where I get the error.

My OSDs are definitely using LVM and I can see them if I run 'lvdisplay' or 
'lvs', so I am not sure why ceph-volume says it can't find an LV for OSD 10.
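
In case it helps narrow this down: as far as I understand, ceph-volume locates 
an OSD's volumes by their LVM tags, so (a rough sketch, run from inside the 
osd.10 container) something like this should show whether it can see OSD 10 at 
all:

# osd.10 should be listed here, along with its 'osd fsid' tag -- that per-OSD
# fsid (ceph.osd_fsid), not the cluster fsid, is what --osd-fsid expects
ceph-volume lvm list

# or check the tags directly on the block LV
lvs -o lv_name,vg_name,lv_tags | grep 'ceph.osd_id=10'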

-----Original Message-----
From: Eugen Block <ebl...@nde.ag> 
Sent: February 3, 2025 11:30
To: ceph-users@ceph.io
Subject: [ceph-users] Re: cephadm: Move DB/WAL from HDD to SSD

I'm not sure why it fails, but it seems like you deviate a bit from the 
instructions. If you want to migrate the db to a new device, you need to 
specify an existing VG and LV, in this case it is not created for you.
And I'm not really sure why you run 'ceph-volume lvm activate --all 
--no-systemd', that's not necessary. So I'll try to provide a complete list of 
steps, hopefully that works for you as it does for me:

1. soc9-ceph:~ # vgcreate ceph-db /dev/vdf
2. soc9-ceph:~ # lvcreate -L 5G -n ceph-osd0-db ceph-db (mind the LV size, just a test cluster here)
3. soc9-ceph:~ # ceph orch daemon stop osd.0
4. soc9-ceph:~ # cephadm shell --name osd.0
5. [ceph: root@soc9-ceph /]# ceph-volume lvm new-db --osd-id 0 --osd-fsid fb69ba54-4d56-4c90-a855-6b350d186df5 --target ceph-db/ceph-osd0-db
6. [ceph: root@soc9-ceph /]# ceph-volume lvm migrate --osd-id 0 --osd-fsid fb69ba54-4d56-4c90-a855-6b350d186df5 --from /var/lib/ceph/osd/ceph-0/block --target ceph-db/ceph-osd0-db
7. Exit the shell
8. soc9-ceph:~ # ceph orch daemon start osd.0
9. Verify the db config:
   soc9-ceph:~ # ceph tell osd.0 perf dump bluefs | jq -r '.[].db_total_bytes,.[].db_used_bytes'
   5368700928
   47185920

So as you see, the OSD has picked up the new db device and uses 47 MB (it's an 
empty test cluster). Also note that this is a single-node cluster, so the 
orchestrator commands and shell commands are all executed on the same host.
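
If you want an additional sanity check, the OSD metadata should also reflect the 
dedicated db device after the migration, something like:

# look for the bluefs_dedicated_db / bluefs_db_* entries
ceph osd metadata 0 | grep bluefs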

Let me know how it goes.


Quoting Alan Murrell <a...@t-net.ca>:

> Ok, just gave it a try and I am still running into an error.  Here is 
> exactly what I did:
>
> I logged on to my host where osd.10 is
>
> Deleted my current VG and LV's on my NVME that will hold the WAL/DB, 
> as I kind of liked what you used.  My VG is called "cephdb03" and my 
> LVs are called "ceph-osd-dbX", where "X" is 1 through 4.
>
> Ran the command to stop osd.10 service:
>
> systemctl stop ceph-474264fe-b00e-11ee-b586-ac1f6b0ff21a@osd.10
>
> connected to the general cephadm shell and ran:
>
> ceph-volume lvm activate --all --no-systemd
>
> Exited the general shell and entered the container for OSD 10:
>
> cephadm shell --name osd.10
>
> Ran the ceph-volume command to create the new DB on cephdb03/ceph-osd-db1 :
>
> ceph-volume lvm new-db --osd-id 10 --osd-fsid 
> 474264fe-b00e-11ee-b586-ac1f6b0ff21a --target cephdb03/ceph-osd-db1
>
> Got the following error:
>
> --> Unable to find any LV for source OSD: id:10
> fsid:474264fe-b00e-11ee-b586-ac1f6b0ff21a
> Unexpected error, terminating
>

