> ceph-volume lvm new-db --osd-id 10 --osd-fsid fb0e0a45-75a0-4400-9b1f-7568f185544c --target cephdb03/ceph-osd-db1

OMG, that worked.  I hit a snag on the "migrate" command, but I think I know 
what I need to do and wanted to confirm.
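
As a quick sanity check that the new-db step took, I believe

ceph-volume lvm list

should now show a [db] entry under osd.10 pointing at cephdb03/ceph-osd-db1 
(I'm going from the stock ceph-volume listing output here).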

When I ran:

ceph-volume lvm migrate --osd-id 10 --osd-fsid fb0e0a45-75a0-4400-9b1f-7568f185544c --from /var/lib/ceph/osd/ceph-10/block --target cephdb03/ceph-osd-db1

I got the following error:

ceph-volume lvm migrate: error: argument --from: invalid choice: '/var/lib/ceph/osd/ceph-10/block' (choose from 'data', 'db', 'wal')

so I *think* I need to change the command to:

ceph-volume lvm migrate --osd-id 10 --osd-fsid fb0e0a45-75a0-4400-9b1f-7568f185544c --from db --target cephdb03/ceph-osd-db1

and I think that takes care of both the 'db' and the 'wal'?  I recall reading 
in another help doc that specifying just 'db' without also specifying 'wal' 
puts (or moves) both the 'db' and the 'wal' onto the same device?
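
If 'db' alone doesn't cover it, my fallback guess would be to name both 
sources explicitly, since the usage line seems to accept multiple choices 
for --from:

ceph-volume lvm migrate --osd-id 10 --osd-fsid fb0e0a45-75a0-4400-9b1f-7568f185544c --from db wal --target cephdb03/ceph-osd-db1

But I'd rather confirm before running anything.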