Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
# ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
--> Destroying physical volume osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz because --de…
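For reference, a quick way to confirm that the zap actually removed the LVM and dm-crypt layers is to check that nothing still references the zapped volume group. This is a sketch, not part of the thread; the VG name matches the one above, but /dev/sdd is only inferred from the VG name and is a placeholder:

    # no logical or physical volumes should still reference the zapped VG
    lvs | grep osvg-sdd-db
    pvs
    # no stale dm-crypt mappings should be left behind
    dmsetup ls
    lsblk /dev/sdd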

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev wrote:
> Thank you, Alfredo. I did not have any reason to keep the volumes around.
> I tried using ceph-volume to zap these stores, but none of the commands
> worked, including yours: 'ceph-volume lvm zap
> osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-…
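As a general sketch (not verbatim from the thread), listing what ceph-volume has prepared before zapping helps pick the right target; the VG/LV and device names below are placeholders:

    # show the logical volumes and devices ceph-volume knows about, per OSD
    ceph-volume lvm list
    # zap a specific logical volume; --destroy also removes the LV/VG/PV
    ceph-volume lvm zap --destroy <vg_name>/<lv_name>
    # or zap the raw device that backed the failed OSD
    ceph-volume lvm zap --destroy /dev/sdd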

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
Thank you, Alfredo. I did not have any reason to keep the volumes around. I tried using ceph-volume to zap these stores, but none of the commands worked, including yours: 'ceph-volume lvm zap osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'. I ended up manually removing the LUKS volumes and then deleting…
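A rough outline of that manual cleanup path, with placeholder names for the dm-crypt mapping, LV, and backing device (adjust to the actual layout on the host), might look like this:

    # close the dm-crypt (LUKS) mapping sitting on top of the logical volume
    cryptsetup close <luks-mapping-name>
    # remove the logical volume, its volume group, and the physical volume
    lvremove -y osvg-sdd-db/<lv_name>
    vgremove osvg-sdd-db
    pvremove <device-backing-the-pv>
    # clear any remaining signatures so the disk can be redeployed cleanly
    wipefs -a /dev/sdd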

Re: [ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Alfredo Deza
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote:
> Hello,
> I have a server with 18 disks and 17 OSD daemons configured. One of the OSD
> daemons failed to deploy with ceph-deploy. The reason for the failure is
> unimportant at this point; I believe it was a race condition, as I was running…
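A minimal sketch of that recovery, assuming the failed OSD never registered with the cluster and using placeholder device and host names (ceph-deploy 2.x syntax):

    # wipe the LVM metadata and partition signatures so the disk can be reused
    ceph-volume lvm zap --destroy /dev/sdd
    # then retry the deployment for just that disk
    ceph-deploy osd create --data /dev/sdd <hostname>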

[ceph-users] How to properly clean up bluestore disks

2019-04-18 Thread Sergei Genchev
Hello, I have a server with 18 disks and 17 OSD daemons configured. One of the OSD daemons failed to deploy with ceph-deploy. The reason for the failure is unimportant at this point; I believe it was a race condition, as I was running ceph-deploy inside a while loop for all disks in this server. Now I…
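For context, a serialized version of that per-disk loop (one ceph-deploy invocation at a time, which avoids the concurrent-deploy race described above) could look like the sketch below; the host name and device list are hypothetical:

    for dev in sdb sdc sdd sde; do
        ceph-deploy osd create --data /dev/$dev ceph-host-01 || echo "deploy failed on $dev"
    done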