# ceph-volume lvm zap --destroy osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
Running command: /usr/sbin/cryptsetup status /dev/mapper/
--> Zapping: osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz
--> Destroying physical volume osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz because --de
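If the whole disk is going to be redeployed from scratch, zap can also be pointed at the raw device instead of the vg/lv; the device name below is only an example, substitute the disk that actually backs the volume group:

# ceph-volume lvm zap --destroy /dev/sdd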
On Thu, Apr 18, 2019 at 3:01 PM Sergei Genchev wrote:
Thank you Alfredo.
I did not have any reason to keep the volumes around.
I tried using ceph-volume to zap these stores, but none of the commands
worked, including yours: 'ceph-volume lvm zap
osvg-sdd-db/2ukzAx-g9pZ-IyxU-Sp9h-fHv2-INNY-1vTpvz'
I ended up manually removing LUKS volumes and then deleting
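For anyone hitting the same thing, the manual cleanup was roughly along these lines; the mapper and vg/lv names are placeholders, use the ones from your own setup:

cryptsetup close <mapper-name>        (close the dm-crypt mapping first)
lvremove -y <vg>/<lv>                 (remove the logical volume)
vgremove <vg>                         (remove the volume group if nothing else uses it)
pvremove /dev/<pv>                    (clear the LVM label from the backing device)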
On Thu, Apr 18, 2019 at 10:55 AM Sergei Genchev wrote:
Hello,
I have a server with 18 disks, and 17 OSD daemons configured. One of the
OSD daemons failed to deploy with ceph-deploy. The reason for failing is
unimportant at this point; I believe it was a race condition, as I was
running ceph-deploy inside a while loop for all disks in this server.
Now I
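For context, the per-disk deployment was a simple shell loop along these lines (device names and options are illustrative, not the exact command I ran):

for dev in sdb sdc sdd sde; do
    ceph-deploy osd create --data /dev/$dev $(hostname -s)
done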