./dm-22
lrwxrwxrwx 1 root root 11 Mar 16 11:37 dm-uuid-part1-LVM-n1SH1FvtfjgxJOMWN9aHurFvn2BpIsLZi89GWxA68hLmUQV6l5oyiEOPsFciRbKg -> ../../dm-22
~ # ls -al /dev/disk/by-parttypeuuid | grep dm-22
lrwxrwxrwx 1 root root 11 Mar 16 11:37 45b0969e-9b03-4f30-b4c6-b4b80ceff106.120c536d-cb30-4cea-b607-d
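If I read the by-parttypeuuid scheme correctly, each link name is the
partition type GUID, then a dot, then the partition's own GUID; the type
half here, 45b0969e-9b03-4f30-b4c6-b4b80ceff106, is the GUID ceph-disk
stamps on journal partitions. As a sketch of how to double-check what udev
sees on the mapper node (assuming blkid's low-level probe can resolve the
partition entry on a dm device), the two keys should match the link name
above, with the second one truncated there:

~ # blkid -p -o udev /dev/dm-22 | grep ID_PART_ENTRY
ID_PART_ENTRY_TYPE=45b0969e-9b03-4f30-b4c6-b4b80ceff106
ID_PART_ENTRY_UUID=120c536d-cb30-4cea-b607-d...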
/dev/dm-*s are already owned by ceph:ceph?
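For reference, the usual way to end up with that ownership is a one-line
udev rule roughly like the one below; the file name and mode are my own
guesses rather than anything taken from my hosts:

~ # cat /etc/udev/rules.d/99-ceph-dm-owner.rules
ACTION=="add|change", KERNEL=="dm-*", OWNER="ceph", GROUP="ceph", MODE="0660"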
Thank you very much for reading.
Best Regards,
Nicholas.
On Wed, Mar 15, 2017 at 1:06 AM Gunwoo Gim wrote:
> Thank you very much, Peter.
>
> I'm sorry for not clarifying the version number; it's Kraken,
> 11.2.0-1xenial.
>
f30-b4c6-b4b80ceff106
~ # udevadm info /dev/mapper/vg--ssd1-lv--ssd1p1 | grep DEVTYPE
E: DEVTYPE=disk
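For contrast, a plain kernel partition reports DEVTYPE=partition, which is
what I believe the activation code wants to see; /dev/sda1 below is just a
stand-in for any ordinary partition:

~ # udevadm info /dev/sda1 | grep DEVTYPE
E: DEVTYPE=partition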
Best Regards,
Nicholas.
On Tue, Mar 14, 2017 at 6:37 PM Peter Maloney <peter.malo...@brockmann-consult.de> wrote:
> Is this Jewel? Do you have some udev rules or anything that chan
I'd love some help with this; it'd be much appreciated.
Best Wishes,
Nicholas.
On Tue, Mar 14, 2017 at 4:51 PM Gunwoo Gim wrote:
> Hello, I'm trying to deploy a Ceph filestore cluster with LVM using the
> ceph-ansible playbook. I've been fixing a couple of code blocks
Hello, I'm trying to deploy a Ceph filestore cluster with LVM using the
ceph-ansible playbook. I've been fixing a couple of code blocks in
ceph-ansible and ceph-disk/main.py and have made some progress, but now
I'm stuck again: the 'ceph-disk activate' step for the OSD fails.
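For reproduction, the failing step is essentially this invocation; the LV
path is the one from my setup, and -v just turns on verbose output:

~ # ceph-disk -v activate /dev/mapper/vg--ssd1-lv--ssd1p1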
Please let me show you the error message: