Fixed. It seems that even though block.db/block.wal had the correct perms, the
disk entry under /dev was missing ceph:ceph ownership after the reboot for
some reason.
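In case it helps anyone scripting around this: one way to keep that ownership
across reboots is a udev rule. This is only a sketch; the rule file name and
the partition names (sda5 / sdc5, from the symlink in my first email) are
assumptions to adjust for your own layout.

    # /etc/udev/rules.d/99-ceph-osd.rules  (hypothetical file name)
    # Hand the BlueStore DB/WAL partitions to ceph:ceph at boot.
    KERNEL=="sda5", OWNER="ceph", GROUP="ceph", MODE="0660"
    KERNEL=="sdc5", OWNER="ceph", GROUP="ceph", MODE="0660"

After saving the rule, udevadm control --reload-rules && udevadm trigger
should apply it without another reboot.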
Sorry for the extra emails in your mailbox, but hopefully this will help
someone else one day.
On Mon, Feb 25, 2019 at 11:09 PM Ashley Me
Sorry, here it is again with log level 20 turned on for bluestore / bluefs:
-31> 2019-02-25 15:07:27.842 7f2bfbd71240 10
bluestore(/var/lib/ceph/osd/ceph-8) _open_db initializing bluefs
-30> 2019-02-25 15:07:27.842 7f2bfbd71240 10 bluefs add_block_device
bdev 1 path /var/lib/ceph/osd/ceph-8/block.db
-29> 20
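For anyone wanting the same verbosity, one way (a sketch, not necessarily
exactly how I did it) is to raise the levels in ceph.conf and restart the OSD:

    [osd]
    debug bluestore = 20
    debug bluefs = 20

The same options can be passed on the command line when running the daemon in
the foreground, e.g. ceph-osd -f --id 8 --debug_bluestore 20 --debug_bluefs 20.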
So I was able to change the perms using: chown -h ceph:ceph
/var/lib/ceph/osd/ceph-6/block.db
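(The -h is the important part here: it makes chown change the ownership of
the symlink itself rather than following it to the target.)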
However, now I get the following when starting the OSD, which then causes it
to crash:
bluefs add_block_device bdev 2 path /var/lib/ceph/osd/ceph-8/block size
8.9 TiB
-1> 2019-02-25 15:03:51.990 7f26
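One thing worth checking at this point is whether the device nodes the
symlinks resolve to are themselves owned by ceph:ceph, not just the symlinks.
Something like the following (device names taken from my earlier ls output;
adjust to whichever partition actually holds the DB):

    ls -l /dev/sda5 /dev/sdc5     # should show ceph ceph, not root root
    chown ceph:ceph /dev/sda5     # hypothetical target; verify first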
After a reboot of a node, I have one particular OSD that won't boot (latest
Mimic).
When I "/var/lib/ceph/osd/ceph-8 # ls -lsh"
I get " 0 lrwxrwxrwx 1 root root 19 Feb 25 02:09 block.db -> '/dev/sda5
/dev/sdc5'"
For some reason it is trying to link block.db to two disks. If I remove
the block.
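If it comes to removing and recreating the link by hand, a minimal sketch,
assuming /dev/sda5 is the partition that actually holds the DB (verify that
before touching anything):

    cd /var/lib/ceph/osd/ceph-8
    rm block.db
    ln -s /dev/sda5 block.db
    chown -h ceph:ceph block.db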