We're running journals on NVMe as well (on SLES).
Before rebooting, try deleting the links here:
/etc/systemd/system/ceph-osd.target.wants/
If we delete them first, the node boots OK.
If we don't, the disks sometimes don't come up and we have to run
ceph-disk activate-all
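Roughly what that looks like on one of our nodes - the ceph-osd@*.service link names below are just what we see on our hosts, so check yours first:

    # see which per-OSD units are currently wanted by ceph-osd.target
    ls -l /etc/systemd/system/ceph-osd.target.wants/
    # drop the links before the reboot (this is the part that makes the
    # next boot behave for us); a unit can always be re-enabled later with
    # "systemctl enable ceph-osd@<id>", which recreates its link here
    rm /etc/systemd/system/ceph-osd.target.wants/ceph-osd@*.service
    reboot
    # fallback if some OSDs still don't mount/start after the boot
    ceph-disk activate-all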
HTH
Thanks Joe
>>> David Turner wrote:
I have this issue with my NVMe OSDs, but not my HDD OSDs. I have 15 HDDs
and 2 NVMes in each host. We put most of the journals on one of the
NVMes and a few on the second, but added a small OSD partition to the
second NVMe for the RGW metadata pools.
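For context, a layout like that comes from ceph-disk calls along these lines - /dev/sdb, /dev/nvme0n1 and /dev/nvme1n1p3 are made-up device names, not the actual ones:

    # HDD OSD with its journal carved out of the first NVMe
    ceph-disk prepare /dev/sdb /dev/nvme0n1
    # small OSD sitting entirely on a partition of the second NVMe
    ceph-disk prepare /dev/nvme1n1p3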
When restarting a server manually for testing,
Hi,
On 14/09/17 16:26, Götz Reinicke wrote:
> maybe someone has a hint: I do have a Ceph cluster (6 nodes, 144
> OSDs), CentOS 7.3, ceph 10.2.7.
>
> I did a kernel update to the recent CentOS 7.3 one on a node and did a
> reboot.
>
> After that, 10 OSDs did not come up as the others did.
Hi,
maybe someone has a hint: I do have a Ceph cluster (6 nodes, 144 OSDs),
CentOS 7.3, ceph 10.2.7.
I did a kernel update to the recent CentOS 7.3 one on a node and did a reboot.
After that, 10 OSDs did not come up as the others did. The disks did not get mounted
and the OSD processes did nothing
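A rough sketch of the usual first checks on the affected node (ceph-osd@10 below is only a placeholder id, substitute one of the OSDs that stayed down):

    # what ceph-disk knows about the data and journal partitions
    ceph-disk list
    # whether the per-OSD units were pulled in at boot, and why they failed
    systemctl list-dependencies ceph-osd.target
    systemctl status ceph-osd@10
    journalctl -u ceph-osd@10
    # mount and start everything that is prepared but not yet active
    ceph-disk activate-all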