Hi,

I have a similar issue. After migrating to cephadm, the OSD services
have to be started manually after every cluster reboot.
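
In case it helps others, here is a minimal sketch for starting the OSD
units by hand and enabling them so they come back up after a reboot. It
assumes the usual cephadm unit naming (ceph-$FSID@osd.$ID and the
per-cluster ceph-$FSID.target); verify the actual names on your hosts
with `systemctl list-units 'ceph-*'`.

    # cephadm embeds the cluster FSID in the unit names:
    FSID=$(ceph fsid)

    # Start a single OSD now (replace $ID with the OSD id):
    systemctl start ceph-$FSID@osd.$ID

    # Enable the unit and the per-cluster target so they start
    # again on boot (names assumed, check them locally):
    systemctl enable ceph-$FSID@osd.$ID
    systemctl enable ceph-$FSID.target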

Marco

> Am 16.04.2020 um 15:11 schrieb b...@nocloud.ch:
> 
> As I progressed with the migration, I found out that my problem is a
> rather rare case.
> 
> On the 3 nodes where I had the problem, I had at some point moved
> /var/lib/ceph to another partition and symlinked it back. The kernel,
> however, mounts the tmpfs at the real path (wherever the target of
> /var/lib/ceph is actually mounted). I think that is why the cephadm
> script couldn't unmount correctly.
> 
> On the 2 other nodes, where I didn't hack around like that, I had no
> issues.
> 
> But for people having similar problems: after the migration, the new
> systemd services for the OSDs need to be started manually:
> 
>    systemctl start ceph-$CLUSTERID@osd.$ID
> 
> Yours,
> bbk
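
For the symlink case described above, a quick way to see where
/var/lib/ceph really resolves and which filesystem is mounted there (a
sketch using standard tools, readlink from coreutils and findmnt from
util-linux):

    # Resolve the symlink to the real path:
    readlink -f /var/lib/ceph

    # Show the mount that contains that path (e.g. a tmpfs):
    findmnt --target "$(readlink -f /var/lib/ceph)"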
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
