Hi,
sorry, I didn't write that very clearly. What I meant was: in the workflow of
* systemctl stop ceph-osd@$ID
* umount /var/lib/ceph/osd/ceph-$ID
* cephadm adopt --style legacy --name osd.$ID
you also need to run `systemctl start ceph-$CLUSTERID@osd.$ID`.
After a reboot, my OSDs are fine and up.
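Taken together, the steps in this thread might be scripted per OSD roughly as
below. This is a sketch, not a tested tool; `ID` is the OSD id and `CLUSTERID`
is the cluster fsid as reported by `ceph fsid` (both assumptions about your
environment, and the commands must run as root on the OSD host):

```shell
#!/bin/sh
# Sketch of the per-OSD adoption sequence discussed above (untested).
ID=0                      # OSD id to adopt
CLUSTERID=$(ceph fsid)    # cluster fsid used in the cephadm unit name

systemctl stop "ceph-osd@${ID}"          # stop the legacy OSD service
umount "/var/lib/ceph/osd/ceph-${ID}"    # unmount the tmpfs so adoption can remove the dir
cephadm adopt --style legacy --name "osd.${ID}"
systemctl start "ceph-${CLUSTERID}@osd.${ID}"   # start the adopted, containerized OSD
```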
Hi,
I have a similar issue. After migration to cephadm, the OSD services have to
be started manually after every cluster reboot.
Marco
> On 16.04.2020 at 15:11, b...@nocloud.ch wrote:
>
> As I progressed with the migration, I found out that my problem is more of a
> rare case.
As I progressed with the migration, I found out that my problem is more of a
rare case.
On my 3 nodes where I had the problem, I had once moved /var/lib/ceph to
another partition and symlinked it back. The kernel, however, mounts the
tmpfs at the real path (/whatever/lib/ceph), not at the symlink.
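The symlink behavior can be seen on scratch paths (the paths below are purely
illustrative, not the real Ceph layout): the kernel resolves symlinks, so a
mount on a symlinked directory is recorded under the resolved target, which is
why the tmpfs shows up at the real path.

```shell
# Demonstration on scratch paths: symlinks resolve to the real path.
mkdir -p /tmp/real/lib/ceph
ln -sfn /tmp/real/lib/ceph /tmp/varlibceph   # stand-in for a symlinked /var/lib/ceph
readlink -f /tmp/varlibceph                  # prints /tmp/real/lib/ceph
```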
This is a comment for documentation purposes.
Note to slightly-future Zac: Add to
https://docs.ceph.com/docs/octopus/cephadm/adoption/ a step directing the
reader to stop the OSD and unmount the tmpfs as described in this email
thread.
CEPH DOCUMENTATION INITIATIVE
Hi again,
it is not the first time that, just after posting my question, I found a
solution :-)
What I needed to do was stop the OSD first:
systemctl stop ceph-osd@0
Then unmount the tmpfs:
umount /var/lib/ceph/osd/ceph-0
So now the script is able to remove the folder and adopt the OSD.
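For a host with several legacy OSDs, the same stop/unmount sequence could be
looped before adoption. This is a sketch under the assumption that the legacy
data directories live under /var/lib/ceph/osd/ceph-$ID (untested, run as root):

```shell
#!/bin/sh
# Sketch: stop and unmount every legacy OSD on this host, then adopt each one.
for dir in /var/lib/ceph/osd/ceph-*; do
    [ -d "$dir" ] || continue        # skip if the glob matched nothing
    id="${dir##*-}"                  # extract the OSD id from the directory name
    systemctl stop "ceph-osd@${id}"
    umount "$dir"
    cephadm adopt --style legacy --name "osd.${id}"
done
```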