Answering my own question, for anyone who may be interested. After some
strace runs and a closer look at the logs, I realized that the cluster
knew different fsids for my redeployed OSDs: I had not 'rm'-ed the
OSDs (ceph osd rm) before re-adding them to the cluster.
So the bottom line is that Ceph does not update the fsid of an existing OSD entry.
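For reference, a minimal sketch of the removal sequence before re-adding, assuming osd.12 as in the status output quoted below (exact syntax may vary slightly between Ceph releases):

# take the OSD out of service and remove every trace of it,
# so a fresh fsid can be registered when it is re-added
ceph osd down osd.12
ceph osd out osd.12
ceph osd crush remove osd.12
ceph auth del osd.12
ceph osd rm osd.12        # this is the step that was missing in my case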
Hello,
in the process of redeploying some OSDs in our cluster, after
destroying one of them (down, out, remove from crushmap) and trying to
redeploy it (crush add, start), we reach a state where the OSD gets
stuck in the 'booting' state:
root@staging-rd0-02:~# ceph daemon osd.12 status
{ "cluster_fsid":