Folks,
I have deployed a 15-OSD-node cluster using cephadm and encountered a duplicate
OSD on one of the nodes, and I am not sure how to clean that up.
root@datastorn1:~# ceph health
HEALTH_WARN 1 failed cephadm daemon(s); 1 pool(s) have no replicas configured
osd.3 is duplicated on two nodes; I would like to remove the duplicate.
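Is something along these lines the right way to clean it up? This is just a sketch
from my reading of the cephadm docs; osd.3 is the duplicated id from above, and
<cluster-fsid> is a placeholder I would fill in from "ceph fsid".

ceph health detail      # see which cephadm daemon is reported as failed
ceph orch ps            # confirm osd.3 shows up twice, once per host
ceph osd tree           # check which host CRUSH actually maps osd.3 to

Then, on the node that should NOT have osd.3:

cephadm rm-daemon --name osd.3 --fsid <cluster-fsid>

(or alternatively "ceph orch daemon rm osd.3 --force" from the mgr side, but I am
not sure how that would pick between the two copies of the daemon)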
Folks,
I am playing with cephadm and life was good until I started upgrading from
Octopus to Pacific. My upgrade process got stuck after upgrading the mgr, and in
the logs I can now see the following error:
root@ceph1:~# ceph log last cephadm
2022-09-01T14:40:45.739804+ mgr.ceph2.hmbdla (mgr.265806) 8 :
cephad
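Is the right next step something like the following? This is just a sketch of what
I understand the cephadm upgrade CLI to offer; ceph2.hmbdla is the active mgr
taken from the log line above.

ceph orch upgrade status       # target version/image and whether the upgrade is still running
ceph -W cephadm                # follow the cephadm log live instead of "ceph log last cephadm"
ceph orch upgrade pause        # pause the upgrade
ceph orch upgrade resume       # resume it again
ceph mgr fail ceph2.hmbdla     # fail over to a standby so a freshly upgraded mgr takes over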