Hello,
octopus 15.2.4
just as a test, I put each of my OSDs inside an LXD container. I set
up CephFS, mounted it inside an LXD container, and it works.
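For anyone curious, the mount itself was nothing special; a minimal
sketch, with the monitor address, client name, and secret file path as
placeholders for my actual values:

  # inside the LXD container: kernel-client CephFS mount
  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs \
      -o name=admin,secretfile=/etc/ceph/admin.secret
  # an unprivileged container may need the FUSE client instead:
  # ceph-fuse -n client.admin /mnt/cephfs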
___
Hello,
I started a fresh new Ceph cluster and have the exact same problem,
along with the slow op warnings.
I found this bug report that seems to be about this problem:
https://tracker.ceph.com/issues/46743
"... mgr/devicehealth: device_health_metrics pool gets created even
without any OSDs in the clus
Hello,
on my system it solved the problem, but then a different node suddenly
started showing the same error. I tried the fix on the new problem
node and it did not help.
I notice that:
https://tracker.ceph.com/issues/45726
says resolved, but only in the next version, v15.2.5.
___
Hello,
Yes, make sure Docker & NTP are set up on the new node first.
Also, make sure the cluster's public SSH key is added on the new node
and the firewall is allowing Ceph through.
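For example, this is roughly the pre-flight I would run; "newhost" is
a placeholder, and the key path is the cephadm default:

  # confirm the container runtime and time sync are active on the new node
  ssh root@newhost 'docker --version && timedatectl | grep synchronized'
  # push the SSH key that cephadm uses (default location shown)
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@newhost
  # open the Ceph ports if firewalld is in use (mons: 3300/6789, osd/mgr: 6800-7300)
  ssh root@newhost 'firewall-cmd --permanent --add-port=3300/tcp \
      --add-port=6789/tcp --add-port=6800-7300/tcp && firewall-cmd --reload'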
___
Hello,
is podman installed on the new node? Also make sure NTP time sync is
on for the new node. ceph orch checks those on the new node and then
dies, if they are not ready, with an error like the one you see.
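You can run cephadm's own checks by hand before adding the host; a
quick sketch, assuming cephadm is installed on the new node and
"newhost" is a placeholder:

  # run the same kind of host checks the orchestrator relies on
  ssh root@newhost 'cephadm check-host'
  # it reports whether podman/docker, systemd, and chrony/NTP time sync
  # are present; fix whatever it flags, then add the host again:
  ceph orch host add newhost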
___
Hello,
same here. My fix was to examine the keyring file on the misbehaving
server (/var/lib/ceph/./crash.node1/keyring) and compare it to the one
on a different server. I found the file had the key but was missing:
caps mgr = "profile crash"
caps mon = "profile crash"
I added those back in and now it's OK.
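If it helps anyone, the same fix from the cluster side;
client.crash.node1 just mirrors the keyring above, and <fsid> stands
in for your cluster fsid:

  # show the current caps for the crash client
  ceph auth get client.crash.node1
  # restore the expected caps for the crash module
  ceph auth caps client.crash.node1 mon 'profile crash' mgr 'profile crash'
  # write the refreshed keyring back where the crash container reads it
  ceph auth get client.crash.node1 -o /var/lib/ceph/<fsid>/crash.node1/keyring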
Hello,
I am new to Ceph and am attempting, on a few test servers, to set up
and learn a test Ceph system.
I started the install with the "Cephadm" option, which uses podman
containers.
I followed the steps here:
https://docs.ceph.com/docs/master/cephadm/install/
I ran the bootstrap, added remote hosts,
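For reference, the bootstrap and add-hosts sequence from those docs is
roughly this; the monitor IP and host name are placeholders from my
test servers:

  # on the first node: bootstrap a new cluster around the first monitor
  cephadm bootstrap --mon-ip 192.168.1.10
  # copy the cluster SSH key to each remote host, then add the host
  ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
  ceph orch host add host2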