Hello,
Can someone please let me know what failure domain my erasure code pool uses, osd
or host?
We thought we had tested this by turning off 2 hosts. We have also had one host offline
recently and the cluster was still serving clients - did we get lucky?
ceph osd pool get <poolname> crush_rule
crush_rule: ecpool
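
The rule name on its own doesn't say what the failure domain is; that lives in the
CRUSH rule and, for an EC pool, in the erasure-code profile. Assuming the rule is the
"ecpool" shown above and <profile> stands in for whatever profile the pool actually
uses, something like this should show it:

  ceph osd crush rule dump ecpool                      # look for "type": "host" or "type": "osd" in the chooseleaf step
  ceph osd pool get <poolname> erasure_code_profile    # find which profile the pool uses
  ceph osd erasure-code-profile get <profile>          # look for crush-failure-domain=
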
I just came across this after upgrading to 16.2.6 from Octopus on CentOS 8 Stream.
None of my OSDs would start after the host had been rebooted post-upgrade.
In the systemctl log:
Error: container_linux.go:380: starting container process caused:
process_linux.go:545: container init caused: rootfs_linu
) pacific (stable)
Any ideas on how to debug?
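
A few things that might narrow it down, assuming a cephadm/containerised deployment
(which the container error suggests); osd.0 and <fsid> below are placeholders:

  cephadm ls                                # what cephadm thinks is deployed on this host
  cephadm logs --name osd.0                 # journal output for one OSD container
  journalctl -u ceph-<fsid>@osd.0.service   # the same thing via systemd directly
  podman --version                          # container init/rootfs errors can come from an old podman/runc
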
Regards
Adam Witwicki
___
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io