Hi,
I think I have a similar problem with my Octopus cluster.
$ ceph osd df | grep ssd
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META    AVAIL   %USE   VAR   PGS  STATUS
31  ssd    0.36400  1.0       373 GiB  289 GiB  258 GiB  91 MiB  31 GiB  84 GiB  77.46  1.20  110  up

===> not rebalanced, is it normal?
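In case it is useful, a minimal way to check whether the automatic balancer is doing anything (this assumes the Octopus balancer module; adjust to your setup):

$ ceph balancer status          # shows the current mode and whether it is active
$ ceph balancer mode upmap      # upmap usually balances PGs better than crush-compat
$ ceph balancer on              # enable automatic balancing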
Hi,
I had a problem with one application (Seafile) which uses a Ceph backend
through librados.
The corresponding pools are defined with size=3 and each object copy is on a
different host.
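For reference, that replication setup can be verified per pool like this (the pool name "seafile-data" is only a placeholder):

$ ceph osd pool get seafile-data size        # number of replicas (here: 3)
$ ceph osd pool get seafile-data min_size    # replicas required to keep serving I/O
$ ceph osd pool get seafile-data crush_rule  # rule that places each copy on a different host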
The cluster health is OK: all the monitors see all the hosts.
Now, a network problem has just happened between my