[ceph-users] Re: OSD storage not balancing properly when crush map uses multiple device classes

2022-03-10 Thread David DELON
Hi, I think I have a similar problem with my Octopus cluster.

$ ceph osd df | grep ssd
ID  CLASS  WEIGHT   REWEIGHT  SIZE     RAW USE  DATA     OMAP    META    AVAIL   %USE   VAR   PGS  STATUS
31  ssd    0.36400  1.0       373 GiB  289 GiB  258 GiB  91 MiB  31 GiB  84 GiB  77.46  1.20  110  up
===> not rebalanced, is it normal?
46  s
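For reference, on Octopus this kind of per-device-class imbalance is usually handled by the built-in balancer module, so a first step is to check whether it is enabled. A minimal sketch of the commands involved (the choice of upmap mode is an assumption; it requires all clients to be Luminous or newer):

$ ceph balancer status
$ ceph balancer mode upmap
$ ceph balancer on
$ ceph osd crush tree --show-shadow    # per-device-class shadow hierarchy that CRUSH actually weighs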

[ceph-users] librados behavior when some OSDs are unreachable

2020-01-28 Thread David DELON
Hi, I had a problem with one application (Seafile) which uses a Ceph backend through librados. The corresponding pools are defined with size=3, and each object copy is on a different host. The cluster health is OK: all the monitors see all the hosts. Now, a network problem just happens between my
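By default librados operations have no client-side timeout, so a request whose primary OSD is unreachable can block until the OSD map catches up. A minimal sketch of timeouts that make such calls fail instead of hanging (the 30-second value is illustrative; assumes a release with the centralized config store, otherwise set these in the client's ceph.conf):

$ ceph config set client rados_osd_op_timeout 30
$ ceph config set client rados_mon_op_timeout 30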