You are right again.
Thank you.
---
However, is this really the right way to handle it (stopping all IO), since the
cluster has enough capacity to rebalance?
Why doesn't the rebalance algorithm prevent one OSD from becoming "too full"?
Rok
On Sun, Dec 22, 2024 at 12:00 AM Eugen Block wrote:
The full OSD is most likely the reason. You can temporarily increase
the threshold to 0.97 or so, but you need to prevent that from happening
in the first place. The cluster usually starts warning you at 85%.
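A sketch of what "temporarily increase the threshold" could look like, using the standard Ceph ratio commands (run against your own cluster; the 0.97 value is the example from above, not a recommendation, and should be reverted once backfill has freed space):

```shell
# Show the current nearfull/backfillfull/full ratios
ceph osd dump | grep -E 'full_ratio'

# Temporarily raise the full threshold so IO can resume
ceph osd set-full-ratio 0.97

# Once data has moved and utilization dropped, restore the default
ceph osd set-full-ratio 0.95
```

Raising the ratio only buys time; the underlying fix is to rebalance or add capacity, e.g. via `ceph osd reweight-by-utilization` or by adjusting CRUSH weights.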
Quoting Rok Jaklič:
Hi,
for some reason radosgw stopped working.
Cluster status:
[root@ctplmon1 ~]# ceph -v
ceph version 17.2.8 (f817ceb7f187defb1d021d6328fa833eb8e943b3) quincy
(stable)
[root@ctplmon1 ~]# ceph -s
cluster:
id: 0a6e5422-ac75-4093-af20-528ee00cc847
health: HEALTH_ERR
6 OSD(s)
Backfill proceeds in a make-before-break fashion to safeguard data, because
Ceph is first and foremost about strong consistency. Say you have a 3R
(replicated, size=3) pool and you make a change that moves data around.
For a given PG, Ceph will complete a fourth copy of the data before removing one