Hmm, the problem is that I had not modified any config; all the config
is default.
As you said, all the IO should be stopped by the configs
"mon_osd_full_ratio" or "osd_failsafe_full_ratio". In my test, when
the OSD was near full, the IO from "rest-bench" stopped, but the
backfill IO did not stop.
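(A quick way to confirm which thresholds a running OSD is actually using
is to ask it over its admin socket. This is only a sketch: osd.0 is an
example daemon name, and the exact option names vary a little between
Ceph releases.)

    # dump the full/nearfull/backfill thresholds this OSD has loaded
    ceph daemon osd.0 config show | grep full_ratio

    # the cluster-wide full/nearfull state also shows up in health output
    ceph health detail | grep -i full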
You shouldn't let the cluster get so full that losing a few OSDs will make
you go toofull. Letting the cluster get to 100% full is such a bad idea
that you should make sure it doesn't happen.
Ceph is supposed to stop moving data to an OSD once that OSD hits
osd_backfill_full_ratio, which defaults to 0.85.
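(If backfill keeps pushing data at an OSD that is already past that
ratio, the threshold can be adjusted at runtime. The values below are
only illustrative, and on newer releases the per-OSD option was replaced
by a cluster-wide setting.)

    # older releases: inject the option into all running OSDs
    ceph tell osd.* injectargs '--osd_backfill_full_ratio 0.85'

    # Luminous and later: the threshold is a cluster-wide OSD map setting
    ceph osd set-backfillfull-ratio 0.90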
Hi Craig,
Your solution did work very well. But if the data is very
important, removing the PG directories from the OSDs by hand means a
small mistake can result in loss of data. And if the cluster is very
large, don't you think deleting the data on the disks to get from 100%
back down to 95% full is a tedious and error-prone thing to do?
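(If space ever does have to be reclaimed by hand, it is safer to list
exactly which PG directories belong to the pool before touching
anything. A rough sketch only, assuming FileStore OSDs, the default data
path, pool ID 0, that the affected OSD is stopped first, and that the
pool itself has already been deleted from the cluster.)

    # stop the full OSD first (the service name differs by distro/release)
    service ceph stop osd.2

    # FileStore keeps one directory per PG, named <poolid>.<pgid>_head;
    # list the ones that belong to pool 0 and double-check before removing
    ls -d /var/lib/ceph/osd/ceph-2/current/0.*_head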
At this point, it's probably best to delete the pool. I'm assuming the
pool only contains benchmark data, and nothing important.
Assuming you can delete the pool:
First, figure out the ID of the data pool. You can get that from
ceph osd dump | grep '^pool'
Once you have the number, delete the d…
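For completeness, the lookup and the pool deletion itself look like
this; "data" is the pool name used in the original benchmark, and
deleting it destroys everything stored in it:

    # the pool ID is the number right after the word "pool"
    ceph osd dump | grep '^pool'

    # standard pool deletion (newer releases also require
    # mon_allow_pool_delete=true on the monitors)
    ceph osd pool delete data data --yes-i-really-really-mean-it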
Hello, everyone:
These days a problem with Ceph has troubled me for a long time.
I built a cluster with 3 hosts, and each host has three OSDs in it.
After that, I used the command
"rados bench 360 -p data -b 4194304 -t 300 write --no-cleanup"
to test the write performance of the cluster.
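(Since --no-cleanup leaves the benchmark objects in the pool, which is
what eventually fills the OSDs here, they can be removed afterwards with
the rados cleanup subcommand. The exact options vary a little between
releases, and benchmark_data is the default object-name prefix that
rados bench uses.)

    # remove the objects left behind by "rados bench ... --no-cleanup"
    rados -p data cleanup --prefix benchmark_data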