Hi guys,
So recently I was testing our Ceph cluster, which is mainly used for block
storage (RBD).
We have 30 SSD drives in total (5 storage nodes, 6 SSD drives per node). However,
the fio results were very poor.
We tested the workload on the SSD pool with the following parameters:
"fio --size=50G \
--ioe
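(The rest of the command was cut off above. For reference, a complete fio run
against an RBD pool with the rbd ioengine looks roughly like the sketch below;
the pool name, image name, block size, and queue depth are placeholders, not
necessarily the values we actually used:)

fio --ioengine=rbd --clientname=admin --pool=ssd-pool --rbdname=test-image \
    --direct=1 --rw=randwrite --bs=4k --iodepth=32 --numjobs=4 \
    --runtime=120 --time_based --group_reporting \
    --name=rbd-4k-randwrite --size=50G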
Hi guys,
So our cluster keeps having OSDs go down due to medium errors. Our current action
plan is to replace the defective disk drive, but I was wondering whether Ceph is
being too sensitive in taking the OSD down, or whether our action plan is too
simple and crude. Any advice on this issue would be appreciated.
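For context, a minimal sketch of the checks one might run on the affected OSD
host to confirm the medium error before pulling the drive (the device name and
OSD id below are just placeholders):

# kernel log on the OSD host for the actual medium error
dmesg -T | grep -i "medium error"

# SMART health of the suspect drive (reallocated / pending sectors)
smartctl -a /dev/sdX

# which OSDs are down and why the cluster flagged them
ceph health detail
journalctl -u ceph-osd@12 | tail -n 50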