[ceph-users] fio librbd result is poor

2016-12-18 Thread
Hi guys, so recently I was testing our Ceph cluster, which is mainly used for block storage (RBD). We have 30 SSD drives in total (5 storage nodes, 6 SSD drives per node). However, the fio results are very poor. We tested the workload on the SSD pool with the following parameters: "fio --size=50G \ --ioe
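For reference, a complete librbd fio invocation along these lines might look like the sketch below; the pool, image, and client names are placeholders, not the ones from the original test:

    fio --ioengine=rbd \
        --clientname=admin \
        --pool=ssd-pool \
        --rbdname=test-image \
        --rw=randwrite \
        --bs=4k \
        --iodepth=32 \
        --numjobs=1 \
        --size=50G \
        --runtime=300 --time_based \
        --name=rbd-bench

Note that a single fio client with one job and a small block size often cannot saturate 30 SSDs; raising iodepth/numjobs or running several clients in parallel usually gives a more representative aggregate number.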

[ceph-users] ceph osd down

2016-11-20 Thread
Hi guys, so our cluster keeps having OSDs marked down due to medium errors. Our current action plan is to replace the defective disk drive, but I was wondering whether Ceph is too sensitive in taking the OSD down, or whether our action plan is too simple and crude. Any advice on this issue will be appreciated.
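If the goal is to confirm the drive is really at fault before replacing it, checks along these lines can help; the device path and the use of noout are illustrative, not the poster's procedure:

    # cluster view of which OSDs are down and why health is degraded
    ceph osd tree | grep down
    ceph health detail

    # on the storage node, inspect the drive's SMART data for medium errors
    smartctl -a /dev/sdX

    # optionally prevent automatic rebalancing while swapping the disk
    ceph osd set noout
    # ... replace the drive and recreate the OSD ...
    ceph osd unset noout

If the underlying disk is genuinely returning medium errors, replacing it is the right call; Ceph marking the OSD down in that situation is expected behaviour rather than oversensitivity.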