Hi,

Ceph is VERY slow with 24 OSDs (Samsung SSDs).

fio against /dev/rbd0:
  iodepth=1,  direct=1: only ~200 IOPS
  iodepth=32, direct=1: only ~3000 IOPS
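For reference, a minimal sketch of the fio invocations behind these numbers and the single-drive baseline below, assuming 4k random writes with libaio; only iodepth and direct=1 are fixed above, so block size, access pattern, runtime, and the /dev/sdX device name are placeholders:

# RBD block device test, queue depth 1 (bs/rw/runtime assumed)
fio --name=rbd-test --filename=/dev/rbd0 --direct=1 --rw=randwrite \
    --bs=4k --iodepth=1 --ioengine=libaio --runtime=60 --numjobs=1 --group_reporting

# same test at queue depth 32
fio --name=rbd-test --filename=/dev/rbd0 --direct=1 --rw=randwrite \
    --bs=4k --iodepth=32 --ioengine=libaio --runtime=60 --numjobs=1 --group_reporting

# single-SSD baseline (destructive on a raw device; use an unused disk)
fio --name=ssd-test --filename=/dev/sdX --direct=1 --rw=randwrite \
    --bs=4k --iodepth=1 --ioengine=libaio --runtime=60 --numjobs=1 --group_reporting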
But testing a single SSD drive directly with fio:
  iodepth=1,  direct=1: ~15000 IOPS
  iodepth=32, direct=1: ~30000 IOPS

Why is Ceph so slow? Could you give me some help? Appreciated!

My environment:

[root@szcrh-controller ~]# ceph -s
    cluster eb26a8b9-e937-4e56-a273-7166ffaa832e
     health HEALTH_WARN
            1 mons down, quorum 0,1,2,3,4 ceph01,ceph02,ceph03,ceph04,ceph05
     monmap e1: 6 mons at {ceph01=10.10.204.144:6789/0,ceph02=10.10.204.145:6789/0,ceph03=10.10.204.146:6789/0,ceph04=10.10.204.147:6789/0,ceph05=10.10.204.148:6789/0,ceph06=0.0.0.0:0/5}
            election epoch 6, quorum 0,1,2,3,4 ceph01,ceph02,ceph03,ceph04,ceph05
     osdmap e114: 24 osds: 24 up, 24 in
            flags sortbitwise
      pgmap v2213: 1864 pgs, 3 pools, 49181 MB data, 4485 objects
            144 GB used, 42638 GB / 42782 GB avail
                1864 active+clean

[root@ceph03 ~]# lsscsi
[0:0:6:0]    disk    ATA    SAMSUNG MZ7KM1T9    003Q    /dev/sda
[0:0:7:0]    disk    ATA    SAMSUNG MZ7KM1T9    003Q    /dev/sdb
[0:0:8:0]    disk    ATA    SAMSUNG MZ7KM1T9    003Q    /dev/sdc
[0:0:9:0]    disk    ATA    SAMSUNG MZ7KM1T9    003Q    /dev/sdd