Hi,

I have set up a test Ceph cluster with 6 OSDs and 3 MONs. While testing
read/write performance on this cluster, I found that read IOPS are poor.
When I test each HDD individually, I get good performance, but when I test
against the Ceph cluster, reads are poor.

Between the nodes, iperf shows good bandwidth.
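
For example, the check was roughly of this form (illustrative invocation; the
IP is ceph-node3's address from the cluster info below):

# server on one node (e.g. ceph-node3)
iperf -s

# client from another node
iperf -c 192.168.1.203 -t 30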

My cluster info:

root@ceph-node3:~# ceph --version
ceph version 9.0.2-752-g64d37b7 (64d37b70a687eb63edf69a91196bb124651da210)
root@ceph-node3:~# ceph -s
    cluster 9654468b-5c78-44b9-9711-4a7c4455c480
     health HEALTH_OK
     monmap e9: 3 mons at {ceph-node10=192.168.1.210:6789/0,ceph-node17=192.168.1.217:6789/0,ceph-node3=192.168.1.203:6789/0}
            election epoch 442, quorum 0,1,2 ceph-node3,ceph-node10,ceph-node17
     osdmap e1850: 6 osds: 6 up, 6 in
      pgmap v17400: 256 pgs, 2 pools, 9274 MB data, 2330 objects
            9624 MB used, 5384 GB / 5394 GB avail
                 256 active+clean


I have mapped an RBD block device to a client machine (Ubuntu 14) and,
running tests with fio from there, I get good write IOPS, but reads are
comparatively poor.

Write IOPS: ~44618

Read IOPS: ~7356
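
Roughly, the read test looked like this (the device path /dev/rbd0, block
size, queue depth, job count and runtime below are illustrative, not the
exact values I used; the write test was the same with --rw=randwrite):

fio --name=randread --filename=/dev/rbd0 --direct=1 --rw=randread \
    --ioengine=libaio --bs=4k --iodepth=32 --numjobs=4 --runtime=60 \
    --group_reporting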

Pool replica: single

pool 1 'test1' replicated size 1 min_size 1

I have also enabled rbd_readahead in my ceph.conf.
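The readahead-related part of my ceph.conf looks roughly like this (the
values are illustrative, not tuned recommendations):

[client]
rbd readahead trigger requests = 10
rbd readahead max bytes = 4194304
rbd readahead disable after bytes = 52428800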
Any suggestions in this regard would be a great help.

Thanks.

Daleep Singh Bais
