On 05/04/17 13:37, Fuxion Cloud wrote:
> Hi,
>
> Our ceph version is 0.80.7. We use it with OpenStack as block storage
> (RBD). The ceph storage is configured with 3x replication of data.
> I'm getting low IOPS (~400) from fio benchmarks in random read/write.
> Please advise how to improve it. Thanks.

I'll let others comment on whether 0.80.7 is too old and you should
obviously upgrade... I don't think anyone should be using anything older
than hammer, which is the previous, nearly-EoL LTS version.

> Here's the hardware info.
> 12 x storage nodes
> - 2 x CPUs (12 cores)
> - 64 GB RAM
> - 10 x 4TB SAS 7.2k rpm OSD
> - 2 x 200GB SSD journal
> - 2 x 200GB SSD OS

5 OSDs per journal sounds like too many.
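If you want to double-check which journal device each OSD actually uses,
something like this on one of the storage nodes should show it (a rough
sketch, assuming the default FileStore layout under /var/lib/ceph and
that the disks were prepared with ceph-disk):

    # each OSD data dir has a 'journal' symlink pointing at its journal partition
    ls -l /var/lib/ceph/osd/ceph-*/journal

    # ceph-disk can also list the data/journal pairing per disk
    ceph-disk list

With 5 spinners sharing one 200GB SSD, every write is funneled through
that SSD's journal partitions first, so its write throughput and latency
put a ceiling on what those 5 OSDs can do.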
Which model are the SSDs? How large are the journals?

When you run your fio test, what is the command you run, and which kind
of client does it use: rbd-fuse, krbd, or fio --ioengine=rbd? (See the
fio sketch I put below your osd tree output.)

When you run fio, what does iostat show you? Would you say the HDDs are
the bottleneck, or the SSDs?

    iostat -xm 1 /dev/sd[a-z]

> - 2 x 10Gb (bond - ceph network)
> - 2 x 10Gb (bond - openstack network)

What kind of link do you have between racks? What is the failure domain,
rack or host? What is the replication size of the pool you are testing?

> Ceph status:
>
>      health HEALTH_OK
>      monmap e1: 3 mons at
>        {node1=10.10.10.11:6789/0,node2=10.10.10.12:6789/0,node7=10.10.10.17:6789/0},
>        election epoch 1030, quorum 0,1,2 node1,node2,node7
>      osdmap e116285: 120 osds: 120 up, 120 in
>      pgmap v70119491: 14384 pgs, 5 pools, 5384 GB data, 841 kobjects
>            16774 GB used, 397 TB / 413 TB avail
>                14384 active+clean
>      client io 11456 kB/s rd, 13389 kB/s wr, 420 op/s
>
> Ceph osd tree:
>
> # id   weight   type name        up/down  reweight
> -1     414      root default
> -14    207        rack rack1
> -3     34.5         host node1
> 1      3.45           osd.1      up       1
> 4      3.45           osd.4      up       1
> 7      3.45           osd.7      up       1
> 10     3.45           osd.10     up       1
> 13     3.45           osd.13     up       1
> 16     3.45           osd.16     up       1
> 19     3.45           osd.19     up       1
> 22     3.45           osd.22     up       1
> 25     3.45           osd.25     up       1
> 28     3.45           osd.28     up       1
> -4     34.5         host node2
> 5      3.45           osd.5      up       1
> 11     3.45           osd.11     up       1
> 14     3.45           osd.14     up       1
> 17     3.45           osd.17     up       1
> 20     3.45           osd.20     up       1
> 23     3.45           osd.23     up       1
> 26     3.45           osd.26     up       1
> 29     3.45           osd.29     up       1
> 38     3.45           osd.38     up       1
> 2      3.45           osd.2      up       1
> -5     34.5         host node3
> 31     3.45           osd.31     up       1
> 48     3.45           osd.48     up       1
> 57     3.45           osd.57     up       1
> 66     3.45           osd.66     up       1
> 75     3.45           osd.75     up       1
> 84     3.45           osd.84     up       1
> 93     3.45           osd.93     up       1
> 102    3.45           osd.102    up       1
> 111    3.45           osd.111    up       1
> 39     3.45           osd.39     up       1
> -7     34.5         host node4
> 35     3.45           osd.35     up       1
> 46     3.45           osd.46     up       1
> 55     3.45           osd.55     up       1
> 64     3.45           osd.64     up       1
> 72     3.45           osd.72     up       1
> 81     3.45           osd.81     up       1
> 90     3.45           osd.90     up       1
> 98     3.45           osd.98     up       1
> 107    3.45           osd.107    up       1
> 116    3.45           osd.116    up       1
> -10    34.5         host node5
> 43     3.45           osd.43     up       1
> 54     3.45           osd.54     up       1
> 60     3.45           osd.60     up       1
> 67     3.45           osd.67     up       1
> 78     3.45           osd.78     up       1
> 87     3.45           osd.87     up       1
> 96     3.45           osd.96     up       1
> 104    3.45           osd.104    up       1
> 113    3.45           osd.113    up       1
> 8      3.45           osd.8      up       1
> -13    34.5         host node6
> 32     3.45           osd.32     up       1
> 47     3.45           osd.47     up       1
> 56     3.45           osd.56     up       1
> 65     3.45           osd.65     up       1
> 74     3.45           osd.74     up       1
> 83     3.45           osd.83     up       1
> 92     3.45           osd.92     up       1
> 110    3.45           osd.110    up       1
> 119    3.45           osd.119    up       1
> 101    3.45           osd.101    up       1
> -15    207        rack rack2
> -2     34.5         host node7
> 0      3.45           osd.0      up       1
> 3      3.45           osd.3      up       1
> 6      3.45           osd.6      up       1
> 9      3.45           osd.9      up       1
> 12     3.45           osd.12     up       1
> 15     3.45           osd.15     up       1
> 18     3.45           osd.18     up       1
> 21     3.45           osd.21     up       1
> 24     3.45           osd.24     up       1
> 27     3.45           osd.27     up       1
> -6     34.5         host node8
> 30     3.45           osd.30     up       1
> 40     3.45           osd.40     up       1
> 49     3.45           osd.49     up       1
> 58     3.45           osd.58     up       1
> 68     3.45           osd.68     up       1
> 77     3.45           osd.77     up       1
> 86     3.45           osd.86     up       1
> 95     3.45           osd.95     up       1
> 105    3.45           osd.105    up       1
> 114    3.45           osd.114    up       1
> -8     34.5         host node9
> 33     3.45           osd.33     up       1
> 45     3.45           osd.45     up       1
> 52     3.45           osd.52     up       1
> 59     3.45           osd.59     up       1
> 73     3.45           osd.73     up       1
> 82     3.45           osd.82     up       1
> 91     3.45           osd.91     up       1
> 100    3.45           osd.100    up       1
> 108    3.45           osd.108    up       1
> 117    3.45           osd.117    up       1
> -9     34.5         host node10
> 36     3.45           osd.36     up       1
> 42     3.45           osd.42     up       1
> 51     3.45           osd.51     up       1
> 61     3.45           osd.61     up       1
> 69     3.45           osd.69     up       1
> 76     3.45           osd.76     up       1
> 85     3.45           osd.85     up       1
> 94     3.45           osd.94     up       1
> 103    3.45           osd.103    up       1
> 112    3.45           osd.112    up       1
> -11    34.5         host node11
> 50     3.45           osd.50     up       1
> 63     3.45           osd.63     up       1
> 71     3.45           osd.71     up       1
> 79     3.45           osd.79     up       1
> 89     3.45           osd.89     up       1
> 106    3.45           osd.106    up       1
> 115    3.45           osd.115    up       1
> 34     3.45           osd.34     up       1
> 120    3.45           osd.120    up       1
> 121    3.45           osd.121    up       1
> -12    34.5         host node12
> 37     3.45           osd.37     up       1
> 44     3.45           osd.44     up       1
> 53     3.45           osd.53     up       1
> 62     3.45           osd.62     up       1
> 70     3.45           osd.70     up       1
> 80     3.45           osd.80     up       1
> 88     3.45           osd.88     up       1
> 99     3.45           osd.99     up       1
> 109    3.45           osd.109    up       1
> 118    3.45           osd.118    up       1
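For comparison, this is roughly the kind of fio run I mean, a sketch only:
the rbd ioengine has to be compiled into your fio build, and 'volumes' and
'fio-test' are placeholders for one of your pools and a disposable test
image:

    fio --ioengine=rbd --clientname=admin --pool=volumes --rbdname=fio-test \
        --rw=randrw --rwmixread=70 --bs=4k --iodepth=32 --numjobs=1 \
        --direct=1 --runtime=60 --time_based --group_reporting --name=rbd-randrw

And these should answer the replication size and failure domain questions
(again, replace 'volumes' with the pool you test against):

    ceph osd pool get volumes size
    ceph osd dump | grep 'replicated size'
    ceph osd crush rule dump

If the rule's failure domain turns out to be rack and the pool size is 3,
two racks are not enough to place all the replicas, so the rule dump is
worth a close look anyway.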
> Thanks,
> James
>
> On Thu, May 4, 2017 at 5:06 PM, Christian Wuerdig
> <christian.wuer...@gmail.com> wrote:
>
>     On Thu, May 4, 2017 at 7:53 PM, Fuxion Cloud <fuxioncl...@gmail.com> wrote:
>
>         Hi all,
>
>         I'm a newbie to ceph. We had ceph deployed by a vendor 2 years ago
>         on Ubuntu 14.04 LTS, without any performance tuning. I've noticed
>         that the storage performance is very slow. Can someone please
>         advise how to improve it?
>
>     You really need to provide a bit more information than that, like what
>     hardware is involved (CPU, RAM, how many nodes, how many OSDs, what
>     kind and size of disks, networking hardware) and how you use ceph
>     (RBD, RGW, CephFS, plain RADOS object storage).
>
>     The outputs of
>
>         ceph status
>         ceph osd tree
>         ceph df
>
>     also provide useful information.
>
>     Also, what does "slow performance" mean, and how have you determined
>     that (throughput, latency)?
>
>         Any changes or configuration required for the OS kernel?
>
>         Regards,
>         James

--
--------------------------------------------
Peter Maloney
Brockmann Consult
Max-Planck-Str. 2
21502 Geesthacht
Germany
Tel: +49 4152 889 300
Fax: +49 4152 889 333
E-mail: peter.malo...@brockmann-consult.de
Internet: http://www.brockmann-consult.de
--------------------------------------------
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com