Hello guys, 

Was hoping someone could help me with some strange read performance problems on 
my osds. I have a test setup of 4 kvm host servers running about 20 test 
linux vms between them. The vms' images are stored in a ceph cluster and accessed 
via rbd. I also have 2 osd servers with a replica count of 2. The physical specs are at 
the end of the email. 


I've been running some tests to check concurrent read/write performance from all 
the vms. 


I've simply fired the following tests concurrently on each vm: 


"dd if=/dev/vda of=/dev/null bs=1M count=1000 iflag=direct" and after that 
"dd if=/dev/vda of=/dev/null bs=4M count=1000 iflag=direct" 




While the tests are running I've fired up iostat to monitor osd performance and 
load. I've noticed that during the read tests each osd is reading only about 
25-30MB/s, which seems very poor to me. The osd devices are capable of reading 
around 150-160MB/s when I run dd on them directly. I am aware that osds "lose" 
about 40-50% of raw throughput, so I was expecting to see each osd doing around 
70-80MB/s during my tests. So, what could be the problem here? 
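
For reference, the iostat invocation I was watching on each osd server was 
along these lines (sdb..sdi being placeholders for my osd data disks; extended 
stats, MB units, 2 second interval): 

    iostat -xm 2 sdb sdc sdd sde sdf sdg sdh sdi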


I've checked the networking for errors and I can't see any packet drops or 
other problems with connectivity. I've also run various networking tests over 
several days, hammering the servers with a high volume of traffic, and I've not had any 
problems/packet drops/disconnects. 
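
The bandwidth tests were roughly along these lines, iperf over the IPoIB 
interfaces (osd1 is a placeholder for one of the osd server hostnames): 

    # on the osd server
    iperf -s

    # on a kvm host, several parallel streams for 10 minutes
    iperf -c osd1 -P 4 -t 600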


My infrastructure setup is as follows: 


1. Software: Ubuntu 12.04.4 with Ceph version 0.72.2, Qemu 1.5 from the Ubuntu 
Cloud repo 
2. OSD servers: 2 x Intel E5-2620 (12 cores in total), 24GB RAM and 8 x 7k rpm 
SAS disks each 
3. Networking: QDR 40Gbit/s InfiniBand using IP over InfiniBand 

Many thanks 