I created a fio job file with the following parameters:

[random-read]
rw=randread
size=128m
directory=/root/asd
ioengine=libaio
bs=4k
#numjobs=8
iodepth=64
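The job file is run with a plain fio invocation; assuming it is saved as random-read.fio (the file name here is just a placeholder), the test inside the VM is simply:

fio random-read.fio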
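For the fio-rbd test Somnath suggests below, a minimal rbd-engine job file might look like the following sketch; the clientname, pool, and rbdname values are placeholders that have to match the actual cluster, and the image must already exist (created with e.g. rbd create testimage --size 1024):

[rbd-randread]
# talk to the cluster directly through librbd, bypassing QEMU and the guest
ioengine=rbd
clientname=admin
pool=rbd
rbdname=testimage
rw=randread
bs=4k
iodepth=64
# fio's own rbd example marks this as mandatory for the rbd engine
invalidate=0

Because this runs against RADOS/RBD directly, it would show whether the ~1k random-read IOPS limit is in the cluster itself or somewhere in the VM I/O path.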
Br,
T

-----Original Message-----
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of Mark Nelson
Sent: 30 June 2015 20:55
To: ceph-users@lists.ceph.com
Subject: Re: [ceph-users] Very low 4k randread performance ~1000iops

Hi Tuomas,

Can you paste the command you ran to do the test?

Thanks,
Mark

On 06/30/2015 12:18 PM, Tuomas Juntunen wrote:
> Hi
>
> It's probably not hitting the disks, but that doesn't really matter.
> The point is that we have very responsive VMs while writing, and that
> is what the users will see.
>
> The IOPS we get with sequential reads are good, but random reads are
> way too low.
>
> Is using SSDs as OSDs the only way to get them up, or is there some
> tunable that would improve them? I would assume Linux caches reads in
> memory and serves them from there, but at least for now we don't see
> that.
>
> Br,
>
> Tuomas
>
> From: Somnath Roy [mailto:somnath....@sandisk.com]
> Sent: 30 June 2015 19:24
> To: Tuomas Juntunen; 'ceph-users'
> Subject: RE: [ceph-users] Very low 4k randread performance ~1000iops
>
> Break it down: try fio-rbd to see what performance you are getting.
>
> But I am really surprised you are getting >100k IOPS for writes. Did
> you check that they are hitting the disks?
>
> Thanks & Regards
>
> Somnath
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf
> Of Tuomas Juntunen
> Sent: Tuesday, June 30, 2015 8:33 AM
> To: 'ceph-users'
> Subject: [ceph-users] Very low 4k randread performance ~1000iops
>
> Hi
>
> I have been trying to figure out why our 4k random reads in VMs are so
> bad. I am using fio to test this.
>
> Write: 170k IOPS
>
> Random write: 109k IOPS
>
> Read: 64k IOPS
>
> Random read: 1k IOPS
>
> Our setup is:
>
> 3 nodes with 36 OSDs and 18 SSDs (one SSD per two OSDs); each node has
> 64 GB of memory & 2x 6-core CPUs
>
> 4 monitors running on other servers
>
> 40 Gbit InfiniBand with IPoIB
>
> OpenStack: QEMU-KVM for the virtual machines
>
> Any help would be appreciated.
>
> Thank you in advance.
>
> Br,
>
> Tuomas