What were you using for iodepth and numjobs? If you’re seeing an average of 2 ms 
per operation and the workload is single threaded with one outstanding IO, I’d 
expect about 500 IOPS per thread (1 s / 2 ms = 500), until you hit the limit of 
your QEMU setup, which may be a single IO thread. I think that’s also what Mike 
is alluding to.
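
If it does turn out to be a single IO thread on the QEMU side, the usual knob is 
to give the virtio disk its own iothread in the libvirt domain XML. A minimal 
sketch, assuming a virtio-blk disk backed by rbd and a reasonably recent 
libvirt/QEMU; the pool/image and monitor names below are placeholders, not your 
actual setup:

  <domain type='kvm'>
    <iothreads>1</iothreads>
    ...
    <disk type='network' device='disk'>
      <!-- pin this disk to iothread #1 -->
      <driver name='qemu' type='raw' cache='none' iothread='1'/>
      <source protocol='rbd' name='rbd/guest-disk'>
        <host name='mon1.example.com' port='6789'/>
      </source>
      <target dev='vdb' bus='virtio'/>
    </disk>
    ...
  </domain>

That only helps once fio is actually keeping more than one IO in flight, of course.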

Warren

From: Sean Redmond <sean.redmo...@gmail.com>
Date: Wednesday, November 18, 2015 at 6:39 AM
To: "ceph-us...@ceph.com" <ceph-us...@ceph.com>
Subject: [ceph-users] All SSD Pool - Odd Performance

Hi,

I have a performance question for anyone running an SSD only pool. Let me 
detail the setup first.

12 x Dell PowerEdge R630 (2 x 2620 v3, 64GB RAM)
8 x Intel DC S3710 800GB
Dual-port Solarflare 10Gb/s NIC (one front and one back)
Ceph 0.94.5
Ubuntu 14.04 (3.13.0-68-generic)

The above is in one pool that is used for QEMU guests. A 4k fio test against the 
SSD directly yields around 55k IOPS, but the same test inside a QEMU guest seems 
to hit a limit of around 4k IOPS. If I deploy multiple guests, they can all reach 
4k IOPS simultaneously.
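
For reference, the kind of 4k fio job being compared here looks roughly like the 
following; the device path, read/write mix, iodepth and numjobs are illustrative 
placeholders rather than the exact parameters used:

  fio --name=4ktest --filename=/dev/vdb --direct=1 --rw=randread --bs=4k \
      --ioengine=libaio --iodepth=1 --numjobs=1 --runtime=60 --time_based \
      --group_reporting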

I don't see any evidence of a bottleneck on the OSD hosts. Is this limit inside 
the guest expected, or am I just not looking deep enough yet?

Thanks

