Re: [ceph-users] New cluster performance analysis

2015-12-10 Thread Adrien Gillard
Hi Kris, Indeed I am seeing some spikes in latency; they seem to be linked to spikes in throughput and overall cluster IOPS. I also see some spikes on the OSDs (I guess this is when the journal is flushed) but IO on the journals is quite steady. I have already tuned the osd filestore a bit and
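
For reference, the sort of filestore tuning being alluded to usually lives in the [osd] section of ceph.conf; the values below are only illustrative, not the ones from this cluster:

    [osd]
    # how often the filestore syncs journaled writes out to the backing disks
    filestore min sync interval = 0.1
    filestore max sync interval = 10
    # throttles between the journal and the filestore
    filestore queue max ops = 500
    filestore queue max bytes = 104857600

    # check what a given OSD is actually running with (admin socket)
    ceph daemon osd.0 config show | grep -E 'filestore_(min|max)_sync'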

Re: [ceph-users] New cluster performance analysis

2015-12-09 Thread Kris Gillespie
One thing I noticed with all my testing: as the speed difference between the SSDs and the spinning rust can be quite high, and as your journal needs to flush every X bytes (configurable), the impact of this flush can be severe, as IO to the journal will stop until it’s finished (I believe). Someth
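
The "flush every X bytes" behaviour roughly corresponds to the journal write/queue throttles below; the numbers shown are close to the FileStore-era defaults and are meant only as an example of where to look, not as recommendations:

    [osd]
    # upper bound on a single write to the journal device
    journal max write bytes = 10485760
    journal max write entries = 100
    # back-pressure: how much may queue for the journal before client IO stalls
    journal queue max bytes = 33554432
    journal queue max ops = 300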

Re: [ceph-users] New cluster performance analysis

2015-12-04 Thread Jan Schermer
> On 04 Dec 2015, at 14:31, Adrien Gillard wrote: > > After some more tests : > > - The pool being used as cache pool has no impact on performance, I get the > same results with a "dedicated" replicated pool. > - You are right Jan, on raw devices I get better performance on a volume if > I
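
To confirm whether the cache tier is actually in the IO path when comparing it with a plain replicated pool, the tiering relationships can be listed like this (the pool names in the output will of course be the cluster's own):

    # shows cache_mode, tier/tier_of and hit_set settings per pool
    ceph osd pool ls detail
    # or, equivalently
    ceph osd dump | grep pool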

Re: [ceph-users] New cluster performance analysis

2015-12-04 Thread Adrien Gillard
After some more tests: - The pool being used as cache pool has no impact on performance; I get the same results with a "dedicated" replicated pool. - You are right Jan, on raw devices I get better performance on a volume if I fill it first, or at least if I write a zone that has already been al
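
A minimal way to pre-allocate an RBD image before benchmarking, so every object already exists when the random-IO test runs (the pool/image names and the /dev/rbd0 path are placeholders, assuming the image is mapped with krbd):

    rbd map rbd/bench-vol
    # one full sequential write pass allocates all the backing objects
    dd if=/dev/zero of=/dev/rbd0 bs=4M oflag=direct
    # or the same thing with fio
    fio --name=prefill --filename=/dev/rbd0 --rw=write --bs=4M --direct=1 --ioengine=libaio --iodepth=16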

Re: [ceph-users] New cluster performance analysis

2015-12-03 Thread Adrien Gillard
I did some more tests: fio on a raw RBD volume (4K, numjob=32, QD=1) gives me around 3000 IOPS. I also tuned the xfs mount options on the client (I realized I hadn't done that already) and with "largeio,inode64,swalloc,logbufs=8,logbsize=256k,attr2,auto,nodev,noatime,nodiratime" I get better performance:
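
For reference, a fio invocation along the lines of the test described, plus a mount using those xfs options (device path, mountpoint and runtime are placeholders, not taken from the thread):

    fio --name=rbd-4k --filename=/dev/rbd0 --rw=randwrite --bs=4k \
        --numjobs=32 --iodepth=1 --direct=1 --ioengine=libaio \
        --runtime=60 --time_based --group_reporting

    mount -o largeio,inode64,swalloc,logbufs=8,logbsize=256k,noatime,nodiratime /dev/rbd0 /mnt/bench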

Re: [ceph-users] New cluster performance analysis

2015-12-03 Thread Nick Fisk
A couple of things to check: 1. Can you create just a normal, non-cached pool and test performance, to rule out any funnies going on there? 2. Can you also run something like iostat during the benchmarks and see if it looks like all your disks are getting saturated?
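
Roughly what those two checks look like on the command line (pool name, PG count and benchmark parameters are arbitrary examples):

    # 1. plain replicated pool with no cache tier, for an apples-to-apples rados bench
    ceph osd pool create benchplain 1024 1024 replicated
    rados bench -p benchplain 60 write -b 4096 -t 32
    # 2. per-disk utilisation on the OSD nodes while the benchmark runs
    iostat -xm 2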

Re: [ceph-users] New cluster performance analysis

2015-12-02 Thread Jan Schermer
> Let's take IOPS, assuming the spinners can do 50 (4k) synced sustained IOPS > (I hope they can do more ^^), we should be around 50x84/3 = 1400 IOPS, which > is far from rados bench (538) and fio (847). And surprisingly, the fio numbers are > greater than the rados ones. > I think the missing factor here i
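
Spelled out, the back-of-envelope estimate being quoted (reading 84 as the number of OSDs, 3 as the replication size, and 50 as the assumed sustained synced 4k IOPS per spinner) is:

    expected client IOPS ~ (number of OSDs x per-disk IOPS) / replication size
                         = (84 x 50) / 3
                         = 1400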