What queue depth are you testing at?

 

You will struggle to get much more than about 500 IOPS for a single-threaded 
write (queue depth 1), no matter what the backing disk is, because each write 
has to wait for the previous one to be acknowledged before the next is issued.
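
A rough sketch of the latency-bound arithmetic behind that figure; the ~2 ms 
per-write latency is an assumed illustration (network round trips plus journal 
commit), not a measurement from this cluster:

    # Latency-bound IOPS model (illustrative only).
    def iops(per_op_latency_s, queue_depth):
        # With queue_depth writes in flight and each write taking
        # per_op_latency_s to be acknowledged, throughput is capped at
        # queue_depth / latency (assuming the SSDs themselves keep up).
        return queue_depth / per_op_latency_s

    latency = 0.002                # assumed ~2 ms per acknowledged write
    print(iops(latency, 1))        # ~500 IOPS at queue depth 1
    print(iops(latency, 32))       # ~16000 IOPS possible at queue depth 32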

 

Nick

 

From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of 
lixuehui...@126.com
Sent: 27 May 2015 00:55
To: Vasiliy Angapov; Karsten Heymann
Cc: ceph-users
Subject: Re: [ceph-users] SSD IO performance

 

Hi,
Sorry, everyone: the network is 1000 Mbit/s. I stated it incorrectly before; 
it is not 100 Mbit/s.

 


lixuehui...@126.com

 

From: Vasiliy Angapov <anga...@gmail.com>

Date: 2015-05-26 22:36

To: Karsten Heymann <karsten.heym...@gmail.com>; lixuehui...@126.com

CC: ceph-users <ceph-users@lists.ceph.com>

Subject: Re: [ceph-users] SSD IO performance

Hi,

 

I guess the author here means that for random loads even a 100 Mbit/s network 
should be able to carry 2500-3000 IOPS at a 4 KB block size.

So the complaint is reasonable, I suppose.
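
A quick back-of-the-envelope sketch of that arithmetic in Python; the 4 KiB 
block size and the use of the raw link rate (ignoring protocol and replication 
overhead) are assumptions for illustration:

    # Network ceiling on small-block IOPS (rough figures).
    def max_iops(link_mbit_s, block_bytes):
        # Upper bound if the network link were the only limit.
        bytes_per_s = link_mbit_s * 1_000_000 / 8
        return bytes_per_s / block_bytes

    print(max_iops(100, 4096))         # ~3050 raw; ~2500-3000 after overhead
    print(100 * 1_000_000 / 8 / 1e6)   # 12.5 MB/s raw link rate
    print(max_iops(1000, 4096))        # ~30500 IOPS ceiling on 1 Gbit/s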

 

Regards, Vasily.  

 

On Tue, May 26, 2015 at 5:27 PM, Karsten Heymann <karsten.heym...@gmail.com> wrote:

Hi,

you should definitely increase the speed of the network. 100 Mbit/s is
way too slow for any use case I can think of, as it results in a
maximum data transfer rate of less than 10 MByte per second, which is
slower than a USB 2.0 thumb drive.

Best,
Karsten


2015-05-26 15:53 GMT+02:00 lixuehui...@126.com:
>
> Hi all:
>     I've built a Ceph 0.8 cluster of 2 nodes, each holding 5 SSD OSDs, on a
> 100 MB/s network. Testing an RBD device with the default configuration, the
> result is not ideal. Apart from the random r/w capability of the SSDs
> themselves, what should we change to get better performance?
>
>     2 nodes, 5 SSD OSDs each (10 OSDs total), 1 mon, 32 GB RAM
>     100 MB/s network
> Right now the overall result is just 500 IOPS. Should we change the filestore
> or journal settings? Thanks for any help!
>
> ________________________________
> lixuehui...@126.com
>


 




_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
