Mark, please read this: 
https://www.mail-archive.com/ceph-users@lists.ceph.com/msg12486.html

On 16 Oct 2014, at 19:19, Mark Wu <wud...@gmail.com> wrote:

> 
> Thanks for the detailed information, but I am already using fio with the rbd 
> engine. About 4 volumes can reach the peak.
> 
> On 17 Oct 2014, at 00:55, "Daniel Schwager" <daniel.schwa...@dtnet.de> wrote:
> Hi Mark,
> 
> maybe you want to check rbd-enabled fio:
> 
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
> 
>     yum install ceph-devel
>     git clone git://git.kernel.dk/fio.git
>     cd fio ; ./configure ; make -j5 ; make install
> 
> Set the number of jobs (== clients) inside the fio config to
> 
>     numjobs=8
> 
> to simulate multiple clients.
> 
> regards
> Danny
> 
> my test.fio:
> 
> [global]
> #logging
> #write_iops_log=write_iops_log
> #write_bw_log=write_bw_log
> #write_lat_log=write_lat_log
> ioengine=rbd
> clientname=admin
> pool=rbd
> rbdname=myimage
> invalidate=0    # mandatory
> rw=randwrite
> bs=1m
> runtime=120
> iodepth=8
> numjobs=8
> time_based
> #direct=0
> 
> [seq-write]
> stonewall
> rw=write
> 
> #[seq-read]
> #stonewall
> #rw=read
> 
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
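
One note on Danny's numjobs suggestion: by default fio reports statistics per job, so with numjobs=8 you get eight separate result blocks and have to sum them by hand to see aggregate client throughput. Adding group_reporting to the [global] section collapses them into one combined report. A minimal sketch of the relevant lines (reusing the pool/image names from Danny's example config):

```
[global]
ioengine=rbd
clientname=admin
pool=rbd
rbdname=myimage
numjobs=8
group_reporting    # report the 8 jobs as one aggregated result
```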


Cheers.
–––– 
Sébastien Han 
Cloud Architect 

"Always give 100%. Unless you're giving blood."

Phone: +33 (0)1 49 70 99 72 
Mail: sebastien....@enovance.com 
Address : 11 bis, rue Roquépine - 75008 Paris
Web : www.enovance.com - Twitter : @enovance 


