The client doesn't hit any bottleneck. I also tried running multiple
clients on different hosts, and there's no change.
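For reference, this is roughly how I watched the client side during a run (a
minimal sketch; "eth0" is an assumption, substitute your interface):

# per-core CPU usage every second; no core pegged near 100% => not CPU-bound
mpstat -P ALL 1
# per-NIC throughput, to rule out a network ceiling
sar -n DEV 1 | grep eth0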

2014-10-17 14:36 GMT+08:00 Alexandre DERUMIER <aderum...@odiso.com>:

> Hi,
> >>Thanks for the detailed information, but I am already using fio with
> >>the rbd engine. About 4 volumes are enough to reach the peak.
>
> What is your CPU usage of fio-rbd?
> Myself, I'm CPU-bound on 8 cores at around 40000 IOPS of 4K reads.
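>
> A quick way to compare (a rough sketch using the standard sysstat tools;
> adjust if your fio binary has a different process name):
>
> # per-process CPU of all running fio processes, sampled every second
> pidstat -u -p $(pgrep -d, -x fio) 1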
>
>
>
> ----- Mail original -----
>
> From: "Mark Wu" <wud...@gmail.com>
> To: "Daniel Schwager" <daniel.schwa...@dtnet.de>
> Cc: ceph-users@lists.ceph.com
> Sent: Thursday, 16 October 2014 19:19:17
> Subject: Re: [ceph-users] Performance doesn't scale well on a full ssd
> cluster.
>
>
>
> Thanks for the detailed information, but I am already using fio with the
> rbd engine. About 4 volumes are enough to reach the peak.
> On 17 October 2014 at 00:55, "Daniel Schwager" <daniel.schwa...@dtnet.de>
> wrote:
>
>
>
>
> Hi Mark,
>
> maybe you could try the rbd-enabled fio:
>
> http://telekomcloud.github.io/ceph/2014/02/26/ceph-performance-analysis_fio_rbd.html
>
> yum install ceph-devel
> git clone git://git.kernel.dk/fio.git
> cd fio ; ./configure ; make -j5 ; make install
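>
> After building, a quick sanity check that the rbd engine was actually
> compiled in (it is only built if the librbd headers were found at
> configure time):
>
> fio --enghelp | grep rbd    # "rbd" should appear in the engine list
> fio --enghelp=rbd           # lists the rbd-specific options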
>
> Set the number of jobs (== clients) inside the fio config, e.g.
> numjobs=8
> to simulate multiple clients.
>
>
> regards
> Danny
>
>
> my test.fio:
>
> [global]
> #logging
> #write_iops_log=write_iops_log
> #write_bw_log=write_bw_log
> #write_lat_log=write_lat_log
> ioengine=rbd          # talk to the cluster via librbd, no kernel mapping needed
> clientname=admin      # cephx user name, without the "client." prefix
> pool=rbd
> rbdname=myimage       # the image must already exist
> invalidate=0 # mandatory
> rw=randwrite
> bs=1m
> runtime=120
> iodepth=8
> numjobs=8             # number of parallel clients
>
> time_based
> #direct=0
>
>
> [seq-write]
> stonewall             # start only after previous jobs have finished
> rw=write              # overrides the global randwrite
>
> #[seq-read]
> #stonewall
> #rw=read
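>
> With the config above saved as test.fio (and assuming ceph.conf and the
> keyring are in their default /etc/ceph locations), the run is simply:
>
> fio test.fio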
>
>
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
