Hi Udo,
I am not sure I understood what you said.
Do you mean that the 'dd' writes were also cached on the OSD nodes?


On Sun, Dec 11, 2016 at 10:46 PM, Udo Lembke <ulem...@polarzone.de> wrote:

> Hi,
> but I assume you are also measuring cache effects in this scenario - the
> OSD nodes have cached the writes in their file buffers
> (because of this, the latency should be very small).
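>
> A minimal sketch of how to rule the page cache out, assuming shell access
> to the OSD nodes (the osd1..osd6 host names are placeholders, not from
> this thread):
>
> # Flush dirty pages and drop the Linux page cache on every OSD node
> # before a read benchmark, so reads have to come from disk, not RAM.
> for host in osd1 osd2 osd3 osd4 osd5 osd6; do
>     ssh "$host" 'sync && echo 3 | sudo tee /proc/sys/vm/drop_caches'
> done
>
> This only gives a clean starting point; the caches fill again while the
> benchmark runs.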
>
> Udo
>
> On 12.12.2016 03:00, V Plus wrote:
> > Thanks Somnath!
> > As you recommended, I executed:
> > dd if=/dev/zero bs=1M count=4096 of=/dev/rbd0
> > dd if=/dev/zero bs=1M count=4096 of=/dev/rbd1
> >
> > Then the output results look more reasonable!
> > Could you tell me why??
> >
> > By the way, the purpose of my run is to test the performance of RBD in
> > Ceph. Does my case mean that before every test, I have to "initialize"
> > all the images?
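> >
> > A sketch of doing that prefill with fio itself instead of dd (standard
> > fio options; the device paths are the ones above):
> >
> > # Write the first 4 GiB of each device, mirroring the dd commands.
> > # Dropping --size makes fio fill the whole block device instead.
> > sudo fio --name=prefill --filename=/dev/rbd0 --rw=write --bs=1M \
> >     --direct=1 --size=4096M
> > sudo fio --name=prefill --filename=/dev/rbd1 --rw=write --bs=1M \
> >     --direct=1 --size=4096M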
> >
> > Many thanks!
> >
> > On Sun, Dec 11, 2016 at 8:47 PM, Somnath Roy <somnath....@sandisk.com> wrote:
> >
> >     Fill up the images with big writes (say 1M) first before reading,
> >     and you should see sane throughput.
> >
> >
> >
> >     Thanks & Regards
> >
> >     Somnath
> >
> >     *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On
> >     Behalf Of *V Plus
> >     *Sent:* Sunday, December 11, 2016 5:44 PM
> >     *To:* ceph-users@lists.ceph.com
> >     *Subject:* [ceph-users] Ceph performance is too good (impossible..)...
> >
> >
> >
> >     Hi Guys,
> >
> >     We have a Ceph cluster with 6 machines (6 OSDs per host).
> >
> >     1. I created 2 images in Ceph and mapped them to another host A
> >     (outside the Ceph cluster). On host A, I got /dev/rbd0 and /dev/rbd1.
> >
> >     2. I started two fio jobs to perform a READ test on rbd0 and rbd1
> >     (the fio job descriptions can be found below):
> >
> >     */"sudo fio fioA.job -output a.txt & sudo  fio fioB.job -output
> >     b.txt  & wait"/*
> >
> >     3. After the test, in a.txt we got bw=1162.7MB/s, and in b.txt we
> >     got bw=3579.6MB/s.
> >
> >     The results do NOT make sense because there is only one NIC on
> >     host A, and its limit is 10 Gbps (1.25GB/s).
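> >
> >     (For scale: 1162.7 MB/s + 3579.6 MB/s is roughly 4.7 GB/s combined,
> >     nearly four times what the 10 Gbps link can carry, so most of that
> >     "read" data cannot have crossed the network.)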
> >
> >
> >
> >     I suspect it is because of the cache setting.
> >
> >     But I am sure that in the file /etc/ceph/ceph.conf on host A, I
> >     already added:
> >
> >     [client]
> >     rbd cache = false
> >
> >
> >
> >     Could anyone give me a hint about what is missing, and why?
> >
> >     Thank you very much.
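> >
> >     Two things worth checking here, as a sketch (the pool name "rbd" and
> >     the image names are assumptions, not taken from this message). First,
> >     "rbd cache" is a librbd option; the kernel client behind /dev/rbd0
> >     does not use librbd, so that setting has no effect on a mapped
> >     device. Second, a freshly created image has no backing RADOS objects
> >     yet, so reads of never-written extents are answered without moving
> >     data over the wire:
> >
> >     # How much of each image is actually allocated? A fresh image
> >     # reports ~0 used, so "reads" from it are mostly zero-fill.
> >     rbd du rbd/image0
> >     rbd du rbd/image1
> >
> >     # Count the backing RADOS objects (format 2 images use the
> >     # rbd_data prefix; none exist until data has been written).
> >     rados -p rbd ls | grep -c rbd_data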
> >
> >
> >
> >     *fioA.job:*
> >
> >     [A]
> >     direct=1
> >     group_reporting=1
> >     unified_rw_reporting=1
> >     size=100%
> >     time_based=1
> >     filename=/dev/rbd0
> >     rw=read
> >     bs=4MB
> >     numjobs=16
> >     ramp_time=10
> >     runtime=20
> >
> >
> >
> >     *fioB.job:*
> >
> >     [B]
> >     direct=1
> >     group_reporting=1
> >     unified_rw_reporting=1
> >     size=100%
> >     time_based=1
> >     filename=/dev/rbd1
> >     rw=read
> >     bs=4MB
> >     numjobs=16
> >     ramp_time=10
> >     runtime=20
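> >
> >     As a side note, a sketch only: both devices can be driven from one
> >     fio job file, with the shared options in a [global] section, instead
> >     of coordinating two fio processes with "& wait" (the fioAB.job file
> >     name is made up):
> >
> >     [global]
> >     direct=1
> >     group_reporting=1
> >     unified_rw_reporting=1
> >     size=100%
> >     time_based=1
> >     rw=read
> >     bs=4MB
> >     numjobs=16
> >     ramp_time=10
> >     runtime=20
> >
> >     [A]
> >     filename=/dev/rbd0
> >
> >     [B]
> >     # new_group keeps the per-device results in separate reporting groups
> >     new_group
> >     filename=/dev/rbd1
> >
> >     Run with: sudo fio fioAB.job --output ab.txt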
> >
> >
> >
> >     Thanks...
> >
>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
