Hi Guys,
We have a Ceph cluster with 6 machines (6 OSDs per host).
1. I created 2 images in Ceph and mapped them to another host A (*outside* the
Ceph cluster). On host A, I got */dev/rbd0* and */dev/rbd1*.
2. I started two fio jobs to perform READ tests on rbd0 and rbd1 (the fio job
descriptions can be found ...
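For reference, the mapping and a read job of the kind described above could look roughly like this; the image names, sizes and fio options below are assumptions, not the actual job files from the post:

    # create and map two images from host A (names and sizes assumed)
    rbd create rbd/img0 --size 102400
    rbd create rbd/img1 --size 102400
    rbd map rbd/img0    # appears as /dev/rbd0
    rbd map rbd/img1    # appears as /dev/rbd1

    # hypothetical job file for the rbd0 read test
    [global]
    rw=read
    bs=4M
    runtime=60
    time_based

    [rbd0-read]
    filename=/dev/rbd0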
> Somnath
>
> *From:* ceph-users [mailto:ceph-users-boun...@lists.ceph.com] *On Behalf
> Of *V Plus
> *Sent:* Sunday, December 11, 2016 5:44 PM
> *To:* ceph-users@lists.ceph.com
> *Subject:* [ceph-users] Ceph performance is too good (impossible..)...
>
>
>
>
> ... in a.txt, we got *bw=1162.7MB/s*; in b.txt, we got
> *bw=3579.6MB/s*.
>
> Mostly, this is due to the kernel buffer (page cache) of the client host.
>
>
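A rough way to rule the client page cache out is to flush it between runs and use direct I/O; the commands below are only a sketch, assuming root access on host A:

    sync
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
    # then re-run the read jobs, ideally with direct=1 set in the job files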
> -- Original --
> *From: * "Somnath Roy";
> *Date: * Mon, Dec 12, 2016 09:47 AM
> *To: * "V Plus"; "CEPH l
> ...with a Filestore backend, an added advantage of preconditioning the rbd is that
> the files in the filesystem will be created beforehand.
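A minimal sketch of such a preconditioning pass with fio (device names from the post, all other options assumed) might be:

    # fill both devices with real data before running the READ tests
    fio --name=precond-rbd0 --filename=/dev/rbd0 --rw=write --bs=4M \
        --ioengine=libaio --iodepth=16 --direct=1
    fio --name=precond-rbd1 --filename=/dev/rbd1 --rw=write --bs=4M \
        --ioengine=libaio --iodepth=16 --direct=1

Without a size= option, fio writes the whole block device, so every object behind the image exists before the read test starts.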
>
>
>
> Thanks & Regards
>
> Somnath
>
>
>
> *From:* V Plus [mailto:v.plussh...@gmail.com]
> *Sent:* Sunday, December 11, 2016 6:01 PM
>
> ...the writes end up in the file buffer
> (because of this, the latency should be very small).
>
> Udo
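If I read Udo right, the point is that dd without a sync/direct flag leaves the data in the page cache; a hedged variant of the dd commands quoted below that avoids this would be:

    # bypass the page cache (or append conv=fdatasync to flush at the end instead)
    dd if=/dev/zero bs=1M count=4096 of=/dev/rbd0 oflag=direct
    dd if=/dev/zero bs=1M count=4096 of=/dev/rbd1 oflag=direct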
>
> On 12.12.2016 03:00, V Plus wrote:
> > Thanks Somnath!
> > As you recommended, I executed:
> > dd if=/dev/zero bs=1M count=4096 of=/dev/rbd0
> > dd if=/dev/zero bs=1M count=4096 of=/dev/rbd1
... to solve this issue is to write the
image before the read test, as suggested.
I have no clue why the rbd engine does not work...
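For what it's worth, fio's rbd engine goes through librbd (no kernel mapping needed) and only works if fio was built with rbd support; a sketch of such a job, with pool and image names assumed, would be:

    [rbd-read]
    ioengine=rbd
    clientname=admin
    pool=rbd
    rbdname=img0
    invalidate=0
    rw=read
    bs=4M
    iodepth=16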
On Mon, Dec 12, 2016 at 4:23 PM, Will.Boege wrote:
> Try adding --ioengine=libaio
>
>
>
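In command-line form that suggestion would look something like this (device name and the other options are assumptions); since buffered libaio is not really asynchronous, direct=1 and an iodepth above 1 are usually added with it:

    fio --name=rbd0-read --filename=/dev/rbd0 --rw=read --bs=4M \
        --ioengine=libaio --iodepth=16 --direct=1 --runtime=60 --time_based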
> *From:* V Plus
> *Date:* Monday, December 12, 2016 at 2:40 PM