Re: [ceph-users] Guest sync write iops so poor.

2016-02-26 Thread Nick Fisk
Thanks Jan, that is an excellent explanation.

2016-02-26 Thread Jan Schermer

2016-02-26 Thread Jan Schermer

2016-02-26 Thread Nick Fisk
> Hi Nick, DB's IO pattern depends on config, mysql for example.

2016-02-26 Thread Huan Zhang

2016-02-26 Thread Huan Zhang

2016-02-26 Thread Nick Fisk

2016-02-25 Thread Huan Zhang
Since fio against /dev/rbd0 with sync=1 works well, the problem doesn't seem to be on the Ceph server side; is it related to the librbd (rbd_aio_flush) implementation?

2016-02-26 14:50 GMT+08:00 Huan Zhang:
> rbd engine with fsync=1 seems stuck.
> Jobs: 1 (f=1): [w(1)] [0.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1244d:10h:39m:18s]

2016-02-25 Thread Huan Zhang
rbd engine with fsync=1 seems stuck.
Jobs: 1 (f=1): [w(1)] [0.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1244d:10h:39m:18s]
But fio on /dev/rbd0 with sync=1 direct=1 ioengine=libaio iodepth=64 gets very high IOPS, ~35K, similar to a direct write. I'm confused by that result; IMHO, ceph could just i…
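The gap between ~400 and ~35K is consistent with per-write round-trip latency: at iodepth=1 each sync write waits out the full network plus journal commit, while iodepth=64 overlaps many writes. A back-of-the-envelope check, using only the numbers quoted in this thread (a sketch, not a measurement):

```python
# Rough queueing arithmetic: IOPS ≈ in-flight writes / per-write latency.
# Both input numbers come from this thread; nothing here is measured.
sync_iops = 400                        # in-VM sync=1, iodepth=1 result
latency_s = 1 / sync_iops              # implied per-write commit latency
print(f"per-write latency ≈ {latency_s * 1000:.1f} ms")    # ≈ 2.5 ms

iodepth = 64                           # the libaio test's queue depth
ceiling = iodepth / latency_s          # if 64 writes overlap perfectly
print(f"iodepth=64 ceiling ≈ {ceiling:.0f} IOPS")          # ≈ 25600
```

That ceiling is the same order of magnitude as the ~35K IOPS observed, which suggests the high number reflects parallelism hiding latency rather than sync semantics being cheap.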

2016-02-25 Thread Jason Dillaman
> 35K IOPS with ioengine=rbd sounds like the "sync=1" option doesn't actually work. Or it's not touching the same object (but I wonder whether write ordering is preserved at that rate?).

The fio rbd engine does not support "sync=1"; however, it should support "fsync=1" to accomplish roughly the same thing.
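For reference, an rbd-engine job using fsync=1 instead of sync=1 might look like the following sketch; the client, pool, and image names are placeholders, not values from this thread:

```ini
; hypothetical fio job: the rbd engine ignores sync=1, so flush per write
[rbd-fsync-randwrite]
ioengine=rbd
clientname=admin        ; assumed cephx user
pool=rbd                ; placeholder pool name
rbdname=testimg         ; placeholder image name
rw=randwrite
bs=4k
fsync=1                 ; issue a flush after every write
iodepth=1
runtime=60
time_based=1
```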

2016-02-25 Thread nick

2016-02-25 Thread Jan Schermer

2016-02-25 Thread Nick Fisk

[ceph-users] Guest sync write iops so poor.

2016-02-25 Thread Huan Zhang
Hi,
We test sync write IOPS with fio sync=1 for database workloads in a VM; the backend is librbd and ceph (an all-SSD setup). The result is disappointing: we only get ~400 IOPS for sync randwrite, from iodepth=1 up to iodepth=32. But testing on a physical machine with fio ioengine=rbd sync=1, we can reach ~35K IOPS.
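For concreteness, the in-VM test described above could be expressed as a fio job along these lines (a sketch; the guest device path is an assumption, and the sync/direct/iodepth settings mirror what the message reports):

```ini
; hypothetical fio job matching the in-VM sync randwrite test above
[guest-sync-randwrite]
ioengine=libaio
filename=/dev/vdb       ; assumed guest block device backed by librbd
rw=randwrite
bs=4k
direct=1
sync=1                  ; open with O_SYNC: every write must commit
iodepth=1               ; the thread reports ~400 IOPS from 1 through 32
runtime=60
time_based=1
```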