The rbd engine with fsync=1 seems stuck:
Jobs: 1 (f=1): [w(1)] [0.0% done] [0KB/0KB/0KB /s] [0/0/0 iops] [eta 1244d:10h:39m:18s]
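For reference, the job was roughly the following; only ioengine=rbd and
fsync=1 come from the run above, while the clientname/pool/rbdname values
and the 4k block size are placeholders I've assumed:

    [rbd-fsync-test]
    ioengine=rbd
    ; clientname/pool/rbdname are assumed placeholders
    clientname=admin
    pool=rbd
    rbdname=test-image
    rw=write
    ; 4k block size is an assumption
    bs=4k
    ; issue a flush after every write
    fsync=1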

But fio against /dev/rbd0 with sync=1 direct=1 ioengine=libaio iodepth=64
gets very high IOPS (~35K), similar to a plain direct write.
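For comparison, the /dev/rbd0 run was along these lines (the block size is
my assumption; the other options match the ones quoted above):

    fio --name=krbd-sync-test --filename=/dev/rbd0 --ioengine=libaio \
        --direct=1 --sync=1 --iodepth=64 --rw=write --bs=4k

With libaio, sync=1 opens the device with O_SYNC while direct=1 opens it
with O_DIRECT, so every write is both unbuffered and synchronous.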

I'm confused by that result. IMHO, Ceph could simply ignore the sync-cache
command, since it always writes synchronously to the journal, right?

Why do we get such poor sync IOPS, and how does Ceph handle it?
Any reply would be much appreciated!

2016-02-25 22:44 GMT+08:00 Jason Dillaman <dilla...@redhat.com>:

> > 35K IOPS with ioengine=rbd sounds like the "sync=1" option doesn't
> > actually work. Or it's not touching the same object (but I wonder
> > whether write ordering is preserved at that rate?).
>
> The fio rbd engine does not support "sync=1"; however, it should support
> "fsync=1" to accomplish roughly the same effect.
>
> Jason
>