On Mon, Dec 21, 2015 at 11:46 PM, Don Waterloo <don.water...@gmail.com> wrote:
> On 20 December 2015 at 22:47, Yan, Zheng <uker...@gmail.com> wrote:
>>
>> fio tests AIO performance in this case. cephfs does not handle AIO
>> properly; the AIO is actually performed as sync IO. That's why cephfs
>> is so slow in this case.
>>
>> Regards
>> Yan, Zheng
>>
>
> OK, so I changed the fio engine to 'sync' for the comparison of a single
> underlying OSD vs cephfs.
>
> cephfs w/ sync gives ~115 IOPS / ~500 KB/s.

This is normal because you were doing single-threaded sync IO. If the
round-trip time for each OSD request is about 10 ms (network latency),
you can only get about 100 IOPS.
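
As a rough sanity check (assuming fio's default 4k block size):

  1 s / ~10 ms per request ≈ 100 requests/s
  100 requests/s * 4 KB    ≈ 400 KB/s

which is in the same ballpark as the ~115 IOPS / ~500 KB/s you measured.
A minimal job that exercises this single-thread sync path would look
something like the sketch below (not your exact job file; the filename
is just an example, adjust it to your cephfs mount point):

  [sync-write]
  ioengine=sync
  rw=randwrite
  bs=4k
  direct=1
  numjobs=1
  size=1g
  filename=/mnt/cephfs/fio.test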

> the underlying OSD storage w/ sync is 6500 IOPS / 270 MB/s.
>
> I also don't think this explains why cephfs-fuse is faster (~5x faster,
> but still ~100x slower than it should be).
>

Direct IO is used in your test case. ceph-fuse does not handle direct IO
correctly; its user-space cache is still used in the direct-IO case.
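
A quick way to check whether direct IO is actually being honoured (a
rough test, not definitive) is to compare a buffered write that is
flushed at the end with one that requests O_DIRECT, e.g.:

  dd if=/dev/zero of=rw.data bs=4k count=10000 conv=fdatasync
  dd if=/dev/zero of=rw.data bs=4k count=10000 oflag=direct

On the kernel client the oflag=direct run should fall back to the
latency-bound numbers above, while on ceph-fuse the two can look
similar if its cache is still absorbing the writes, as described above.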

Regards
Yan, Zheng


> If I get rid of fio and use tried-and-true dd:
> time dd if=/dev/zero of=rw.data bs=256k count=10000
> on the underlying OSD storage it shows 426 MB/s.
> on cephfs it gets 694 MB/s.
>
> hmm.
>
> so I guess my 'lag' issue of slow requests is unrelated and is my real
> problem.
>
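
One caveat on those dd numbers: without a sync flag, dd is largely
measuring how fast the page cache accepts the data, so both the
426 MB/s and the 694 MB/s figures say more about caching than about
the storage underneath. A variant that includes the flush to stable
storage (your command with only the flag added) would be:

  time dd if=/dev/zero of=rw.data bs=256k count=10000 conv=fdatasync

That usually gives a more meaningful number when chasing the slow
requests / 'lag' issue.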