I have no experience with XFS, but I wouldn't expect poor behaviour from
it. I use ZFS myself and know that it combines writes; btrfs might also be
an option.

Do you know what block size was used to create the XFS filesystem? It looks
like 4k is the default (which is reasonable), with a maximum of 64k. Perhaps
a larger block size will give better performance for your particular use
case. (I use a 1M record size with ZFS.)
http://xfs.org/docs/xfsdocs-xml-dev/XFS_User_Guide/tmp/en-US/html/ch04s02.html
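
For reference, a rough sketch of the relevant commands (the device, pool,
and dataset names below are placeholders; an mkfs-time change means the
OSD's data would have to be recreated):

  # XFS: the block size is fixed at mkfs time
  mkfs.xfs -f -b size=65536 /dev/sdX1
  # Caveat: Linux will only mount an XFS filesystem whose block size is
  # <= the kernel page size (4k on most x86_64 systems), so 64k blocks
  # may not be usable in practice.

  # ZFS: record size is a per-dataset property (affects new writes only;
  # values above 128k need the large_blocks pool feature)
  zfs set recordsize=1M tank/osd0

Whether a larger block/record size actually helps will depend on how the
OSD issues its reads, so it's worth benchmarking before redeploying.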


On Tue, Nov 29, 2016 at 10:23 AM Thomas Bennett <tho...@ska.ac.za> wrote:

> Hi Kate,
>
> Thanks for your reply. We currently use xfs as created by ceph-deploy.
>
> What would you recommend we try?
>
> Kind regards,
> Tom
>
>
> On Tue, Nov 29, 2016 at 11:14 AM, Kate Ward <kate.w...@forestent.com>
> wrote:
>
> What filesystem do you use on the OSD? Have you considered a different
> filesystem that is better at combining requests before they get to the
> drive?
>
> k8
>
> On Tue, Nov 29, 2016 at 9:52 AM Thomas Bennett <tho...@ska.ac.za> wrote:
>
> Hi,
>
> We have a use case where we are reading 128MB objects off spinning disks.
>
> We've benchmarked a number of different hard drives and have noticed
> that, for one particular drive, we're experiencing comparatively slow reads.
>
> This occurs when we have multiple readers (even just 2) reading objects
> off the OSD.
>
> We've recreated the effect using iozone and have noticed that once the
> record size drops to 4k, the hard drive misbehaves.
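>
> (A throughput run along these lines -- the mount point, file names, and
> sizes here are placeholders rather than our exact invocation -- should
> show the effect:
>
>   iozone -t 2 -s 1g -r 4k -i 0 -i 1 -F /mnt/osd-test/f1 /mnt/osd-test/f2
>
> i.e. two concurrent workers on 1 GB files with 4 KB records, writing the
> files first and then reading them back.)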
>
> Is there a setting in Ceph that we can change to fix the minimum read size
> when the ceph-osd daemon reads objects off the hard drives, so we can see
> whether that overcomes the overall slow read rate?
>
> Cheers,
> Tom
>
>
>
>
> --
> Thomas Bennett
>
> SKA South Africa
> Science Processing Team
>
> Office: +27 21 5067341
> Mobile: +27 79 5237105
>
