We have SSD journals, and the backend disks are actually on SSD-fronted bcache
devices in writeback mode. The client VMs have rbd cache enabled too...
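
A rough way to sanity-check that setup from one of the nodes (the bcache
device name and ceph.conf path below are assumptions, not our exact layout):

  # Sketch: confirm bcache is in writeback mode and see whether rbd cache
  # is set explicitly for clients. Device name (bcache0) and config path
  # are illustrative only.
  import configparser

  with open("/sys/block/bcache0/bcache/cache_mode") as f:
      # the active mode is shown in brackets, e.g. "writethrough [writeback] ..."
      print("bcache cache_mode:", f.read().strip())

  conf = configparser.ConfigParser(strict=False)
  conf.read("/etc/ceph/ceph.conf")
  # if unset here, the compiled-in default applies
  print("rbd cache:", conf.get("client", "rbd cache", fallback="(not set)"))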

-Simon


On Fri, Oct 31, 2014 at 4:07 PM, Nick Fisk <[email protected]> wrote:

> Hmmm, it sounds like you are just saturating the spindles to the point
> that latency starts to climb to unacceptable levels. The problem is that
> no matter how much tuning you apply, at some point the writes have to be
> put down to the disks, and at that point performance will suffer.
>
>
>
> Do your OSDs have SSD journals?  In storage, adding some sort of
> writeback cache (in Ceph’s case, the journals) normally helps to lessen
> the impact of writes by absorbing bursts and by coalescing writes into a
> more sequential pattern for the underlying disks.
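>
> As a toy illustration (nothing like Ceph’s actual journal code), coalescing
> merges adjacent extents so the backing disk sees fewer, larger, more
> sequential writes:
>
>   # Toy sketch: merge overlapping/adjacent (offset, length) writes.
>   def coalesce(writes):
>       merged = []
>       for off, length in sorted(writes):
>           if merged and off <= merged[-1][0] + merged[-1][1]:
>               last_off, last_len = merged[-1]
>               merged[-1] = (last_off, max(last_len, off + length - last_off))
>           else:
>               merged.append((off, length))
>       return merged
>
>   # six small scattered writes collapse into two sequential extents
>   print(coalesce([(0, 4), (4, 4), (8, 4), (100, 4), (104, 4), (12, 4)]))
>   # -> [(0, 16), (100, 8)]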
>
>
>
> *From:* ceph-users [mailto:[email protected]] *On Behalf
> Of *Xu (Simon) Chen
> *Sent:* 31 October 2014 19:51
> *To:* Nick Fisk
> *Cc:* [email protected]
> *Subject:* Re: [ceph-users] prioritizing reads over writes
>
>
>
> I am already using the deadline scheduler, with the default parameters:
>
> read_expire=500
>
> write_expire=5000
>
> writes_starved=2
>
> front_merges=1
>
> fifo_batch=16
>
>
>
> I remember tuning them before; it didn't make a great difference.
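>
> Those values live under /sys/block/<dev>/queue/iosched/ while deadline is
> active. A rough sketch for reading them back and biasing further towards
> reads (hypothetical device sdb, run as root):
>
>   # Sketch: dump the deadline tunables and raise write_expire so writes
>   # can be deferred longer in favour of reads. Device name is illustrative.
>   import os
>
>   base = "/sys/block/sdb/queue/iosched"
>   for name in ("read_expire", "write_expire", "writes_starved",
>                "front_merges", "fifo_batch"):
>       with open(os.path.join(base, name)) as f:
>           print(name, "=", f.read().strip())
>
>   with open(os.path.join(base, "write_expire"), "w") as f:
>       f.write("10000")   # milliseconds; not persistent across reboot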
>
>
>
> -Simon
>
>
>
> On Fri, Oct 31, 2014 at 3:43 PM, Nick Fisk <[email protected]> wrote:
>
> Hi Simon,
>
>
>
> Have you tried using the Deadline scheduler on the Linux nodes? The
> deadline scheduler prioritises reads over writes. I believe it tries to
> service all reads within 500ms whilst writes can be delayed up to 5s.
>
>
>
> I don’t know the exact effect Ceph will have on top of this, but this
> would be the first thing I would try.
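>
> Checking and switching the scheduler is just a sysfs write, something like
> this (hypothetical device sdb, run as root on each OSD node):
>
>   # Sketch: show the current I/O scheduler and select deadline.
>   # The device name is an assumption; repeat per OSD data disk.
>   path = "/sys/block/sdb/queue/scheduler"
>
>   with open(path) as f:
>       print(f.read().strip())   # e.g. "noop deadline [cfq]" - active one in brackets
>
>   with open(path, "w") as f:
>       f.write("deadline")       # takes effect immediately, not persistent across reboot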
>
>
>
> Nick
>
>
>
> *From:* ceph-users [mailto:[email protected]] *On Behalf
> Of *Xu (Simon) Chen
> *Sent:* 31 October 2014 19:37
> *To:* [email protected]
> *Subject:* [ceph-users] prioritizing reads over writes
>
>
>
> Hi all,
>
>
>
> My workload is mostly writes, but when the writes reach a certain
> throughput (IOPS-wise, not much higher), the read throughput tanks. This
> seems to be impacting my VMs' responsiveness overall. Reads recover after
> write throughput drops.
>
>
>
> Is there any way to prioritize reads over writes, or at least guarantee a
> certain level of aggregated read throughput in a cluster?
>
>
>
> Thanks.
>
> -Simon
>
>
>
>
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
