Hey Mark,

Sorry I missed your message; I'm only subscribed to the daily digests.


> Date: Tue, 3 May 2016 09:05:02 -0500
> From: Mark Nelson <mnel...@redhat.com>
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users] Erasure pool performance expectations
> In addition to what Nick said, it's really valuable to watch your cache
> tier write behavior during heavy IO.  One thing I noticed is you said
> you have 2 SSDs for journals and 7 SSDs for data.


I thought the hardware recommendation was one journal disk per 3-4 data
disks (2 journals for 7 data disks is about 1:3.5), but I may have
misunderstood it. Looking at my journal reads/writes, they seem OK though:
https://www.dropbox.com/s/er7bei4idd56g4d/Screenshot%202016-05-06%2009.55.30.png?dl=0
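
In case it's useful context, this is roughly how I've been watching the
journal devices (a minimal sketch; sdb and sdc are just stand-ins for my
two journal SSDs):

    # Per-device utilization and throughput, refreshed every 5 seconds.
    # If %util on the journal SSDs sits near 100% while the data SSDs
    # stay mostly idle, writes are journal-bound.
    iostat -x sdb sdc 5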

However, I've started running into a lot of slow requests (I made a
separate thread for those: "Diagnosing slow requests"), and now I'm hoping
those could be related to my journal setup.


> If they are all of
> the same type, you're likely bottlenecked by the journal SSDs for
> writes, which compounded with the heavy promotions is going to really
> hold you back.
> What you really want:
> 1) (assuming filestore) equal large write throughput between the
> journals and data disks.

How would one achieve that?
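
My naive reading is that, since every filestore write hits a journal
before the data disk, the 2 journal SSDs together would need roughly the
sequential-write bandwidth that the 7 data SSDs can absorb. If that's
right, a quick fio run per device class should show whether the journals
fall short (a sketch only; /dev/sdX is a placeholder, and writing to a raw
device destroys its contents, so only on an unused disk):

    # Sustained large sequential writes on a single SSD.
    # Compare 2 x (journal SSD result) against 7 x (data SSD result).
    fio --name=seqwrite --filename=/dev/sdX --rw=write --bs=1M \
        --ioengine=libaio --iodepth=16 --direct=1 \
        --runtime=30 --time_based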

>
> 2) promotions to be limited by some reasonable fraction of the cache
> tier and/or network throughput (say 70%).  This is why the
> user-configurable promotion throttles were added in jewel.

Are these already in the docs somewhere?
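
For what it's worth, the two options I turned up that look like the Jewel
promotion throttles are osd_tier_promote_max_objects_sec and
osd_tier_promote_max_bytes_sec; please correct me if those are the wrong
knobs. Something like this should apply them at runtime (the values are
just examples, not recommendations):

    # Cap promotions per OSD; injectargs only affects the running
    # daemons, so anything that works should also go into ceph.conf.
    ceph tell osd.* injectargs '--osd_tier_promote_max_objects_sec 25 --osd_tier_promote_max_bytes_sec 5242880'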

>
> 3) The cache tier to fill up quickly when empty but change slowly once
> it's full (ie limiting promotions and evictions).  No real way to do
> this yet.
> Mark
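
Understood. For what it's worth, the closest existing knobs I'm aware of
are the recency settings, which at least make promotion pickier once hit
sets have accumulated ("cachepool" is a placeholder for my cache pool
name):

    # Require an object to appear in N recent hit sets before promoting it.
    ceph osd pool set cachepool min_read_recency_for_promote 2
    ceph osd pool set cachepool min_write_recency_for_promote 2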


Thanks for your thoughts.

Peter