Hello,

On Tue, 22 Mar 2016 12:28:22 -0400 Maran wrote:

> Hey guys,
> 
> I'm trying to wrap my head about the Ceph Cache Tiering to discover if
> what I want is achievable.
> 
> My cluster exists of 6 OSD nodes with normal HDD and one cache tier of
> SSDs.
> 
One cache tier being what, one node?
That's a SPOF and a disaster waiting to happen.

Also, please include the usual details (so we're not comparing apples with
oranges): the exact SSD models, OS, Ceph version, network setup, everything.
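For reference, most of those details can be pulled straight from the cluster
with the standard CLI, something like:

```shell
# Ceph version on this node
ceph -v

# Cluster topology: hosts, OSDs, and the CRUSH hierarchy
ceph osd tree

# Pool definitions, including any cache tier settings
ceph osd dump | grep -E 'pool|tier'
```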

> What I would love is that Ceph flushes and evicts data as soon as a file
> hasn't been requested by a client in a certain timeframe, even if there
> is enough space to keep it there longer. The reason I would prefer this
> is that I have a feeling overall performance suffers if new writes are
> coming into the cache tier while at the same time flush and evicts are
> happening.
> 
You will want to read my recent thread titled
"Cache tier operation clarifications",
where I asked about something along those lines.

The best thing you can do right now, and what I'm planning to do if flushing
turns out to be detrimental performance-wise (evictions should have very
little impact), is to lower the target ratios during low-utilization periods
and raise them again for peak times.
Again, read the thread above.
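As a sketch of what I mean, assuming your cache pool is named "cache"
(substitute your pool name, and pick ratios that fit your workload), the
ratios can be toggled from cron:

```shell
# Off-peak: lower the targets so flushing/eviction happens while quiet.
ceph osd pool set cache cache_target_dirty_ratio 0.2
ceph osd pool set cache cache_target_full_ratio 0.6

# Peak hours: raise them again so flushing doesn't compete with client I/O.
ceph osd pool set cache cache_target_dirty_ratio 0.5
ceph osd pool set cache cache_target_full_ratio 0.8
```

Note that cache_min_flush_age and cache_min_evict_age only set a minimum age
before an object may be flushed/evicted; there is no "evict after N seconds
idle" knob of the kind you're describing.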

> It also seems that for some reason my cache node is not using the
> cluster network as much as I expected. Where all HDD nodes are using the
> cluster network to the fullest (multiple TBs) my SSD node only used 1GB
> on the cluster network. Is there anyway to diagnose this problem or is
> this intended behaviour? I expected the flushes to happen over the
> cluster network.
>
That is to be expected: from the Ceph perspective the cache tier is a client
of the base tier, so flushes travel over the public network.
 
Unfortunate, but AFAIK there are no plans to change this behavior.
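To illustrate (addresses below are made up, adjust to your environment):
flush writes from the cache OSDs to the base pool go over the public network,
and only the resulting OSD-to-OSD replication of those writes uses the
cluster network, per the usual split in ceph.conf:

```
[global]
public network  = 192.168.0.0/24   # client <-> OSD traffic, incl. cache flushes
cluster network = 10.0.0.0/24      # OSD <-> OSD replication and recovery
```

That matches what you're seeing: heavy cluster-network traffic on the HDD
nodes, very little on the SSD node.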

> I appreciate any pointers you might have for me.
> 
You will also want to definitely read the recent thread titled 
"data corruption with hammer".

Christian
-- 
Christian Balzer        Network/Systems Engineer                
ch...@gol.com           Global OnLine Japan/Rakuten Communications
http://www.gol.com/
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
