Christian, thanks for your reply.

2017-06-02 11:39 GMT+08:00 Christian Balzer <ch...@gol.com>:

> On Fri, 2 Jun 2017 10:30:46 +0800 jiajia zhong wrote:
>
> > hi guys:
> >
> > Our ceph cluster is working with tier cache.
> If so, then I suppose you read all the discussions here as well and not
> only the somewhat lacking documentation?
>
> > I am running "rados -p data_cache cache-try-flush-evict-all" to evict all
> > the objects.
> Why?
> And why all of it?


We found that when the flush/evict thresholds are triggered, performance drops enough to make us a bit upset :). So I would like to flush/evict the tier during a quiet window, e.g. in the middle of the night. That way the tier does not have to spend effort on flushing/evicting while heavy read/write traffic is hitting the CephFS we run on top of it.
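What I have in mind is just a nightly cron job, roughly like the sketch below (pool name data_cache is from my original mail; the 02:00 schedule and log path are only placeholders for our quiet window):

  # hypothetical /etc/cron.d/ceph-cache-flush entry
  # flush/evict the cache tier at 02:00, when client load on cephfs is low
  0 2 * * * root rados -p data_cache cache-try-flush-evict-all >> /var/log/ceph-cache-flush.log 2>&1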

>
> > But It a bit slow
> >
> Define slow, but it has to do a LOT of work and housekeeping to do this,
> so unless your cluster is very fast (probably not, or you wouldn't
> want/need a cache tier) and idle, that's the way it is.
>
> > 1. Is there any way to speed up the evicting?
> >
> Not really, see above.
>
> > 2. Is evicting triggered by itself good enough for cluster ?
> >
> See above, WHY are you manually flushing/evicting?
>
explained above.


> Are you aware that flushing is the part that's very I/O intensive, while
> evicting is a very low cost/impact operation?
>
I was not entirely sure, but that matches what my instinct told me.
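If I understand the per-object commands correctly, the difference would look roughly like this (object name "foo" is just a placeholder):

  # flush: write the dirty object down to the backing pool (the expensive I/O part)
  rados -p data_cache cache-try-flush foo
  # evict: drop the now-clean copy from the cache tier (cheap bookkeeping)
  rados -p data_cache cache-evict foo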


> In normal production, the various parameters that control this will do
> fine, if properly configured of course.
>
> > 3. Does the flushing and evicting slow down the whole cluster?
> >
> Of course, as any good sysadmin with the correct tools (atop, iostat,
> etc, graphing Ceph performance values with Grafana/Graphite) will be able
> to see instantly.

Actually, we are using Graphite, but I cannot see it instantly, lol :(. I can only tell that a threshold was triggered by calculating it after the fact.
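Maybe what I should be graphing is the pool's dirty/usage stats against the configured targets, something like the following (assuming a Jewel-era CLI and the pool name data_cache as above):

  # the thresholds the tiering agent works against
  ceph osd pool get data_cache target_max_bytes
  ceph osd pool get data_cache cache_target_dirty_ratio
  ceph osd pool get data_cache cache_target_full_ratio
  # current per-pool usage, including the DIRTY count for cache pools
  ceph df detail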

BTW, we use CephFS to store a huge number of small files (64 TB in total, about 100 KB per file).


>
>
> Christian
> --
> Christian Balzer        Network/Systems Engineer
> ch...@gol.com           Rakuten Communications
>
