On Fri, Jun 12, 2015 at 11:59 AM, Lincoln Bryant wrote:
Thanks John, Greg.

If I understand this correctly, then, doing this:

rados -p hotpool cache-flush-evict-all

should start appropriately deleting objects from the cache pool. I just started
one up, and that seems to be working.

Otherwise, the cache's configured timeouts/limits should get the flushing and
eviction done on their own.
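
For reference, the timeouts/limits in question are per-pool cache-tier settings
along these lines (the pool name matches the command above, but the values here
are only examples, not what is actually set on this cluster):

  # cap the cache pool so the tiering agent has something to work against
  ceph osd pool set hotpool target_max_bytes 1099511627776    # example: ~1 TiB
  ceph osd pool set hotpool target_max_objects 1000000
  # flush dirty objects at 40% of the target, evict clean ones at 80%
  ceph osd pool set hotpool cache_target_dirty_ratio 0.4
  ceph osd pool set hotpool cache_target_full_ratio 0.8
  # don't flush/evict objects younger than these ages (seconds)
  ceph osd pool set hotpool cache_min_flush_age 600
  ceph osd pool set hotpool cache_min_evict_age 1800

As far as I understand it, the dirty/full ratios are interpreted relative to
target_max_bytes/target_max_objects, so the agent does nothing automatically
unless at least one of those targets is set.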
On Fri, Jun 12, 2015 at 11:07 AM, John Spray wrote:
Just had a go at reproducing this, and yeah, the behaviour is weird.
Our automated testing for cephfs doesn't include any cache tiering, so
this is a useful exercise!
With a writeback overlay cache tier pool on an EC pool, I write a bunch
of files, then do a rados cache-flush-evict-all, the
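
For anyone following along, a writeback cache tier over an EC pool can be set
up roughly like this -- a sketch only; the EC profile, pool names, and PG
counts below are placeholders, not the exact ones used in the test:

  # EC base pool plus a writeback cache tier overlaid on it (example names)
  ceph osd erasure-code-profile set testprofile k=2 m=1
  ceph osd pool create ecpool 64 64 erasure testprofile
  ceph osd pool create hotpool 64 64
  ceph osd tier add ecpool hotpool
  ceph osd tier cache-mode hotpool writeback
  ceph osd tier set-overlay ecpool hotpool
  # hit sets so the tiering agent can track object usage
  ceph osd pool set hotpool hit_set_type bloom
  # (metadata pool and CephFS creation omitted here)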
Greetings experts,
I've got a test set up with CephFS configured to use an erasure-coded pool +
cache tier on 0.94.2.
I have been writing lots of data to fill the cache to observe the behavior and
performance when it starts evicting objects to the erasure-coded pool.
The thing I have noticed
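
A simple way to watch the cache fill and drain is to poll the per-pool stats,
e.g. (the cache pool name is taken from earlier in the thread; the EC pool name
is a placeholder for whatever the base pool is called):

  # per-pool object/byte counts, to see data move from the cache to the EC pool
  watch -n 10 'rados df'
  ceph df detail
  ceph osd pool stats hotpool
  # rough count of objects currently sitting in the cache pool
  rados -p hotpool ls | wc -l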