>> >> I think the timing should work out such that we'll be deploying with
>> >> Firefly, and so will have Ceph cache pool tiering as an option, but
>> >> I'm also evaluating Bcache versus Tier to act as a node-local block
>> >> cache device. Does anybody have real or anecdotal evidence about
>> >> which approach has better performance?
>> > New idea that is dependent on failure behaviour of the cache tier...
>>
>> The problem with this type of configuration is that it ties a VM to a
>> specific hypervisor. In theory it should be faster, because you avoid
>> the network latency of round trips to the cache tier, resulting in
>> higher IOPS. Large sequential workloads, however, may achieve higher
>> throughput by parallelizing across many OSDs in a cache tier, whereas
>> a local flash device would be limited to single-device throughput.
>
> Ah, I was ambiguous. When I said node-local I meant OSD-local. So I'm
> really looking at:
>   2-copy write-back SSD object cache-pool
> versus
>   OSD-local write-back SSD block cache
> versus
>   1-copy write-around object cache-pool & SSD journals

Ceph cache pools allow you to scale the size of the cache pool
independently of the underlying storage, and they avoid the disk:SSD
ratio constraints that local caches (flashcache, bcache, etc.) impose.
A local block cache should have lower latency than a cache tier on a
cache miss, since the cache tier adds extra hop(s) across the network.
Even so, I would lean towards Ceph's cache tiers for the scaling
independence.
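For reference, attaching a write-back cache tier on Firefly looks roughly
like this. Pool names ("data-pool", "cache-pool") and the CRUSH ruleset
number are placeholders for this sketch, not a tuned configuration:

```shell
# Create the SSD-backed cache pool; ruleset 4 is assumed to be an
# SSD-only CRUSH rule in this sketch.
ceph osd pool create cache-pool 512 512
ceph osd pool set cache-pool crush_ruleset 4

# Attach it as a write-back tier in front of the backing pool.
ceph osd tier add data-pool cache-pool
ceph osd tier cache-mode cache-pool writeback
ceph osd tier set-overlay data-pool cache-pool

# Firefly needs a HitSet for promotion decisions, plus a size bound
# so the tiering agent knows when to flush and evict.
ceph osd pool set cache-pool hit_set_type bloom
ceph osd pool set cache-pool target_max_bytes 1099511627776   # 1 TiB
```

The closest built-in mode to a write-around cache is `readonly` in place
of `writeback`; reads are served and promoted through the cache while
writes go straight to the backing pool.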

> This is undoubtedly true for a write-back cache-tier. But in the
> scenario I'm suggesting, a write-around cache, that needn't be bad
> news - if a cache-tier OSD is lost, the cache simply got smaller and
> some cached objects were unceremoniously flushed. The next read on
> those objects should just miss and bring them back into the
> now-smaller cache.
>
> The thing I'm trying to avoid with the above is double read-caching of
> objects (so as to get more aggregate read cache). I assume the
> standard wisdom with write-back cache-tiering is that the backing data
> pool shouldn't bother with SSD journals?

Currently, all cache tiers need to be durable, regardless of cache
mode. As such, the cache tier should be a replicated pool (erasure
coding is only supported for the backing pool, not the cache tier),
and I'd recommend 2x or, better, 3x replication. Ceph could
potentially do what you describe in the future; it just doesn't yet.
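Concretely, making the cache tier durable just means setting ordinary
replication on the cache pool (pool name again a placeholder):

```shell
# Keep 3 replicas of each cached object; require at least 2 live
# copies before serving I/O.
ceph osd pool set cache-pool size 3
ceph osd pool set cache-pool min_size 2
```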

-- 

Kyle
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
