----- Message from Sage Weil <sw...@redhat.com> -----
Date: Thu, 31 Jul 2014 08:51:34 -0700 (PDT)
From: Sage Weil <sw...@redhat.com>
Subject: Re: [ceph-users] cache pool osds crashing when data is evicting to underlying storage pool
To: Kenneth Waegeman <kenneth.waege...@ugent.be>
Cc: ceph-users <ceph-users@lists.ceph.com>
Hi Kenneth,
On Thu, 31 Jul 2014, Kenneth Waegeman wrote:
Hi all,
We have an erasure-coded pool 'ecdata' and a replicated pool 'cache' acting as a writeback cache on top of it.
When running 'rados -p ecdata bench 1000 write', it starts filling up the
'cache' pool as expected.
To see what happens when it starts evicting, I ran:
ceph osd pool set cache target_max_bytes $((200*1024*1024*1024))
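(For context: besides target_max_bytes, flushing and eviction on a cache pool are also governed by ratio settings; a minimal sketch with purely illustrative values, not the ones used in this test:
ceph osd pool set cache cache_target_dirty_ratio 0.4
ceph osd pool set cache cache_target_full_ratio 0.8)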
When it starts to evict objects to 'ecdata', the cache OSDs all crash. I logged an issue: http://tracker.ceph.com/issues/8982
I set up the cache tier with these commands:
ceph osd pool create cache 1024 1024
ceph osd erasure-code-profile set profile11 k=8 m=3 ruleset-failure-domain=osd
ceph osd pool create ecdata 128 128 erasure profile11
ceph osd tier add ecdata cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay ecdata cache
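(As a sanity check, the tier wiring can be verified with something along these lines; 'ceph osd dump' should show the 'ecdata' pool with 'cache' listed as its read/write tier:
ceph osd dump | grep -E "'(ecdata|cache)'")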
Is there something else that I should configure?
I think you just need to enable the hit_set tracking. It's
obviously not supposed to crash when you don't, though; I'll fix
that up shortly.
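(For reference, hit set tracking is configured per cache pool; a minimal sketch with illustrative values, one bloom-filter hit set rotated hourly:
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache hit_set_count 1
ceph osd pool set cache hit_set_period 3600)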
Thanks, it seems to work now! I saw that the values of hit_set_count, etc. are set to zero. What does this mean? Is this just the default value, or does it actually mean '0' (in which case they should always be set)?
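(The values currently applied to the pool can be read back with 'ceph osd pool get', assuming the running release exposes the hit_set settings there:
ceph osd pool get cache hit_set_type
ceph osd pool get cache hit_set_count
ceph osd pool get cache hit_set_period)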
By the way, although we love seeing testing on the development releases, you probably want to be using firefly if this is destined for production.
We are still in the testing phase, so we can keep working with the development releases :-)
Thanks!
sage
----- End message from Sage Weil <sw...@redhat.com> -----
--
Kind regards,
Kenneth Waegeman
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com