I'm quite clear now.
> this is my test setup - that's why I'm trying to break it and fix it
(best way to learn)
Thanks for your feedback!!
Kinjo
On Sun, Jul 5, 2015 at 8:51 PM, Jacek Jarosiewicz <
jjarosiew...@supermedia.pl> wrote:
Well, the docs say that when your OSDs get full you should add another
OSD, and the cluster should redistribute the data by itself:
http://ceph.com/docs/master/rados/troubleshooting/troubleshooting-osd/#no-free-drive-space
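The "add another OSD and let the cluster rebalance" approach from the docs looks roughly like this on a hammer-era (0.94) cluster; the hostname and device below are placeholders, not from the thread:

```shell
# Check per-OSD utilization to find the OSD that hit the full ratio
ceph osd df
ceph health detail

# Add a new OSD (node1 and /dev/sdb are placeholder names for this setup);
# once it is "in" and "up", CRUSH rebalances data onto it automatically
ceph-deploy osd create node1:/dev/sdb

# Watch backfill/recovery progress until the cluster is HEALTH_OK again
ceph -w
```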
this is my test setup - that's why I'm trying to break it and fix it
(best way to learn)
That's good!
So was the root cause that the OSD was full? What are your thoughts
on that?
Was there any reason to delete any files?
Kinjo
On Sun, Jul 5, 2015 at 6:51 PM, Jacek Jarosiewicz <
jjarosiew...@supermedia.pl> wrote:
ok, I got it working...
First I manually deleted some files from the full OSD, set the noout
flag, and restarted the OSD daemon.
Then I waited a while for the cluster to backfill PGs, and after that
the rados -p cache cache-try-flush-evict-all command went OK.
I'm wondering though, because t
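The recovery sequence described above can be sketched as follows; the OSD id and sysvinit-style restart are assumptions for a 0.94/hammer deployment, not details given in the thread:

```shell
# Prevent CRUSH from marking the full OSD out while we work on it
ceph osd set noout

# (after manually removing a few files from the full OSD's data directory)
# restart the OSD daemon -- osd.3 is a placeholder id
/etc/init.d/ceph restart osd.3

# Wait for backfill to finish, then flush and evict the cache pool
ceph -s
rados -p cache cache-try-flush-evict-all

# Re-enable normal out-marking behaviour afterwards
ceph osd unset noout
```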
Hi,
I'm currently testing cache tiering on ceph 0.94.2 - I've set up an
erasure-coded pool with a cache pool on SSD drives; the cache mode is
set to writeback. I tried to fill the cache and see how it would flush
objects, but the cache pool is full and I can't flush-evict any
objects... and there a
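A minimal sketch of a setup like the one described, using the hammer-era tiering commands; pool names, PG counts, and the size limit are assumptions, not values from the thread:

```shell
# Create the backing erasure-coded pool and the SSD-backed cache pool
ceph osd pool create ecpool 128 128 erasure
ceph osd pool create cache 128 128

# Attach the cache pool as a writeback tier in front of the EC pool
ceph osd tier add ecpool cache
ceph osd tier cache-mode cache writeback
ceph osd tier set-overlay ecpool cache

# Without a hit set and a size bound, the tiering agent has nothing to
# act on -- set them so flushing/eviction can actually trigger
ceph osd pool set cache hit_set_type bloom
ceph osd pool set cache target_max_bytes 100000000000
```

Note that if target_max_bytes (or target_max_objects) is never set, the cache pool can simply fill up, which matches the symptom described above.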