On 05/27/2015 10:30 PM, Gregory Farnum wrote:
> On Wed, May 27, 2015 at 6:49 AM, Kenneth Waegeman
> <kenneth.waege...@ugent.be> wrote:
>> We are also running a full backup sync to cephfs, using multiple
>> distributed rsync streams (with zkrsync), and also ran into this issue
>> today on Hammer 0.94.1.
>> After setting the beacon higher, and eventually clearing the journal,
>> it stabilized again.
>> We were using ceph-fuse to mount the cephfs, not the ceph kernel client.
> What's your MDS cache size set to?
I did set it to 1000000 before (we have 64G of RAM for the MDS), trying
to get rid of the 'Client .. failing to respond to cache pressure' messages.
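For reference, a minimal sketch of the settings mentioned above; these are the Hammer-era option names, and the values here are only illustrative, not recommendations:

```shell
# Sketch only -- option names as in Hammer; values are illustrative.
# In ceph.conf, under [mds]:
#   mds cache size = 1000000     # number of inodes the MDS will cache
# The beacon grace (how long the mons wait for an MDS beacon before
# marking it laggy) is a mon-side setting, e.g. under [mon]:
#   mds beacon grace = 60
# Both can also be changed at runtime without a restart:
ceph tell mds.0 injectargs '--mds_cache_size 1000000'
ceph tell mon.\* injectargs '--mds_beacon_grace 60'
```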
> Did you have any warnings in the ceph log about clients not releasing
> caps?
Unfortunately I lost the logs from before it happened. But there is nothing
in the new logs about that; I will follow this up.
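A quick way to check for those warnings, assuming the default cluster log location (the path and the exact message text are version-dependent, so treat this as a sketch):

```shell
# Look for client cap / cache-pressure warnings in the cluster log
# (default path assumed; adjust for your deployment).
grep -i "failing to respond to cache pressure" /var/log/ceph/ceph.log
grep -i "failing to respond to capability release" /var/log/ceph/ceph.log
# Current health warnings can also be checked live:
ceph health detail
```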
> I think you could hit this in ceph-fuse as well on hammer, although we
> just merged in a fix: https://github.com/ceph/ceph/pull/4653
> -Greg
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com