Update: I had to wipe my CephFS. After I increased the beacon grace period on the last attempt, I could no longer get the MDSs to rejoin at all without running out of memory on the machine. I tried wiping all sessions and the journal, but that didn't help either; all I achieved was that the daemons crashed right after starting with an assertion error. So now I have a fresh CephFS and will try to copy the data from scratch.
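
For anyone following along, the steps I'm referring to map to roughly these commands. This is only a sketch: the filesystem name "cephfs" is a placeholder, and the exact invocations may differ between releases (ceph config set needs the centralized config store, i.e. Mimic or later):

  # raise the beacon grace so the mons don't mark the rejoining MDS as laggy
  ceph config set global mds_beacon_grace 600

  # wipe the client sessions and reset the MDS journal (export a backup first)
  cephfs-journal-tool --rank=cephfs:0 journal export /root/journal-backup.bin
  cephfs-journal-tool --rank=cephfs:0 journal reset
  cephfs-table-tool all reset session

  # what I ended up doing instead: tear the filesystem down
  ceph fs fail cephfs
  ceph fs rm cephfs --yes-i-really-mean-it

The fresh filesystem then comes from recreating empty metadata/data pools and running ceph fs new against them.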

On 24.07.19 15:36, Feng Zhang wrote:
Does Ceph-fuse mount also have the same issue?

That's hard to say. I started with the kernel module and saw the same behaviour again: I got to 930k inodes after only two minutes and stopped the test there. Since then, the number has not gone back down, not even after I disconnected all clients. I then retried the same thing with ceph-fuse, and the number did not increase any further (although it did not decrease either). When I unmounted the share and remounted it with the kernel module again, the number rose to 948k almost immediately.
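
(For anyone who wants to watch the same counters: the per-MDS dentry/inode counts are visible in the DNS/INOS columns of ceph fs status, or in the mds_mem section of the daemon's perf counters. The MDS name below is just a placeholder.)

  # cached dentries / inodes per MDS rank, refreshed every 2 seconds
  watch -n 2 ceph fs status

  # or ask the daemon directly on its host; mds_mem.ino is the cached inode count
  ceph daemon mds.ceph01 perf dump | jq .mds_mem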

So it looks like the problem only occurs with the kernel module, but maybe ceph-fuse is just too slow to tell. It is an order of magnitude slower: I only get 1.3k req/s compared to 20k req/s with the kernel module, which is not practical at all.
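
(For reference, the two mounts can be compared with something like the following; the monitor address and secret file are placeholders, not my actual setup. The per-MDS request rate is the "Reqs: ... /s" figure in the ACTIVITY column of ceph fs status.)

  # kernel client
  mount -t ceph mon1:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret

  # FUSE client
  ceph-fuse -m mon1:6789 /mnt/cephfs

  # per-MDS request rate shows up in the ACTIVITY column
  ceph fs status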

