On 11/23/23 11:25, zxcs wrote:
Thanks a ton, Xiubo!

It does not disappear,

even after we unmount the ceph directory on these two old-OS nodes.

After dumping the ops in flight, we can see some requests, and the earliest complains "failed 
to authpin, subtree is being exported".
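For reference, the in-flight ops can be dumped through the MDS admin socket; the daemon name and rank below are placeholders, adjust them for your cluster:

```shell
# Dump the requests currently in flight on a specific MDS daemon
# (replace <name> with the actual daemon name, e.g. mds.a)
ceph daemon mds.<name> dump_ops_in_flight

# Alternatively, address an MDS by rank via "ceph tell"
ceph tell mds.0 dump_ops_in_flight
```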

How can we avoid this? Would you please help shed some light here?

Okay, as Frank mentioned, you can try to disable the balancer by pinning the directories. As I recall, the balancer is buggy.
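As a sketch, pinning a directory tree to a single MDS rank is done with the `ceph.dir.pin` extended attribute on a mounted client; the path and rank below are only examples:

```shell
# Pin a directory tree to MDS rank 0 so the balancer stops migrating it
# (path and rank are examples -- adjust for your layout)
setfattr -n ceph.dir.pin -v 0 /mnt/cephfs/some/dir

# Verify the pin
getfattr -n ceph.dir.pin /mnt/cephfs/some/dir

# Remove the pin again (-1 means "no pin")
setfattr -n ceph.dir.pin -v -1 /mnt/cephfs/some/dir
```

The pin is inherited by subdirectories unless they set their own, so pinning the busy top-level directories is usually enough.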

You can also open a Ceph tracker issue and provide the debug logs if you have them.

Thanks

- Xiubo


Thanks,
xz


On 2023-11-22 19:44, Xiubo Li <xiu...@redhat.com> wrote:


On 11/22/23 16:02, zxcs wrote:
Hi, Experts,

We are using CephFS 16.2.* with multiple active MDS daemons, and recently we have 
two nodes mounted with ceph-fuse due to their old OS.

One node runs a Python script with `glob.glob(path)`, while another client 
performs a `cp` operation on the same path.

Then we see some logs about `mds slow request`, and the logs complain "failed to authpin, 
subtree is being exported".

Then we need to restart the MDS.


Our question is: is there a deadlock? How can we avoid this, and how can we 
fix it without restarting the MDS (which would affect other users)?
BTW, won't the slow requests disappear by themselves later?

It looks like the export is slow, or there are too many exports going on.

Thanks

- Xiubo

Thanks a ton!


xz
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io