[ceph-users] Re: Clients failing to respond to capability release

2023-10-12 Thread Tim Bishop
Hi Ivan, I don't think we're necessarily seeing the same issue. Mine didn't seem to be related to OSDs, and in fact I could unblock it by killing the backup job on our backup server and unmounting the filesystem. This would then release all other stuck ops on the MDS. I've been waiting to follow-
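
For anyone else comparing notes: a rough sketch of how to confirm which client the stuck ops are pinned on before resorting to unmounting. The MDS name below is a placeholder for your own; the commands themselves are standard ceph CLI.

  # health detail names the session(s) failing to release caps
  ceph health detail

  # dump requests currently stuck on the MDS; ops waiting on caps
  # point at the session that won't release them
  ceph tell mds.mycephfs-a dump_ops_in_flight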

[ceph-users] Re: Clients failing to respond to capability release

2023-10-12 Thread Ivan Clayson
Hi Tim, We've been seeing something that may be similar to what you're seeing, with concurrent MDS_CLIENT_LATE_RELEASE and MDS_SLOW_REQUEST warning messages, as well as frequent MDS_CLIENT_RECALL and MDS_SLOW_METADATA_IO warnings from the same MDS referring to the same client. We are using 1 MDS for our non
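
In case it's useful for comparing against other reports, the per-client cap counts behind these warnings can be inspected with something like the following (the MDS name is an example, and the jq filter assumes jq is available):

  # caps held by each client session on this MDS
  ceph tell mds.mycephfs-a session ls | \
    jq '.[] | {id, num_caps, host: .client_metadata.hostname}'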

[ceph-users] Re: Clients failing to respond to capability release

2023-10-02 Thread E Taka
Same problem here with Ceph 17.2.6 on Ubuntu 22.04 and clients on Debian 11, kernel 6.0.12-1~bpo11+1. We are still looking for a solution. For the time being we restart the orchestrator-managed MDS daemons by removing/adding labels on the servers. We use multiple MDSs and have plenty of CPU cores and memory. The
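
For reference, the label dance looks roughly like this with cephadm (the host, filesystem, and daemon names are examples; adjust to your placement spec):

  # removing the mds label makes the orchestrator drain the MDS daemon
  ceph orch host label rm ceph-node1 mds
  # re-adding it schedules a fresh daemon on the host
  ceph orch host label add ceph-node1 mds

  # a plain restart of the affected daemon may be enough and is less
  # disruptive than redeploying it
  ceph orch daemon restart mds.myfs.ceph-node1.abcdef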

[ceph-users] Re: Clients failing to respond to capability release

2023-09-20 Thread Tim Bishop
Hi Stefan,

On Wed, Sep 20, 2023 at 11:00:12AM +0200, Stefan Kooman wrote:
> On 19-09-2023 13:35, Tim Bishop wrote:
> > The Ceph cluster is running Pacific 16.2.13 on Ubuntu 20.04. Almost all
> > clients are working fine, with the exception of our backup server. This
> > is using the kernel CephFS
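
When it's a single kernel client misbehaving, a heavier-handed option than rebooting the box is to evict just that session. A sketch, with the client id taken from the health output (the id and MDS name are illustrative):

  # identify the session id of the client failing to release caps
  ceph health detail

  # evict that session from the MDS; note the client is blocklisted by
  # default and will need to remount afterwards
  ceph tell mds.mycephfs-a client evict id=1234567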