Hi all,

I have an annoying problem with a specific ceph fs client. We have a file 
server on which we re-export kernel mounts via Samba (all mounts with the 
noshare option). On one of these re-exports we have recurring problems. Today 
I caught it with

2023-05-10T13:39:50.963685+0200 mds.ceph-23 (mds.1) 1761 : cluster [WRN] 
client.205899841 isn't responding to mclientcaps(revoke), ino 0x20011d3e5cb 
pending pAsLsXsFscr issued pAsLsXsFscr, sent 61.705410 seconds ago

and I wanted to look up what path the inode 0x20011d3e5cb points to. 
Unfortunately, the command

ceph tell "mds.*" dump inode 0x20011d3e5cb

crashes an MDS in such a way that it restarts itself but doesn't seem to come 
back cleanly (it does not fail over to a standby). If I repeat the command 
above, it crashes the MDS again. Execution on other MDS daemons succeeds, for 
example:

# ceph tell "mds.ceph-24" dump inode 0x20011d3e5cb
2023-05-10T14:14:37.091+0200 7fa47ffff700  0 client.210149523 ms_handle_reset 
on v2:192.168.32.88:6800/3216233914
2023-05-10T14:14:37.124+0200 7fa4857fa700  0 client.210374440 ms_handle_reset 
on v2:192.168.32.88:6800/3216233914
dump inode failed, wrong inode number or the inode is not cached
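
If there is a safer way to map the inode to a path without asking the broken 
MDS, I would like to try it. One idea was to read the backtrace xattr from the 
inode's first data object instead, roughly like this (the pool name is just a 
placeholder for our data pool, and this assumes the inode is a regular file 
whose first object lives there):

# rados -p <data pool> getxattr 20011d3e5cb.00000000 parent > parent.bin
# ceph-dencoder type inode_backtrace_t import parent.bin decode dump_json

or, much slower, searching a healthy client mount by inode number:

# find /mnt/cephfs -inum $(printf '%d' 0x20011d3e5cb)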

The caps recall eventually gets the client evicted, but the client doesn't 
manage to come back cleanly. On a single ceph fs mount point I see this:

# ls /shares/samba/rit-oil
ls: cannot access '/shares/samba/rit-oil': Stale file handle
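
On the affected file server itself I can still look at what the kernel client 
thinks is stuck, assuming debugfs is mounted at /sys/kernel/debug (the 
directory under it is named <fsid>.<client id>):

# cat /sys/kernel/debug/ceph/*/mdsc   <- in-flight MDS requests
# cat /sys/kernel/debug/ceph/*/caps   <- caps held by this client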

All other mount points are fine, just this one acts up. A "mount -o remount 
/shares/samba/rit-oil" crashed the entire server and I had to do a cold reboot. 
On reboot I see this message: https://imgur.com/a/bOSLxBb , which only occurs 
on this one file server (we are running a few of those). Does this point to a 
more serious problem, like file system corruption? Should I try an fs scrub 
on the corresponding path?
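
If a scrub is indeed the right next step, I would try to start it on rank 0 
with something along these lines (the path is meant to be the CephFS-internal 
path of that share, not the local mount point, and I'm not sure how well path 
scrubs behave with multiple active ranks on octopus):

# ceph tell mds.<rank-0 daemon> scrub start /<path inside the fs> recursive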

Some info about the system:

The file server's kernel version is quite recent, updated two weeks ago:

$ uname -r
4.18.0-486.el8.x86_64
# cat /etc/redhat-release 
CentOS Stream release 8

Our ceph cluster is on the latest octopus release and we use the packages from 
the octopus el8 repo on this server.

We have several such shares and they all work fine. It is only on this one 
share that we have persistent problems, with the mount point hanging or the 
server freezing and crashing.

After working hours I will try a proper fail of the "broken" MDS to see if I 
can execute the dump inode command without it crashing everything.
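
For reference, the plan is simply to check that a standby is available and 
then fail the daemon so a fresh instance takes over the rank:

# ceph fs status
# ceph mds fail ceph-23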

In the meantime, any hints would be appreciated. I see that we have an 
exceptionally large MDS log for the problematic daemon; it contains a lot of 
output from the recovery operations, and any hint what to look for in it would 
be welcome (for now I'm grepping through it as sketched below the listing):

# pdsh -w ceph-[08-17,23-24] ls -lh "/var/log/ceph/ceph-mds.ceph-??.log"

ceph-23: -rw-r--r--. 1 ceph ceph 15M May 10 14:28 
/var/log/ceph/ceph-mds.ceph-23.log *** huge ***

ceph-24: -rw-r--r--. 1 ceph ceph 14K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-24.log
ceph-10: -rw-r--r--. 1 ceph ceph 394 May 10 14:02 
/var/log/ceph/ceph-mds.ceph-10.log
ceph-13: -rw-r--r--. 1 ceph ceph 394 May 10 14:02 
/var/log/ceph/ceph-mds.ceph-13.log
ceph-08: -rw-r--r--. 1 ceph ceph 394 May 10 14:02 
/var/log/ceph/ceph-mds.ceph-08.log
ceph-15: -rw-r--r--. 1 ceph ceph 14K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-15.log
ceph-17: -rw-r--r--. 1 ceph ceph 14K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-17.log
ceph-14: -rw-r--r--. 1 ceph ceph 16K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-14.log
ceph-09: -rw-r--r--. 1 ceph ceph 16K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-09.log
ceph-16: -rw-r--r--. 1 ceph ceph 15K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-16.log
ceph-11: -rw-r--r--. 1 ceph ceph 14K May 10 14:28 
/var/log/ceph/ceph-mds.ceph-11.log
ceph-12: -rw-r--r--. 1 ceph ceph 394 May 10 14:02 
/var/log/ceph/ceph-mds.ceph-12.log
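
For now I'm grepping the big log for the obvious suspects, along the lines of:

# grep -n -E 'ERR|WRN|assert|Segmentation fault|respawn|heartbeat_map' \
      /var/log/ceph/ceph-mds.ceph-23.log | tail -n 50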

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
