Dear Xiubo,

I managed to collect logs and uploaded them to:

ceph-post-file: 3d4d1419-a11e-4937-b0b1-bd99234d4e57

By the way, if you run the test with the conda.tgz at the link location, be
careful: it contains a .bashrc file that activates the conda environment. Un-tar
it only in a dedicated location, or it will overwrite the .bashrc in the target
directory. Unfortunately, this is the default with a conda installation. I will
remove this file from the archive tomorrow. Well, I hope the logs contain what
you are looking for.
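Until the file is removed, something along these lines keeps the .bashrc contained (a sketch; "conda.tgz" and the sandbox path are placeholders for the actual archive and whatever directory you choose, and the stand-in archive is only there so the sketch runs end to end):

```shell
# Sketch: extract an archive that may contain dotfiles (like .bashrc)
# into a dedicated directory instead of $HOME.
# "conda.tgz" stands in for the real archive from the link.
archive=conda.tgz
dest="$HOME/conda-sandbox"

# Build a small stand-in archive so the sketch is self-contained.
tmp=$(mktemp -d)
touch "$tmp/.bashrc" "$tmp/environment.yml"
tar -czf "$archive" -C "$tmp" .

tar -tzf "$archive"             # inspect the contents first; note the .bashrc
mkdir -p "$dest"
tar -xzf "$archive" -C "$dest"  # dotfiles land in the sandbox, not in $HOME
```

Listing with `tar -tzf` before extracting is the cheap way to spot dotfiles that would otherwise silently overwrite files in the extraction directory.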

I enabled dmesg debug logs for both the kclient and nfsd. However, nfsd does not
seem to log anything; I see only ceph messages. I interrupted the tar command
after the error had shown up a number of times. There is indeed a change in the
log messages at the end, indicating an issue with ceph client caps under high
load.
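For reference, what I enabled was roughly along these lines (a sketch, assuming root and a mounted debugfs; the exact rpcdebug flags may need adjusting, which could be why nfsd stays silent):

```shell
# Ceph kernel client: turn on dynamic debug for the ceph/libceph modules.
echo "module ceph +p"    > /sys/kernel/debug/dynamic_debug/control
echo "module libceph +p" > /sys/kernel/debug/dynamic_debug/control

# NFS server side: enable nfsd and sunrpc debugging via rpcdebug
# (from nfs-utils).
rpcdebug -m nfsd -s all
rpcdebug -m rpc  -s all

dmesg -w   # follow the kernel messages as the test runs
```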

It looks as if the kclient, instead of waiting for an MDS response, completes a
request prematurely with insufficient caps. I really hope it is possible to fix
that.

I will keep the files on the system in case you need FS info for specific 
inodes.

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14

________________________________________
From: Frank Schilder <fr...@dtu.dk>
Sent: Monday, March 27, 2023 5:22 PM
To: Xiubo Li; Gregory Farnum
Cc: ceph-users@ceph.io
Subject: Re: [ceph-users] Re: ln: failed to create hard link 'file name': 
Read-only file system

> Sorry for late.
No worries.

> The ceph qa teuthology test cases have already one similar test, which
> will untar a kernel tarball, but never seen this yet.
>
> I will try this again tomorrow without the NFS client.

Great. In case you would like to use the archive I sent you a link for, please 
keep it confidential. It contains files not for publication.

I will collect the log information you asked for.

Best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io