Hi all,

on an NFS re-export of a ceph-fs (kernel client) I observe a very strange
error. I'm un-taring a large package (1.2 GB) and after some time I get these
errors:

ln: failed to create hard link 'file name': Read-only file system

The strange thing is that this seems to be only temporary. When I used "ln src dst"
for manual testing, the command failed as above. However, right after that I tried
"ln -v src dst" and this command created the hard link with exactly the same
path arguments. During the period when the error occurs, I can't see any file
system in read-only mode, neither on the NFS client nor on the NFS server. The
funny thing is that file creation and writing still work; it's only the
hard-link creation that fails.
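For reference, my manual test amounts to roughly the following sketch (retry_link is a hypothetical helper I wrote up for this mail, and src/dst are placeholder paths, not the actual files from the tar archive):

```shell
# Hypothetical retry helper illustrating the transient failure:
# attempt the hard link, and on failure wait briefly and try again,
# since a second attempt with identical arguments succeeded for me.
retry_link() {
    src=$1; dst=$2
    for i in 1 2 3; do
        if ln "$src" "$dst" 2>&1; then
            echo "hard link created on attempt $i"
            return 0
        fi
        sleep 1   # error appears to be transient, so wait and retry
    done
    echo "hard link still failing after $i attempts"
    return 1
}
```

On the NFS re-export, the first attempt fails with EROFS while a later attempt succeeds; on a healthy local file system every attempt succeeds immediately.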

For details, the set-up is:

file-server: mount ceph-fs at /shares/path, export /shares/path as nfs4 to 
other server
other server: mount /shares/path as NFS

More precisely, on the file-server:

fstab:
MON-IPs:/shares/folder /shares/nfs/folder ceph defaults,noshare,name=NAME,secretfile=sec.file,mds_namespace=FS-NAME,_netdev 0 0

exports:
/shares/nfs/folder -no_root_squash,rw,async,mountpoint,no_subtree_check DEST-IP

On the host at DEST-IP:

fstab: FILE-SERVER-IP:/shares/nfs/folder /mnt/folder nfs defaults,_netdev 0 0

Both the file server and the client are virtual machines. The file server
runs CentOS 8 Stream (kernel 4.18.0-338.el8.x86_64) and the client machine
runs AlmaLinux 8 (kernel 4.18.0-425.13.1.el8_7.x86_64).

When I change the NFS export from "async" to "sync", everything works. However,
that's a rather bad workaround and not a solution. Although this looks like an
NFS issue, I'm afraid it is a problem with hard links and ceph-fs. It looks
like a race between scheduling and executing operations on the ceph-fs kernel
mount.
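For completeness, the workaround is just the standard export change (shown here as a sketch with my placeholder path and DEST-IP):

```shell
# On the file-server: replace "async" with "sync" in the /etc/exports entry, e.g.
#   /shares/nfs/folder -no_root_squash,rw,sync,mountpoint,no_subtree_check DEST-IP
# then re-read the exports table without restarting the NFS server:
exportfs -ra
```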

Has anyone seen something like that?

Thanks and best regards,
=================
Frank Schilder
AIT Risø Campus
Bygning 109, rum S14
_______________________________________________
ceph-users mailing list -- ceph-users@ceph.io
To unsubscribe send an email to ceph-users-le...@ceph.io
