CephFS does have repair tools, but I wouldn't jump the gun; your metadata
pool is probably fine. Are you seeing any health errors, or errors in
your MDS log?
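
If you want to rule the metadata pool out quickly, the checks below are
all read-only and safe on a live cluster. This is a minimal sketch; the
log path and grep pattern are assumptions, so adjust for your setup:

    # overall cluster and MDS health
    ceph status
    ceph health detail

    # look for errors in the MDS log (default path; daemon name will differ)
    grep -i error /var/log/ceph/ceph-mds.*.log

    # read-only integrity check of the MDS journal; cephfs-journal-tool
    # also has recovery modes, but don't reach for those unless you
    # actually see damage
    cephfs-journal-tool journal inspect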

Are you exporting a FUSE or kernel mount with Ganesha (i.e., using the
VFS FSAL), or using the Ceph FSAL? Have you tried any tests directly on a
CephFS mount, taking Ganesha out of the equation?
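
(You can tell which FSAL an export uses from the EXPORT block in
ganesha.conf: the Ceph FSAL shows up as FSAL { Name = CEPH; }, while a
re-exported kernel or FUSE mount uses FSAL { Name = VFS; }.)

For the direct test, mount CephFS and repeat the rsync there. A rough
sketch, assuming a monitor reachable at mon1 and the usual admin keyring
(hostnames and paths are placeholders):

    # kernel client
    mount -t ceph mon1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

    # or the FUSE client
    ceph-fuse -m mon1:6789 /mnt/cephfs

    # then retry the rsync that stalled, this time against the direct mount
    rsync -av CentOS_BuildTag /mnt/cephfs/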


On Sat, Sep 30, 2017 at 11:09 PM, Marc Roos <m.r...@f1-outsourcing.eu>
wrote:

>
>
> I am running nfs-ganesha 2.5.2 (from the Ceph download site) on an OSD
> node with Luminous 12.2.1, and when I rsync on a VM that has the NFS
> export mounted, I get stalls.
>
> I thought it was related to the number of files involved in rsyncing
> the CentOS 7 distro, but when I tried to rsync just one file it also
> stalled. It looks like it could not create the updated
> 'CentOS_BuildTag' file.
>
> Could this be a problem in the metadata pool of CephFS? Does this
> sound familiar? Is there something like an fsck for CephFS?
>
> drwxr-xr-x 1 500 500     7 Jan 24  2016 ..
> -rw-r--r-- 1 500 500    14 Dec  5  2016 CentOS_BuildTag
> -rw-r--r-- 1 500 500    29 Dec  5  2016 .discinfo
> -rw-r--r-- 1 500 500   946 Jan 12  2017 .treeinfo
> drwxr-xr-x 1 500 500     1 Sep  5 15:36 LiveOS
> drwxr-xr-x 1 500 500     1 Sep  5 15:36 EFI
> drwxr-xr-x 1 500 500     3 Sep  5 15:36 images
> drwxrwxr-x 1 500 500    10 Sep  5 23:57 repodata
> drwxrwxr-x 1 500 500  9591 Sep 19 20:33 Packages
> drwxr-xr-x 1 500 500     9 Sep 19 20:33 isolinux
> -rw------- 1 500 500     0 Sep 30 23:49 .CentOS_BuildTag.PKZC1W
> -rw------- 1 500 500     0 Sep 30 23:52 .CentOS_BuildTag.gM1C1W
> drwxr-xr-x 1 500 500    15 Sep 30 23:52 .