Hi Jens

We also see these messages quite frequently, mainly the "replicating
dir...". We've only seen "failed to open ino" a few times, so we didn't do
any real investigation. Our setup is very similar to yours: 12.2.1,
active/standby MDS, and exporting CephFS through KNFS (which we hope to
replace with Ganesha soon). Interestingly, the paths reported in
"replicating dir" are usually dirs exported through Samba (generally
Windows profile dirs). Samba runs really well for us and there doesn't seem
to be any impact on users. I expect we wouldn't see these messages if
running active/active MDS, but I'm still a bit cautious about implementing
that (am I being overly cautious, I wonder?).
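
Regarding the -ESTALE behaviour Zheng describes below: here's a minimal
local sketch (plain Python, no NFS involved, purely illustrative) of why
deleted-but-still-open files are handled differently over NFS. On a local
POSIX filesystem the inode survives until the last open descriptor is
closed, so reads keep working after unlink; the NFS server has no such
per-client open state, so a client holding a handle to a deleted file can
get -ESTALE back instead.

```python
import os
import tempfile

# On a local POSIX filesystem, an unlinked file stays readable through an
# already-open descriptor: the inode lives until the last reference drops.
# Over NFS, the server may discard the inode once the file is deleted, and
# a client still holding a file handle then sees -ESTALE on access.
fd, path = tempfile.mkstemp()
os.write(fd, b"still here")
os.unlink(path)                # name is gone from the namespace...
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 32)         # ...but local reads via the fd still succeed
print(data.decode())
os.close(fd)
```

NFS clients partly paper over this with the "silly rename" trick (renaming
to a .nfsXXXX file instead of deleting), but that only helps for deletions
the same client performed.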

Thanks,

On Mon, Nov 27, 2017 at 10:57 AM, Jens-U. Mozdzen <jmozd...@nde.ag> wrote:

> Hi,
>
> Zitat von "Yan, Zheng" <uker...@gmail.com>:
>
>> On Sat, Nov 25, 2017 at 2:27 AM, Jens-U. Mozdzen <jmozd...@nde.ag> wrote:
>>
>>> [...]
>>> In the log of the active MDS, we currently see the following two inodes
>>> reported over and over again, about every 30 seconds:
>>>
>>> --- cut here ---
>>> 2017-11-24 18:24:16.496397 7fa308cf0700  0 mds.0.cache  failed to open
>>> ino
>>> [...]
>>>
>>
>> It's likely caused by the NFS export. The MDS logs this error message
>> when an NFS client tries to access a deleted file; the error causes the
>> NFS client to return -ESTALE.
>>
>
> thank you for pointing me at this potential cause - as we're still using
> NFS access during that job (old clients without native CephFS support), it
> may be that we have some as-yet-unnoticed stale NFS file handles. I'll
> have a closer look, indeed!
>
>
> Regards,
> Jens
>
> _______________________________________________
> ceph-users mailing list
> ceph-users@lists.ceph.com
> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>
