On 27 Nov 2017 1:06 p.m., "Jens-U. Mozdzen" <jmozd...@nde.ag> wrote:

Hi David,

Quoting David C <dcsysengin...@gmail.com>:

> Hi Jens
>
> We also see these messages quite frequently, mainly the "replicating
> dir...". We've only seen "failed to open ino" a few times, so didn't do
> any real investigation. Our setup is very similar to yours: 12.2.1,
> active/standby MDS, and exporting CephFS through knfsd (hoping to replace
> it with Ganesha soon).
>

Been there, done that - using Ganesha more than doubled the run-time of our
jobs, while with knfsd the run-time is about the same for CephFS-based and
"local disk"-based files. But YMMV, so if you see speeds with Ganesha that
are similar to knfsd, please report back with details...


I'd be interested to know whether you tested Ganesha over a CephFS kernel
mount (i.e. using the VFS FSAL) or used the Ceph FSAL, and also which
server and client versions you tested with.
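
For reference, the two setups I mean look roughly like this in ganesha.conf
- a minimal sketch only, with made-up export IDs, paths and pseudo paths:

    # Ceph FSAL: Ganesha talks to the cluster directly via libcephfs
    EXPORT {
        Export_Id = 1;
        Path = "/";
        Pseudo = "/cephfs";
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL { Name = CEPH; }
    }

    # VFS FSAL: Ganesha re-exports a CephFS kernel mount at /mnt/cephfs
    EXPORT {
        Export_Id = 2;
        Path = "/mnt/cephfs";
        Pseudo = "/cephfs-vfs";
        Access_Type = RW;
        Squash = No_Root_Squash;
        FSAL { Name = VFS; }
    }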

Prior to Luminous, Ganesha writes were terrible due to a bug with fsync
calls in the MDS code. The fix went into both the MDS and client code, so
if you're running Ganesha over the top of a kernel mount you'll need a
pretty recent kernel to see the write improvements.
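
By "over the top of the kernel mount" I mean re-exporting something like
the below - mon address, credentials and mount point are placeholders:

    # CephFS kernel client mount that knfsd or Ganesha's VFS FSAL re-exports
    mount -t ceph 192.168.0.1:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret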

From my limited Ganesha testing so far, reads are better when exporting the
kernel mount, while writes are much better with the Ceph FSAL. That's
expected in my case as I'm using the CentOS kernel. I was hoping the
aforementioned fix would make it into the RHEL 7.4 kernel, but it doesn't
look like it has.

I currently use async on my NFS exports, as writes are really poor
otherwise. I'm comfortable with the risks that entails.
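
For reference, the export looks something like this (path and client
network are placeholders); async means acknowledged writes can be lost if
the NFS server dies before they reach the cluster:

    # /etc/exports - async trades safety for write performance
    /mnt/cephfs  192.168.0.0/24(rw,async,no_root_squash,no_subtree_check)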

> Interestingly, the paths reported in "replicating dir" are usually
> dirs exported through Samba (generally Windows profile dirs). Samba runs
> really well for us and there doesn't seem to be any impact on users. I
> expect we wouldn't see these messages if running active/active MDS, but
> I'm still a bit cautious about implementing that (am I being overly
> cautious, I wonder?).
>

From what I can see, it would have to be A/A/P, since the MDS demands at
least one standby.


That's news to me. Is it possible you still had standby config in your
ceph.conf?
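
As far as I'm aware, multiple active MDS daemons will run without a standby
- the cluster just raises a health warning about insufficient standbys
rather than refusing. Roughly the following on 12.2.x (filesystem name is a
placeholder, and I'm going from memory on the allow_multimds step):

    # Allow and enable a second active MDS (Luminous)
    ceph fs set cephfs allow_multimds true
    ceph fs set cephfs max_mds 2
    # Optionally quiet the "insufficient standby daemons" warning
    ceph fs set cephfs standby_count_wanted 0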


Regards,
Jens