xiaoxi chen writes:
> Hmm, I asked on the ML some days ago. :) Likely you hit the kernel bug
> fixed by commit 5e804ac482 ("ceph: don't invalidate page cache when
> inode is no longer used"). That fix is in 4.4 but not in 4.2. I haven't
> had a chance to play with 4.4 yet; it would be great if you could try it.
Sorry, forgot. Kernel client!
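
In case it is useful, here is a rough way to check whether a given kernel
contains that fix. This assumes you have a Linux kernel git checkout handy;
it is only a sketch, not an authoritative procedure:

    # What kernel is the client actually running?
    uname -r

    # Inside a kernel source checkout: which release tags contain the fix?
    git tag --contains 5e804ac482

If the tag listing starts at v4.4 and the client runs 4.2.x, the fix is
missing from your kernel.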
Hey John,
ceph version 10.2.2 (45107e21c568dd033c2f0a3107dec8f0b0e58374)
kernel 4.2.0-36-generic
Thanks!
Hello guys,

From time to time I get the MDS cache pressure error ("Client failing to
respond to cache pressure"). If I try to raise mds_cache_size to allow more
inodes, two things happen (see the commands sketched below):
1) the inode count keeps growing until I hit the new limit again;
2) the more inodes the MDS caches, the more memory it needs, and eventually
it runs out of memory.
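
For reference, this is roughly how I am poking at it. The MDS name derived
from the hostname and the cache size value are just examples from my setup,
not recommendations:

    # Inspect inode counters in the MDS cache (run on the active MDS host).
    ceph daemon mds.$(hostname -s) perf dump | grep -i inode

    # List client sessions; num_caps hints at which client is holding the
    # most capabilities and ignoring cache pressure.
    ceph daemon mds.$(hostname -s) session ls

    # Raise the inode limit at runtime (example value only).
    ceph tell mds.0 injectargs '--mds_cache_size 500000'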
Thank you, Wido!

Anyway, it is the same story: if the client cannot see the OSDs, I cannot
mount it. :( argh.
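
For anyone following along, a quick sketch of how to check which addresses
the cluster advertises to clients and whether the outside server can reach
them. The IP and port below are placeholders:

    # Which addresses do the monitors and OSDs advertise?
    ceph mon dump
    ceph osd dump | grep '^osd\.'

    # From the outside server: can it reach an OSD on its advertised
    # address? (OSDs typically listen in the 6800-7300 range.)
    nc -zv 192.0.2.20 6800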
Hello guys,
Some information:
ceph version 10.2.1
72 OSDs (24 per machine)
3 monitors
2 MDSs

I have a few outside servers that need to mount CephFS. My monitors have two
interfaces, one private and one public (eth0 and eth1). I am trying to mount
CephFS via eth1 on monitor01 from an outside server.
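
For completeness, this is roughly the kind of mount I am attempting. The
eth1 address and the secret file path are placeholders, not my real values:

    # Kernel-client mount against monitor01's public (eth1) address.
    mount -t ceph 192.0.2.10:6789:/ /mnt/cephfs \
        -o name=admin,secretfile=/etc/ceph/admin.secret

Note that the kernel client also needs to reach the OSDs on their advertised
addresses; being able to reach the monitor alone is not enough.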