Memory usage is all that springs to mind. Our MDS server, with the 2
million inode cache, is currently using 2 GB of RAM.
We haven't seen any problems regarding failover (we have one active and one
standby MDS).
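If it helps, the cache counters that the warning is based on can be read from
the MDS admin socket on the MDS host. A rough sketch (the daemon name below is
just an example, not necessarily what yours is called):

    # show the current inode counters for the active MDS
    ceph daemon mds.mds01 perf dump | grep -i inode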
Sean
On 9 June 2016 at 21:18, Elias Abacioglu wrote:
Hi Sean,
Aren't there any downsides to increasing the mds cache size?
My colleague mentioned that he tested it previously, and the cluster then
didn't recover during a failover.
On Thu, Jun 9, 2016 at 12:41 PM, Sean Crosby wrote:
Hi Elias,
When we have received the same warning, our solution has been to increase
the inode cache on the MDS.
We have added

    mds cache size = 2000000

to the [global] section of ceph.conf on the MDS server. We have to restart
the MDS for the change to take effect.
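A minimal sketch of what that looks like end to end (the MDS id and systemd
unit name here are just examples; adjust them to match your deployment):

    # /etc/ceph/ceph.conf on the MDS host
    [global]
        mds cache size = 2000000

    # then restart the MDS so the new cache size is picked up, e.g. with systemd:
    systemctl restart ceph-mds@mds01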
Sean
On 9 June 2016 at 19:55, Elias Abacioglu wrote:
Hi,
I know this has been asked here a couple of times, but I couldn't find
anything concrete.
I have the following warning in our ceph cluster.
mds0: Client web01:cephfs.web01 failing to respond to cache pressure
In previous Ceph versions this might have been a bug. But now we are
running Jewel.
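If it matters, I assume the per-client sessions and their cap counts can be
listed on the MDS with something like this (the daemon id is a guess):

    ceph daemon mds.0 session ls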