Memory usage is all that springs to mind. Our MDS server, with its 2
million inode cache, is currently using 2 GB of RAM.
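
That works out to roughly 1 KB of RAM per cached inode (2 GB / 2,000,000),
so as a rough rule of thumb, budget about 1 GB per million entries of mds
cache size. Your mileage may vary with workload.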

We haven't seen any problems with failover (we have one active and one
failover MDS).
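
If you want to sanity-check the failover pair, the per-daemon states are
visible in the fsmap, e.g.:

# ceph mds stat
# ceph mds dump | grep up:

One daemon should show up:active and the other up:standby-replay, as in
the dump further down this thread.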

Sean

On 9 June 2016 at 21:18, Elias Abacioglu <elias.abacio...@deltaprojects.com>
wrote:

> Hi Sean,
>
> Aren't there any downsides to increasing the mds cache size?
> My colleague mentioned that he tested it previously, and the cluster
> didn't recover during a failover.
>
> On Thu, Jun 9, 2016 at 12:41 PM, Sean Crosby <richardnixonsh...@gmail.com>
> wrote:
>
>> Hi Elias,
>>
>> When we have received the same warning, our solution has been to increase
>> the inode cache on the MDS.
>>
>> We have added
>>
>> mds cache size = 2000000
>>
>>
>> to the [global] section of ceph.conf on the MDS server. The MDS has to
>> be restarted for the change to take effect.
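>>
>> To verify the new value after the restart, you can query the running
>> daemon over its admin socket (run this on the MDS host; the daemon name
>> here is the one from your dump, and this assumes the default admin
>> socket path):
>>
>> # ceph daemon mds.ceph-mds03 config get mds_cache_size
>>
>> You can then watch the cache fill towards the new limit in the perf
>> counters:
>>
>> # ceph daemon mds.ceph-mds03 perf dump | grep -i inode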
>>
>> Sean
>>
>>
>> On 9 June 2016 at 19:55, Elias Abacioglu <
>> elias.abacio...@deltaprojects.com> wrote:
>>
>>> Hi,
>>>
>>> I know this has been asked here a couple of times, but I couldn't find
>>> anything concrete.
>>>
>>> I have the following warning in our ceph cluster:
>>> mds0: Client web01:cephfs.web01 failing to respond to cache pressure
>>>
>>> In previous Ceph versions this might have been a bug, but now we are
>>> running Jewel.
>>> So is there a way to fix this warning?
>>> Do I need to tune some values? Boost the cluster? Boost the client?
>>>
>>> Here are some details:
>>> Client kernel is 4.4.0.
>>> Ceph 10.2.1
>>>
>>> # ceph mds dump
>>> dumped fsmap epoch 5755
>>> fs_name    cephfs
>>> epoch    5755
>>> flags    0
>>> created    2015-12-03 11:21:28.128193
>>> modified    2016-05-16 06:48:47.969430
>>> tableserver    0
>>> root    0
>>> session_timeout    60
>>> session_autoclose    300
>>> max_file_size    1099511627776
>>> last_failure    4900
>>> last_failure_osd_epoch    5884
>>> compat    compat={},rocompat={},incompat={1=base v0.20,2=client
>>> writeable ranges,3=default file layouts on dirs,4=dir inode in separate
>>> object,5=mds uses versioned encoding,6=dirfrag is stored in omap,8=no
>>> anchor table}
>>> max_mds    1
>>> in    0
>>> up    {0=574261}
>>> failed
>>> damaged
>>> stopped
>>> data_pools    2
>>> metadata_pool    3
>>> inline_data    disabled
>>> 574261:    10.3.215.5:6801/62035 'ceph-mds03' mds.0.5609 up:active seq
>>> 515014
>>> 594257:    10.3.215.10:6800/1386 'ceph-mds04' mds.0.0 up:standby-replay
>>> seq 1
>>>
>>> Thanks,
>>> Elias
>>>
>>
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
