Re: [ceph-users] Ceph MDS and hard links

2018-08-07 Thread Benjeman Meekhof
I switched our configs to use ms_type: simple and restarted all of our MDS daemons (there are 3, but only 1 is active). It looks like the memory usage crept back up to the same levels as before. I've included a new mempool dump and heap stats below; if I can provide other debug info, let me know. ceph daemon mds.xxx co…
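For reference, a minimal sketch of the config change and the admin-socket commands being discussed here (mds.xxx is the placeholder daemon name from the message; exact option placement may differ per cluster):

    # ceph.conf -- switch the messenger implementation to the simple messenger
    [global]
        ms_type = simple

    # after restarting the MDS, pull memory accounting and tcmalloc heap stats
    ceph daemon mds.xxx dump_mempools
    ceph daemon mds.xxx heap stats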

Re: [ceph-users] Ceph MDS and hard links

2018-08-03 Thread Yan, Zheng
On Fri, Aug 3, 2018 at 8:53 PM Benjeman Meekhof wrote:
>
> Thanks, that's useful to know. I've pasted the output you asked for
> below, thanks for taking a look.
>
> Here's the output of dump_mempools:
>
> {
>     "mempool": {
>         "by_pool": {
>             "bloom_filter": {
> …

Re: [ceph-users] Ceph MDS and hard links

2018-08-03 Thread Benjeman Meekhof
Thanks, that's useful to know. I've pasted the output you asked for below, thanks for taking a look.

Here's the output of dump_mempools:

{
    "mempool": {
        "by_pool": {
            "bloom_filter": {
                "items": 4806709,
                "bytes": 4806709
            },
            …
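As a quick way to read a dump like that, the per-pool byte counts can be totalled straight from the JSON; a small sketch, assuming jq is available and using the placeholder daemon name from the thread:

    # sum the "bytes" field across all pools reported by dump_mempools
    ceph daemon mds.xxx dump_mempools | jq '[.mempool.by_pool[].bytes] | add'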

Re: [ceph-users] Ceph MDS and hard links

2018-08-01 Thread Yan, Zheng
On Thu, Aug 2, 2018 at 3:36 AM Benjeman Meekhof wrote:
>
> I've lately been encountering much higher than expected memory usage
> on our MDS, which doesn't align with the mds_cache_memory_limit even
> accounting for potential over-runs. Our memory limit is 4GB but the
> MDS process is steadily at ar…

[ceph-users] Ceph MDS and hard links

2018-08-01 Thread Benjeman Meekhof
I've lately been encountering much higher than expected memory usage on our MDS, which doesn't align with the mds_cache_memory_limit even accounting for potential over-runs. Our memory limit is 4GB but the MDS process is steadily at around 11GB used. Coincidentally we also have a new user heavily re…
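For context, a rough sketch of the setting and checks involved in this report, using the 4GB limit mentioned in the message and the placeholder daemon name mds.xxx:

    # ceph.conf -- cap the MDS cache at 4 GB (value is in bytes)
    [mds]
        mds_cache_memory_limit = 4294967296

    # check what the MDS itself believes it is using for cache
    ceph daemon mds.xxx cache status

    # compare against the resident set size of the ceph-mds process
    ps -o rss,cmd -C ceph-mds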