Thanks again,

I tried using the FUSE client instead of the Ubuntu 16.04 kernel module to see
if maybe it's a client-side problem, but CPU usage with the FUSE client is very
high (100% and even more on a two-core machine), so I had to revert to the
kernel client, which uses much less CPU.

It's a web server, so maybe that's the problem: PHP and Nginx open a lot of
files, and maybe that uses a lot of RAM.

For now I've rebooted the machine, because that's the only way to free the
memory, but I cannot restart it every few hours...
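If the reboot is only needed to reclaim the MDS memory, restarting just the MDS daemon (and letting a standby take over) may be enough. A hedged sketch, assuming a standard systemd Ceph install and the daemon name from this thread:

```shell
# Sketch: restart only the MDS daemon instead of the whole machine.
# The unit name is an assumption (standard systemd Ceph deployment); adjust it.
unit="ceph-mds@kavehome-mgto-pro-fs01"
if command -v systemctl >/dev/null 2>&1 && systemctl list-units "$unit" --no-legend 2>/dev/null | grep -q .; then
  # Restarting the daemon frees its heap; a standby MDS should take over the rank.
  systemctl restart "$unit"
else
  echo "no systemd unit $unit on this host"
fi
```

On a host without that unit the script just reports it, so the sketch is safe to paste and adapt.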

Greetings!!

2018-07-19 1:00 GMT+02:00 Gregory Farnum <gfar...@redhat.com>:

> Wow, yep, apparently the MDS has another 9GB of allocated RAM outside of
> the cache! Hopefully one of the current FS users or devs has some idea. All
> I can suggest is looking to see if there are a bunch of stuck requests or
> something that are taking up memory which isn’t properly counted.
>
> On Wed, Jul 18, 2018 at 3:48 PM Daniel Carrasco <d.carra...@i2tic.com>
> wrote:
>
>> Hello, thanks for your response.
>>
>> This is what I get:
>>
>> # ceph tell mds.kavehome-mgto-pro-fs01  heap stats
>> 2018-07-19 00:43:46.142560 7f5a7a7fc700  0 client.1318388 ms_handle_reset
>> on 10.22.0.168:6800/1129848128
>> 2018-07-19 00:43:46.181133 7f5a7b7fe700  0 client.1318391 ms_handle_reset
>> on 10.22.0.168:6800/1129848128
>> mds.kavehome-mgto-pro-fs01 tcmalloc heap stats:------------------------------------------------
>> MALLOC:     9982980144 ( 9520.5 MiB) Bytes in use by application
>> MALLOC: +            0 (    0.0 MiB) Bytes in page heap freelist
>> MALLOC: +    172148208 (  164.2 MiB) Bytes in central cache freelist
>> MALLOC: +     19031168 (   18.1 MiB) Bytes in transfer cache freelist
>> MALLOC: +     23987552 (   22.9 MiB) Bytes in thread cache freelists
>> MALLOC: +     20869280 (   19.9 MiB) Bytes in malloc metadata
>> MALLOC:   ------------
>> MALLOC: =  10219016352 ( 9745.6 MiB) Actual memory used (physical + swap)
>> MALLOC: +   3913687040 ( 3732.4 MiB) Bytes released to OS (aka unmapped)
>> MALLOC:   ------------
>> MALLOC: =  14132703392 (13478.0 MiB) Virtual address space used
>> MALLOC:
>> MALLOC:          63875              Spans in use
>> MALLOC:             16              Thread heaps in use
>> MALLOC:           8192              Tcmalloc page size
>> ------------------------------------------------
>> Call ReleaseFreeMemory() to release freelist memory to the OS (via
>> madvise()).
>> Bytes released to the OS take up virtual address space but no physical
>> memory.
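For tracking this over time, the in-use figure can be pulled out of the stats output. A small sketch using the line format above (the sample line is hard-coded from the output; in practice, pipe `ceph tell mds.<name> heap stats` into it):

```shell
# Extract the "Bytes in use by application" figure from tcmalloc heap stats output.
# The sample line below is copied verbatim from the output above.
stats='MALLOC:     9982980144 ( 9520.5 MiB) Bytes in use by application'
in_use=$(printf '%s\n' "$stats" | awk '/Bytes in use by application/ { print $2 }')
echo "application bytes in use: $in_use"   # 9982980144
```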
>>
>>
>> I've tried the release command, but it keeps using the same amount of memory.
>>
>> Greetings!
>>
>>
>> 2018-07-19 0:25 GMT+02:00 Gregory Farnum <gfar...@redhat.com>:
>>
>>> The MDS thinks it's using 486MB of cache right now, and while that's
>>> not a complete accounting (I believe you should generally multiply the
>>> configured cache limit by 1.5 to get a realistic memory consumption
>>> model) it's obviously a long way from 12.5GB. You might try going in
>>> with the "ceph daemon" command and looking at the heap stats (I forget
>>> the exact command, but it will tell you if you run "help" against it)
>>> and seeing what those say — you may have one of the slightly-broken
>>> base systems and find that running the "heap release" (or similar
>>> wording) command will free up a lot of RAM back to the OS!
>>> -Greg
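For reference, the 1.5x rule of thumb above would put a 512 MiB limit at roughly 768 MiB of real usage, nowhere near what is being observed. A quick check:

```shell
# Expected resident memory under the ~1.5x rule of thumb (an estimate only).
limit=536870912                 # mds_cache_memory_limit from this thread
expected=$(( limit * 3 / 2 ))   # ~1.5x the configured limit
awk -v e="$expected" 'BEGIN { printf "expected resident: ~%.0f MiB\n", e / 1048576 }'
# prints: expected resident: ~768 MiB
```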
>>>
>>> On Wed, Jul 18, 2018 at 1:53 PM, Daniel Carrasco <d.carra...@i2tic.com>
>>> wrote:
>>> > Hello,
>>> >
>>> > I've created a 3-node cluster with MON, MGR, OSD and MDS on all nodes (2
>>> > active MDS), and I've noticed that the MDS is using a lot of memory (right
>>> > now it's using 12.5GB of RAM):
>>> > # ceph daemon mds.kavehome-mgto-pro-fs01 dump_mempools | jq -c '.mds_co';
>>> > ceph daemon mds.kavehome-mgto-pro-fs01 perf dump | jq '.mds_mem.rss'
>>> > {"items":9272259,"bytes":510032260}
>>> > 12466648
>>> >
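Converting the mds_co bytes above to MiB shows the tracked cache itself is only about 486 MiB, close to the configured limit, so the excess memory sits outside the mempool accounting:

```shell
# Convert the mds_co "bytes" figure above into MiB.
bytes=510032260
mib=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / 1048576 }')
echo "mds_co: $mib MiB"   # mds_co: 486.4 MiB
```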
>>> > I've configured the limit:
>>> > mds_cache_memory_limit = 536870912
>>> >
>>> > But it looks like it's ignored, because it's about 512MB and it's using a
>>> > lot more.
>>> >
>>> > Is there any way to limit the memory usage of the MDS? It's causing a lot
>>> > of trouble because the machine starts to swap.
>>> > Maybe I have to limit the cached inodes?
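On the inode question: I believe luminous still honors the older inode-count cap alongside the byte limit, so capping cached inodes is possible. A hedged ceph.conf sketch (the inode value is purely illustrative):

```ini
[mds]
# Byte-based cache limit; actual RSS commonly runs up to ~1.5x this value
mds_cache_memory_limit = 536870912
# Older inode-count cap, believed still honored in luminous (0 = unlimited);
# the value here is illustrative only
mds_cache_size = 100000
```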
>>> >
>>> > The other active MDS is using a lot less memory (2.5GB), but it's also
>>> > using more than 512MB. The standby MDS is not using any memory at all.
>>> >
>>> > I'm using the version:
>>> > ceph version 12.2.7 (3ec878d1e53e1aeb47a9f619c49d9e7c0aa384d5) luminous
>>> > (stable).
>>> >
>>> > Thanks!!
>>> > --
>>> > _________________________________________
>>> >
>>> >       Daniel Carrasco Marín
>>> >       Ingeniería para la Innovación i2TIC, S.L.
>>> >       Tlf:  +34 911 12 32 84 Ext: 223
>>> >       www.i2tic.com
>>> > _________________________________________
>>> >
>>> >
>>> >
>>> > _______________________________________________
>>> > ceph-users mailing list
>>> > ceph-users@lists.ceph.com
>>> > http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
>>> >
>>>
>>
>>
>>
>> --
>> _________________________________________
>>
>>       Daniel Carrasco Marín
>>       Ingeniería para la Innovación i2TIC, S.L.
>>       Tlf:  +34 911 12 32 84 Ext: 223
>>       www.i2tic.com
>> _________________________________________
>>
>


-- 
_________________________________________

      Daniel Carrasco Marín
      Ingeniería para la Innovación i2TIC, S.L.
      Tlf:  +34 911 12 32 84 Ext: 223
      www.i2tic.com
_________________________________________