Hi,
I forgot to say that the Diff may be lower than the real value (8Mb), because the
memory usage was still high and I had already prepared a new configuration with a
lower limit (5Mb). I haven't reloaded the daemons yet, but maybe the
configuration was reloaded today anyway, and that's the reason why it is using
less memory.
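(To double-check that, the running daemon can be asked directly which values it
actually has loaded; for example something like this, filtering for whichever
limit was changed:)
# ceph daemon mds.kavehome-mgto-pro-fs01 config show | grep cache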
Hello again,
It is still too early to say that it is working fine now, but it looks like the MDS
memory stays under 20% of RAM, and most of the time between 6-9%. Maybe it
was a mistake in the configuration.
As a note, I've changed this client config:
[global]
...
bluestore_cache_size_ssd = 805306360
bluesto
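(Just to note, in case it matters: bluestore_cache_size_ssd only affects the OSDs'
BlueStore cache. If the goal is also to cap the MDS cache itself, the MDS-side
option would be something like the following in ceph.conf; the value is only an
example:)
[mds]
mds_cache_memory_limit = 536870912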
Hello,
Finally I have to remove CephFS and use plain NFS, because the MDS daemon
starts to use a lot of memory and becomes unstable. After rebooting one node
because it started to swap (the cluster should be able to survive without a
node), the whole cluster went down because one of the other MDS daemons started
to use a
Thanks again,
I was trying to use the fuse client instead of the Ubuntu 16.04 kernel module, to
see if maybe it is a client-side problem, but the CPU usage of the fuse client is
very high (100% and even more on a two-core machine), so I had to revert to the
kernel client, which uses much less CPU.
It is a web server, so maybe t
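For reference, the two mount methods being compared are roughly these (assuming a
monitor listening on the usual port on that host; mount point and credentials are
just examples):
# mount -t ceph 10.22.0.168:6789:/ /mnt/cephfs -o name=admin,secretfile=/etc/ceph/admin.secret
# ceph-fuse -m 10.22.0.168:6789 --id admin /mnt/cephfs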
Wow, yep, apparently the MDS has another 9GB of allocated RAM outside of
the cache! Hopefully one of the current FS users or devs has some idea. All
I can suggest is looking to see if there are a bunch of stuck requests or
something that are taking up memory which isn’t properly counted.
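For example, something along these lines might show whether requests are piling up
(daemon name taken from earlier in the thread):
# ceph daemon mds.kavehome-mgto-pro-fs01 dump_ops_in_flight
# ceph daemon mds.kavehome-mgto-pro-fs01 session ls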
Hello, thanks for your response.
This is what I get:
# ceph tell mds.kavehome-mgto-pro-fs01 heap stats
2018-07-19 00:43:46.142560 7f5a7a7fc700 0 client.1318388 ms_handle_reset
on 10.22.0.168:6800/1129848128
2018-07-19 00:43:46.181133 7f5a7b7fe700 0 client.1318391 ms_handle_reset
on 10.22.0.168
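(Side note: when the daemons are built with tcmalloc, the same heap interface also
accepts a release command, which asks tcmalloc to hand freed pages back to the OS;
no guarantee it helps here, but it is cheap to try:)
# ceph tell mds.kavehome-mgto-pro-fs01 heap release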
The MDS thinks it's using 486MB of cache right now, and while that's
not a complete accounting (I believe you should generally multiply by
1.5 the configured cache limit to get a realistic memory consumption
model) it's obviously a long way from 12.5GB. You might try going in
with the "ceph daemon"
Hello,
I've created a 3-node cluster with MON, MGR, OSD and MDS on all of them (2 active
MDS), and I've noticed that the MDS is using a lot of memory (right now it is
using 12.5GB of RAM):
# ceph daemon mds.kavehome-mgto-pro-fs01 dump_mempools | jq -c '.mds_co';
ceph daemon mds.kavehome-mgto-pro-fs01 perf dump
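Just the cache pool can be pulled out of that first command with something like the
following (assuming the same top-level .mds_co key that the jq filter above already
relies on):
# ceph daemon mds.kavehome-mgto-pro-fs01 dump_mempools | jq '.mds_co.bytes'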