Hello, just to report:
Looks like changing the message type to simple helps to avoid the memory leak.
About a day later, the memory is still OK:
1264 ceph 20 0 12,547g 1,247g 16652 S 3,3 8,2 110:16.93 ceph-mds
The memory usage is more than 2x the MDS limit (512Mb), but maybe it's the da…
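A minimal sketch for tracking that growth over time, assuming a single local ceph-mds process (the one-minute interval is arbitrary):

# Log the MDS resident memory (RSS, kB) once a minute to spot slow growth.
while true; do
    ps -C ceph-mds -o pid=,rss=,etimes= |
        awk -v ts="$(date '+%F %T')" '{printf "%s pid=%s rss_kb=%s uptime_s=%s\n", ts, $1, $2, $3}'
    sleep 60
done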
I've changed the configuration adding your line and changing the MDS memory
limit to 512Mb, and for now it looks stable (it's at about 3-6% and sometimes
even below 3%). I got a very high usage on boot:
1264 ceph 20 0 12,543g 6,251g 16184 S 2,0 41,1% 0:19.34 ceph-mds
but now it looks acceptable.
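For reference, a minimal sketch of that limit in ceph.conf (the value is in bytes; 512Mb = 536870912). Note that mds_cache_memory_limit bounds the cache, not the whole process, so the ceph-mds RSS is expected to sit above it:

[mds]
# Cache memory target in bytes (512 MB); the process RSS will run higher.
mds_cache_memory_limit = 536870912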
Hello,
Thanks for all your help.
Is dd an option of some command? Because at least on Debian/Ubuntu it is an
application to copy blocks, so it fails.
For now I cannot change the configuration, but later I'll try.
About the logs, I've seen nothing about "warning", "error", "failed",
"messag…"
On Wed, Jul 25, 2018 at 8:12 PM Yan, Zheng wrote:
>
> On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco wrote:
> > […]
On Wed, Jul 25, 2018 at 5:04 PM Daniel Carrasco wrote:
>
> Hello,
>
> I've attached the PDF.
>
> I don't know if it is important, but I made changes to the configuration and
> I've restarted the servers after dumping that heap file. I've changed the
> memory_limit to 25Mb to test if it's still within acceptable va…
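As a side note, a sketch of applying such a limit at runtime, which avoids the restart (25Mb = 26214400 bytes; the daemon name is the one from this thread):

# Inject the new cache limit into the running MDS; takes effect immediately.
ceph tell mds.kavehome-mgto-pro-fs01 injectargs '--mds_cache_memory_limit 26214400'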
Hello,
I've run the profiler for about 5-6 minutes and this is what I've got:
-
On Tue, Jul 24, 2018 at 4:59 PM Daniel Carrasco wrote:
> […]
Hello,
How much time is necessary? Because it is a production environment, and the
memory profiler plus the low cache size (because of the problem) cause a lot
of CPU usage on the OSD and MDS, which makes it fail while the profiler is
running. Is there any problem if it's done at a low-traffic time? (less usage
and maybe it…
I mean:
ceph tell mds.x heap start_profiler
... wait for some time
ceph tell mds.x heap stop_profiler
pprof --text /usr/bin/ceph-mds /var/log/ceph/ceph-mds.x.profile..heap
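Spelled out a little more as a sketch (the <N> in the profile filename stands for the dump sequence number, and on Debian/Ubuntu pprof may be installed as google-pprof from the google-perftools package, an assumption worth checking):

# Start the tcmalloc heap profiler inside the running MDS.
ceph tell mds.x heap start_profiler
# Let it collect samples while the workload runs.
sleep 300
# Stop profiling and write a final heap dump to /var/log/ceph/.
ceph tell mds.x heap stop_profiler
ceph tell mds.x heap dump
# Resolve symbols against the ceph-mds binary and print a text report.
pprof --text /usr/bin/ceph-mds /var/log/ceph/ceph-mds.x.profile.<N>.heap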
On Tue, Jul 24, 2018 at 3:18 PM Daniel Carrasco wrote:
> […]
This is what I get:
:/# ceph tell mds.kavehome-mgto-pro-fs01 heap dump
2018-07-24 09:05:19.350720 7fc562ffd700 0 client.145254
Could you profile memory allocation of the MDS?
http://docs.ceph.com/docs/mimic/rados/troubleshooting/memory-profiling/
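As a lighter first check than the full profiler, the same heap interface can report allocator statistics (a sketch; mds.x is a placeholder):

# Print tcmalloc heap statistics without starting the profiler.
ceph tell mds.x heap stats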
On Tue, Jul 24, 2018 at 7:54 AM Daniel Carrasco wrote:
> […]
Yeah, it's also my thread. This thread was created before lowering the cache
size from 512Mb to 8Mb. I thought that maybe it was my fault and I had made a
misconfiguration, so I've ignored the problem until now.
Greetings!
On Tue, Jul 24, 2018 at 1:00 AM, Gregory Farnum wrote:
> […]
On Mon, Jul 23, 2018 at 11:08 AM Patrick Donnelly wrote:
> On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco wrote:
> > […]
Hi,
I forgot to say that maybe the Diff is lower than the real one (8Mb), because
the memory usage was still high and I've prepared a new configuration with a
lower limit (5Mb). I've not reloaded the daemons for now, but maybe the
configuration was loaded again today and that's the reason why it is using
less…
Thanks!
It's true that I've seen continuous memory growth, but I hadn't thought of a
memory leak. I don't remember exactly how many hours were necessary to fill
the memory, but I calculate it was about 14h.
With the new configuration it looks like memory grows slowly, and when it
reaches 5-6 GB…
On Mon, Jul 23, 2018 at 5:48 AM, Daniel Carrasco wrote:
> […]
Hi, thanks for your response.
There are about 6 clients, and 4 of them are on standby most of the time.
Only two are active servers that serve the webpage. Also, we've got a Varnish
in front, so they are not getting all the load (below 30% in PHP is not
much). About the MDS cache, now I've set the mds_cache_mem…
Hi,
do you happen to have a relatively large number of clients and a relatively
small cache size on the MDS?
Paul
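A sketch of how to check the client count from the active MDS host (assumes access to the admin socket and that jq is installed):

# Count the client sessions currently held by this MDS.
ceph daemon mds.kavehome-mgto-pro-fs01 session ls | jq length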
On 2018-07-23 13:16 GMT+02:00, Daniel Carrasco wrote:
> […]
Hello,
I've created a Ceph cluster of 3 nodes (3 mons, 3 osds, 3 mgrs and 3 mds
with two active). This cluster is mainly for serving a webpage (small files)
and is configured to keep three copies of the files (a copy on every OSD).
My question is about ceph-fuse clients: I've noticed an insane CPU us…
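As an aside, a sketch for confirming the three-copies setting on the CephFS pools (the pool names are the common defaults, an assumption):

# Each should report size: 3 for a copy on every OSD host.
ceph osd pool get cephfs_data size
ceph osd pool get cephfs_metadata size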