Great.

Do you know valgrind? If so, you could run rsyslog under valgrind control,
best in the foreground. When you terminate rsyslog, valgrind will show leak
stats, if any.
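A minimal sketch of such a run, assuming the usual Debian binary path
(/usr/sbin/rsyslogd; adjust for your system):

```shell
# Run rsyslogd in the foreground (-n) under valgrind; stop it with Ctrl-C
# and valgrind prints a leak summary on exit. Flags and the binary path
# may differ on your distribution.
valgrind --leak-check=full --show-leak-kinds=all /usr/sbin/rsyslogd -n
```

Running in the foreground matters: if rsyslogd forks into the background,
valgrind loses track of the process you care about.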

Rainer

Sent from phone, thus brief.

Adriaan de Waal <[email protected]> schrieb am Mi., 6. Dez. 2023,
19:56:

> Good day
>
> Looking at the main Q statistics, the size remains mostly constant around
> 30. The max queue size currently sits at 400. There is also a queue (linked
> list + disk assisted) configured for the omkafka action, with the size not
> really going above single digits (and the DA queue stats remain at 0). Also
> note I completely disabled the omkafka action's queue previously as a test,
> but that didn't make a difference. There are no other queues.
>
> Kind Regards
> ---
>
> ------------------------------
> *From:* Rainer Gerhards <[email protected]>
> *Sent:* 06 December 2023 17:05
> *To:* rsyslog-users <[email protected]>
> *Cc:* Adriaan de Waal <[email protected]>
> *Subject:* Re: [rsyslog] Memory Leak?
>
> Look at the queue sizes in impstats. Are they ever-increasing?
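For readers following along, one way to eyeball this from collected impstats
output is to extract the size= counter per interval. The sample lines below
are fabricated for illustration, not from the poster's system; real impstats
"main Q" lines carry the same counter:

```shell
# Illustrative only: fabricated impstats-style lines, not real log data.
cat > /tmp/impstats.sample <<'EOF'
main Q: size=30 enqueued=120000 full=0 discarded.full=0 discarded.nf=0 maxqsize=400
main Q: size=29 enqueued=240000 full=0 discarded.full=0 discarded.nf=0 maxqsize=400
EOF

# Print the main queue's size counter from each stats interval. A steadily
# growing series points at queue buildup; a flat one (as here) rules it out.
# Matching on a field that *starts* with "size=" avoids catching maxqsize=.
awk '/main Q:/ { for (i = 1; i <= NF; i++) if ($i ~ /^size=/) print substr($i, 6) }' /tmp/impstats.sample
```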
>
> Rainer
>
> On Wed, Dec 6, 2023 at 14:30, Adriaan de Waal via rsyslog
> (<[email protected]>) wrote:
> >
> > Good day
> >
> > I am trying to diagnose and resolve an issue whereby the memory consumed
> by the rsyslog daemon increases linearly over time. This continues until it
> consumes most of the memory (including swap) of the system and the service
> has to be restarted to free up memory. There are two servers with identical
> configurations. What I noticed is that the server receiving a higher volume
> of messages also consumes memory at a higher rate. In other words, it appears
> as if the message rate, or message volume, is directly proportional to the
> rate at which memory is consumed.
> >
> > Below is the version information for the rsyslogd daemon:
> > rsyslogd  8.2310.0 (aka 2023.10) compiled with:
> >        PLATFORM:                               x86_64-pc-linux-gnu
> >        PLATFORM (lsb_release -d):
> >        FEATURE_REGEXP:                         Yes
> >        GSSAPI Kerberos 5 support:              No
> >        FEATURE_DEBUG (debug build, slow code): No
> >        32bit Atomic operations supported:      Yes
> >        64bit Atomic operations supported:      Yes
> >        memory allocator:                       system default
> >        Runtime Instrumentation (slow code):    No
> >        uuid support:                           Yes
> >        systemd support:                        Yes
> >        Config file:                            /etc/rsyslog.conf
> >        PID file:                               /var/run/rsyslogd.pid
> >        Number of Bits in RainerScript integers: 64
> >
> > It is running on Debian 12 servers.
> >
> > To provide you with more background detail, initially I configured three
> listeners: one UDP (port 514), one TCP (port 514) and one TLS (port 6514).
> A single system was configured to push logs to the TLS port and that worked
> well (no increase in memory usage over time). Recently I added another UDP
> listener (port 10514) and started configuring a number of systems to push
> their logs to this port, but since then I've observed the described gradual
> memory increase.
> >
> > This new listener is configured as follows: A ruleset was created and
> bound to this listener (the ruleset doesn't have its own queue). The
> ruleset first runs the mmutf8fix action then calls a different ruleset
> (named "normalise"), which normalises the data (just sets specific
> variables that are later used in a template to construct a JSON object).
> After the call to the "normalise" ruleset returns, a mmnormalize action is
> performed and some additional variables are set. Lastly, the ruleset (the
> one bound to the listener) calls yet another ruleset (named
> "kafka_output"), which is used to construct a JSON object from the various
> variables and uses the omkafka action to push this to a Kafka cluster.
> >
> > The flow of the above can be visualised as:
> > Source -> Syslog Server [10514/UDP] -> [listener ruleset] -> [normalise
> ruleset] -> [kafka_output ruleset]
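For the archive, the described flow corresponds roughly to a configuration
like the sketch below. The ruleset names come from the mail; the rulebase
path, module list, and exact statements are assumptions, not the poster's
actual configuration:

```
module(load="imudp")
module(load="mmutf8fix")
module(load="mmnormalize")
module(load="omkafka")

input(type="imudp" port="10514" ruleset="udp10514")

# Bound to the listener; deliberately has no queue of its own.
ruleset(name="udp10514") {
    action(type="mmutf8fix")
    call normalise        # sets the variables the JSON template uses later
    action(type="mmnormalize" rulebase="/etc/rsyslog.d/normalise.rb")
    call kafka_output     # builds the JSON object and runs omkafka
}
```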
> >
> > It should also be noted that the original listeners are configured in much
> the same way, apart from having calls to even more rulesets. I haven't
> tested if the UDP listener on port 514 exhibits the same behaviour (it
> isn't currently being used).
> >
> > This rsyslog daemon is also used to capture locally generated logs, and
> the statistics (impstats) module is loaded as well.
> >
> > What can I do to troubleshoot what's causing this "memory leak"?
> >
> > Kind Regards
> > ---
> >
> > _______________________________________________
> > rsyslog mailing list
> > https://lists.adiscon.net/mailman/listinfo/rsyslog
> > http://www.rsyslog.com/professional-services/
> > What's up with rsyslog? Follow https://twitter.com/rgerhards
> > NOTE WELL: This is a PUBLIC mailing list, posts are ARCHIVED by a myriad
> of sites beyond our control. PLEASE UNSUBSCRIBE and DO NOT POST if you
> DON'T LIKE THAT.
>