>> sizes.
>>
>> The main question is where this 0.6GB is coming from if I specifically
>> set the Artemis heap size to 2G? Is this some kind of direct memory usage
>> in the broker? I don't believe that this could be just JVM overhead.
>>
>> --
>> Vilius
>>
OOM Killer happens when Linux determines it doesn't have enough memory to keep
going, so in an act of self-preservation it kills a large, memory-consuming
process rather than crash.
So we can see from the OOM Killer log that in your case you had 2627224kB
resident, which I think is close to the ~2.6GB total you described (the 2G heap
plus the extra 0.6GB).
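As a rough illustration of where memory beyond the -Xmx2G heap can come from,
here is a minimal, generic JVM sketch (not Artemis-specific; the class name is
just a placeholder) that dumps heap, non-heap, and NIO direct-buffer usage.
Inside a running broker you would typically read the same MXBeans over JMX
rather than from a standalone main():

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.util.List;

public class MemorySnapshot {
    public static void main(String[] args) {
        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        // -Xmx only caps the heap; non-heap areas (metaspace, code cache) plus
        // native allocations (thread stacks, direct buffers, etc.) all add to
        // the process RSS that the OOM Killer sees.
        System.out.println("heap used     : " + mem.getHeapMemoryUsage().getUsed());
        System.out.println("non-heap used : " + mem.getNonHeapMemoryUsage().getUsed());

        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            // Usually reports "direct" and "mapped" pools. Some libraries
            // (Netty, for example) can allocate native memory that never
            // shows up here at all.
            System.out.println(pool.getName() + " used: " + pool.getMemoryUsed());
        }
    }
}

If the direct pool accounts for most of the gap, -XX:MaxDirectMemorySize is the
usual knob for capping it, but it's better to measure first.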
(e.g., a call without a timeout to a remote resource like a REST API or something).
> What log messages or logging can help me prove one way or another what is
> happening?
It's impossible to say at this point without more knowledge of what
protocol(s) your clients are using.
Justin
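For what it's worth, the "call without a timeout" scenario above is easy to
guard against on the consumer side. Below is a minimal, hypothetical sketch
(the URL and class name are made up, and depending on your client you may need
jakarta.jms instead of javax.jms) of a listener whose remote call is bounded by
explicit timeouts, so a hung endpoint can't park the delivery thread forever.
A thread dump (jstack) of the JVM would show whether real listeners are stuck
in such a call:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import javax.jms.Message;
import javax.jms.MessageListener;

public class TimeoutBoundListener implements MessageListener {
    // Bounded connect timeout so a dead endpoint cannot hang the connection attempt.
    private final HttpClient http = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    @Override
    public void onMessage(Message message) {
        try {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://example.invalid/hook")) // placeholder URL
                    .timeout(Duration.ofSeconds(10))                 // per-request timeout
                    .build();
            http.send(request, HttpResponse.BodyHandlers.ofString());
            // ... act on the response, then let onMessage return promptly
        } catch (Exception e) {
            // Timed-out or failed calls surface here instead of blocking delivery;
            // decide whether to retry, dead-letter, etc.
            e.printStackTrace();
        }
    }
}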
On Tue, Nov 7, 2023 at 7:52 PM
Hey all,
I could use a push in the right direction to troubleshoot an issue!
TL;DR
After running really well for a seemingly indeterminate period of time
(from hours to days), message delivery stops to connected consumers that
are located within the same JVM as the Artemis server. Producers in
-----Original Message-----
From: David Bennion
Sent: Sunday, March 19, 2023 6:09 PM
To: users@activemq.apache.org
Cc: Lewis Gass
Subject: Re: Please help finding lost packet
Those errors at the end feel awfully similar to an issue that we ran into
recently.
We resolved it by disabling flow control, setting both consumerWindowSize and
producerWindowSize to -1, as I recall. Our end conclusion was that the
flow-control logic in the broker was getting confused.
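In case it helps anyone hitting the same thing, here is roughly what that
setting looks like when applied on the client connection URL. This is just a
sketch assuming the Artemis JMS client (broker host, port, and class name are
placeholders, and newer clients use jakarta.jms instead of javax.jms):

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class NoFlowControlExample {
    public static void main(String[] args) throws Exception {
        // consumerWindowSize=-1 : no bound on the client-side message buffer
        // producerWindowSize=-1 : no producer flow-control window
        ConnectionFactory factory = new ActiveMQConnectionFactory(
                "tcp://localhost:61616?consumerWindowSize=-1&producerWindowSize=-1");

        try (Connection connection = factory.createConnection()) {
            connection.start();
            // ... create sessions, producers, and consumers as usual
        }
    }
}

Worth noting that turning flow control off this way trades the original symptom
for potentially unbounded client-side buffering, so consumer memory use is
worth watching afterwards.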
I would like a Slack invitation, if I may.
Cheers,
David.