Thanks a lot, Mark.
https://ibb.co/LgzFh6t - Memory snapshot after 15 minutes of the test.
It's certainly better than the graph with 9.0.36, but I will wait for this
test to run for another few hours. I will update later.
Cheers,
Chirag
On Fri, Jun 26, 2020 at 6:20 PM Mark Thomas wrote:
> On 26/06/2020 12:48, Mark Thomas wrote:
On 26/06/2020 12:48, Mark Thomas wrote:
> On 26/06/2020 12:45, Chirag Dewan wrote:
>> Absolutely Mark. Shouldn't take long.
>
> Great. I think I have found a potential root cause. If I am right, NIO
> will show the same issues NIO2 did.
>
> I should have a test build for you shortly.
Try this:
h
On 26/06/2020 12:45, Chirag Dewan wrote:
> Absolutely Mark. Shouldn't take long.
Great. I think I have found a potential root cause. If I am right, NIO
will show the same issues NIO2 did.
I should have a test build for you shortly.
Mark
>
> On Fri, 26 Jun, 2020, 4:16 pm Mark Thomas, wrote:
>
Absolutely Mark. Shouldn't take long.
On Fri, 26 Jun, 2020, 4:16 pm Mark Thomas, wrote:
> Aha!
>
> h2c could be the significant factor here. Let me take a look.
>
> Are you in a position to test against a dev build if the need arises?
>
> Mark
>
>
> On 26/06/2020 11:30, Chirag Dewan wrote:
> > Hey Mark,
Aha!
h2c could be the significant factor here. Let me take a look.
Are you in a position to test against a dev build if the need arises?
Mark
On 26/06/2020 11:30, Chirag Dewan wrote:
> Hey Mark,
>
> *Are you using h2c or h2 in your test?*
> We are using h2c
>
>
> *Do you see the same issue if you switch to the NIO connector? Note
> performance differences between NIO and NIO2 are very small.*
Hey Mark,
*Are you using h2c or h2 in your test?*
We are using h2c
*Do you see the same issue if you switch to the NIO connector? Note
performance differences between NIO and NIO2 are very small.*
I haven't tried with NIO, honestly. I can quickly execute and check, and will
update the results.
*How l
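For reference (not from the thread itself), a minimal embedded-Tomcat sketch of the configuration being discussed: an NIO connector with cleartext HTTP/2 (h2c) enabled via the upgrade protocol. The class name, port and base directory are illustrative assumptions, not the poster's actual setup.

import org.apache.catalina.connector.Connector;
import org.apache.catalina.startup.Tomcat;
import org.apache.coyote.http2.Http2Protocol;

public class H2cNioServer {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setBaseDir("tomcat-work"); // illustrative working directory

        // Explicit NIO connector (Http11NioProtocol) rather than NIO2 or APR.
        Connector connector = new Connector("org.apache.coyote.http11.Http11NioProtocol");
        connector.setPort(8080);
        // Enable cleartext HTTP/2 (h2c) via the HTTP/1.1 upgrade mechanism.
        connector.addUpgradeProtocol(new Http2Protocol());
        tomcat.getService().addConnector(connector);

        // No webapp is deployed here; this only illustrates the connector wiring.
        tomcat.start();
        tomcat.getServer().await();
    }
}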
Hi,
Thanks for the additional information. The GC roots were particularly
informative.
Those RequestInfo objects are associated with HTTP/1.1 requests, not
HTTP/2 requests.
Some further questions to try and track down what is going on:
- Are you using h2c or h2 in your test?
- Do you see the same issue if you switch to the NIO connector? Note
performance differences between NIO and NIO2 are very small.
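As a side note (not part of the thread): Tomcat typically exposes its request processors as Catalina:type=RequestProcessor MBeans, so a rough count of the RequestInfo-style entries can be taken over JMX in the running JVM without a full heap dump. A sketch, assuming access to the Tomcat JVM's MBean server:

import java.lang.management.ManagementFactory;
import java.util.Set;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class RequestProcessorCount {
    public static void main(String[] args) throws Exception {
        // Runs inside the Tomcat JVM; adapt to a remote JMX connection otherwise.
        MBeanServer mbs = ManagementFactory.getPlatformMBeanServer();
        ObjectName pattern = new ObjectName("Catalina:type=RequestProcessor,*");
        Set<ObjectName> names = mbs.queryNames(pattern, null);
        System.out.println("RequestProcessor MBeans: " + names.size());
        names.forEach(System.out::println);
    }
}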
Thanks Mark.
*What is the typical response size for one of these requests? *
It's basically a dummy JSON response of ~300 bytes. I expect 2300 bytes of
response in my production use case, but the purpose here was to isolate as
many things as possible. Hence a dummy response.
*How long does a typical
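For context (an illustrative assumption, not the poster's application), a dummy endpoint of the kind described could be as simple as a servlet returning a fixed JSON body of roughly 300 bytes:

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DummyJsonServlet extends HttpServlet {
    // Fixed payload padded to roughly 300 bytes so that response generation
    // stays out of the picture during the load test.
    private static final String BODY =
            "{\"status\":\"ok\",\"padding\":\"" + "x".repeat(270) + "\"}";

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/json");
        resp.getWriter().write(BODY);
    }
}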
Thanks.
I've looked at the code and I have tried various tests but I am unable
to re-create a memory leak.
The code used to (before I made a few changes this afternoon) retain a
lot more memory per Stream and it is possible that what you are seeing
is a system that doesn't have enough memory to a
Hi Mark,
It's the default APR connector with 150 threads.
Chirag
On Thu, 25 Jun, 2020, 7:30 pm Mark Thomas, wrote:
> On 25/06/2020 11:00, Chirag Dewan wrote:
> > Thanks for the quick check Mark.
> >
> > These are the images I tried referring to:
> >
> > https://ibb.co/LzKtRgh
> >
> > https://ib
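Regarding "the default APR connector with 150 threads" above: a hedged embedded-Tomcat sketch of that kind of setup follows. It assumes the Tomcat native library (libtcnative) is installed; the class name, port and base directory are illustrative, not taken from the thread.

import org.apache.catalina.connector.Connector;
import org.apache.catalina.core.AprLifecycleListener;
import org.apache.catalina.startup.Tomcat;

public class AprServer {
    public static void main(String[] args) throws Exception {
        Tomcat tomcat = new Tomcat();
        tomcat.setBaseDir("tomcat-work");
        // Loads the APR/native library when the server starts.
        tomcat.getServer().addLifecycleListener(new AprLifecycleListener());

        Connector connector = new Connector("org.apache.coyote.http11.Http11AprProtocol");
        connector.setPort(8080);
        // 150 worker threads, as stated in the reply above.
        connector.setProperty("maxThreads", "150");
        tomcat.getService().addConnector(connector);

        tomcat.start();
        tomcat.getServer().await();
    }
}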
On 25/06/2020 11:00, Chirag Dewan wrote:
> Thanks for the quick check Mark.
>
> These are the images I tried referring to:
>
> https://ibb.co/LzKtRgh
>
> https://ibb.co/2s7hqRL
>
> https://ibb.co/KmKj590
>
>
> The last one is the MAT screenshot showing many RequestInfo objects.
Thanks. That
Thanks for the quick check Mark.
These are the images I tried referring to:
https://ibb.co/LzKtRgh
https://ibb.co/2s7hqRL
https://ibb.co/KmKj590
The last one is the MAT screenshot showing many RequestInfo objects.
Thanks,
Chirag
On Wed, Jun 24, 2020 at 8:30 PM Mark Thomas wrote:
> On 24/06/2020 12:17, Mark Thomas wrote:
On 24/06/2020 12:17, Mark Thomas wrote:
> On 22/06/2020 11:06, Chirag Dewan wrote:
>> Hi,
>>
>> Update: We found that Tomcat goes OOM when a client closes and opens new
>> connections every second. In the memory dump, we see a lot of
>> RequestInfo objects that are causing the memory spike.
>>
>> A
On 22/06/2020 11:06, Chirag Dewan wrote:
> Hi,
>
> Update: We found that Tomcat goes OOM when a client closes and opens new
> connections every second. In the memory dump, we see a lot of
> RequestInfo objects that are causing the memory spike.
>
> After a while, Tomcat goes OOM and start rejecti
Hi,
Update: We found that Tomcat goes OOM when a client closes and opens new
connections every second. In the memory dump, we see a lot of RequestInfo
objects that are causing the memory spike.
After a while, Tomcat goes OOM and starts rejecting requests (I get a
request timed out on my client). Thi
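A hedged sketch (my assumption, not the poster's actual load tool) of how the "closes and opens new connections every second" pattern might be reproduced from the client side. Java's HttpClient attempts the h2c upgrade for plaintext URLs when HTTP/2 is requested, and creating a fresh client per iteration forces a new connection each time; the URL and timing are illustrative.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ReconnectingClient {
    public static void main(String[] args) throws Exception {
        URI uri = URI.create("http://localhost:8080/dummy"); // assumed test endpoint
        while (true) {
            // A new client means a new connection pool, i.e. a new connection;
            // the previous client's connection is abandoned, approximating the
            // close-and-reopen behaviour described above.
            HttpClient client = HttpClient.newBuilder()
                    .version(HttpClient.Version.HTTP_2)
                    .build();
            HttpRequest request = HttpRequest.newBuilder(uri).GET().build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("status=" + response.statusCode());
            Thread.sleep(1000); // roughly one new connection per second
        }
    }
}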
Hi,
This is without a load balancer, actually. I am sending directly to Tomcat.
Update:
Part of the issue I found turned out to be specific to 9.0.29. I observed that when requests
timed out on the client (2 seconds), the client would send an RST frame, and the
GoAway from Tomcat was perhaps a bug. In 9.0.36, the RST frame is r
On 2020-06-13 at 08:42, Chirag Dewan wrote:
Hi,
We are observing that under high load, our clients start receiving a GoAway
frame with the error:
*Connection[{id}], Stream[{id}] an error occurred during processing that
was fatal to the connection.*
Background: We have implemented our clients to c