Hey Mark,

*Are you using h2c or h2 in your test?*
We are using h2c (cleartext HTTP/2).
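For context, the connector is set up for cleartext HTTP/2 roughly along these
lines (just a sketch of our configuration, not the exact server.xml; the port
shown is an example, the protocol and thread count are the ones mentioned
earlier in this thread):

    <Connector port="8080"
               protocol="org.apache.coyote.http11.Http11AprProtocol"
               maxThreads="150">
        <!-- no SSLHostConfig, so HTTP/2 runs as h2c (upgrade / prior
             knowledge) instead of being negotiated via ALPN -->
        <UpgradeProtocol className="org.apache.coyote.http2.Http2Protocol" />
    </Connector>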


*Do you see the same issue if you switch to the NIO connector? Note
performance differences between NIO and NIO2 are very small.*

I haven't tried with NIO yet, to be honest. I can quickly run the same test
with the NIO connector and check, and will update this thread with the results.
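If it helps, my understanding is that the comparison run only needs the
protocol attribute on the same connector switched to the NIO implementation,
i.e. something like this (illustrative, everything else left unchanged):

    protocol="org.apache.coyote.http11.Http11NioProtocol"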

*How long does a single request take to process?*
In normal scenarios, less than 3 ms.

Thanks,
Chirag

On Fri, Jun 26, 2020 at 3:26 PM Mark Thomas <ma...@apache.org> wrote:

> Hi,
>
> Thanks for the additional information. The GC roots were particularly
> informative.
>
> Those RequestInfo objects are associated with HTTP/1.1 requests, not
> HTTP/2 requests.
>
> Some further questions to try and track down what is going on:
>
> - Are you using h2c or h2 in your test?
>
> - Do you see the same issue if you switch to the NIO connector? Note
>   performance differences between NIO and NIO2 are very small.
>
> - How long does a single request take to process?
>
> Thanks,
>
> Mark
>
> On 26/06/2020 09:24, Chirag Dewan wrote:
> > Thanks Mark.
> >
> > *What is the typical response size for one of these requests? *
> > It's basically a dummy JSON response of ~300 bytes. I expect 2300 bytes
> > of response in my production use case, but the purpose here was to isolate
> > as many things as possible. Hence a dummy response.
> >
> > *How long does a typical test take to process? *
> > I see Tomcat's memory reaching 28 GB in about 2 hours at 19K TPS. The
> > number of streams per connection on my client was 500, which works out to
> > about 40 new connections per second (19,000 requests/s / 500 streams per
> > connection ~= 38).
> >
> > * What are the GC roots for those RequestInfo objects?*
> > https://ibb.co/fMRmCXZ
> >
> > I hope I was able to answer everything as expected. Thanks.
> >
> > On Thu, Jun 25, 2020 at 8:30 PM Mark Thomas <ma...@apache.org> wrote:
> >
> >> Thanks.
> >>
> >> I've looked at the code and I have tried various tests but I am unable
> >> to re-create a memory leak.
> >>
> >> The code used to (before I made a few changes this afternoon) retain a
> >> lot more memory per Stream and it is possible that what you are seeing
> >> is a system that doesn't have enough memory to achieve steady state.
> >>
> >> If you are able to build the latest 9.0.x and test that, that could be
> >> helpful. Alternatively, I could provide a test build for you to
> >> experiment with.
> >>
> >> Some additional questions that might aid understanding:
> >>
> >> - What is the typical response size for one of these requests?
> >> - How long does a typical test take to process?
> >> - What are the GC roots for those RequestInfo objects?
> >>
> >> Thanks again,
> >>
> >> Mark
> >>
> >>
> >>
> >>
> >> On 25/06/2020 15:10, Chirag Dewan wrote:
> >>> Hi Mark,
> >>>
> >>> It's the default APR connector with 150 threads.
> >>>
> >>> Chirag
> >>>
> >>> On Thu, 25 Jun, 2020, 7:30 pm Mark Thomas, <ma...@apache.org> wrote:
> >>>
> >>>> On 25/06/2020 11:00, Chirag Dewan wrote:
> >>>>> Thanks for the quick check Mark.
> >>>>>
> >>>>> These are the images I tried referring to:
> >>>>>
> >>>>> https://ibb.co/LzKtRgh
> >>>>>
> >>>>> https://ibb.co/2s7hqRL
> >>>>>
> >>>>> https://ibb.co/KmKj590
> >>>>>
> >>>>>
> >>>>> The last one is the MAT screenshot showing many RequestInfo objects.
> >>>>
> >>>> Thanks. That certainly looks like a memory leak. I'll take a closer
> >>>> look. Out of interest, how many threads is the Connector configured
> >>>> to use?
> >>>>
> >>>> Mark
> >>>>
> >>>>
> >>>>>
> >>>>>
> >>>>> Thanks,
> >>>>>
> >>>>> Chirag
> >>>>>
> >>>>> On Wed, Jun 24, 2020 at 8:30 PM Mark Thomas <ma...@apache.org> wrote:
> >>>>>
> >>>>>> On 24/06/2020 12:17, Mark Thomas wrote:
> >>>>>>> On 22/06/2020 11:06, Chirag Dewan wrote:
> >>>>>>>> Hi,
> >>>>>>>>
> >>>>>>>> Update: We found that Tomcat goes OOM when a client closes and
> >>>>>>>> opens new connections every second. In the memory dump, we see a
> >>>>>>>> lot of RequestInfo objects that are causing the memory spike.
> >>>>>>>>
> >>>>>>>> After a while, Tomcat goes OOM and starts rejecting requests (I get
> >>>>>>>> request timeouts on my client). This seems like a bug to me.
> >>>>>>>>
> >>>>>>>> For better understanding, let me explain my use case again:
> >>>>>>>>
> >>>>>>>> I have a Jetty client that sends HTTP/2 requests to Tomcat. My
> >>>>>>>> requirement is to close a connection after a configurable (say
> >>>>>>>> 5000) number of requests/streams and open a new connection that
> >>>>>>>> continues to send requests. I close a connection by sending a
> >>>>>>>> GOAWAY frame.
> >>>>>>>>
> >>>>>>>> When I execute this use case under load, I see that after ~2 hours
> >>>>>>>> my requests fail and I get a series of errors such as request
> >>>>>>>> timeouts (5 seconds), invalid window update frames, and connection
> >>>>>>>> close exceptions on my client.
> >>>>>>>> On further debugging, I found that it's a Tomcat memory problem and
> >>>>>>>> it goes OOM after some time under heavy load with multiple
> >>>>>>>> connections being re-established by the clients.
> >>>>>>>>
> >>>>>>>> [embedded images - dropped by the list software]
> >>>>>>>>
> >>>>>>>> Is this a known issue? Or a known behavior with Tomcat?
> >>>>>>>
> >>>>>>> Embedded images get dropped by the list software. Post those images
> >>>>>>> somewhere we can see them.
> >>>>>>>
> >>>>>>>> Please let me know if you have any experience with such a
> >>>>>>>> situation. Thanks in advance.
> >>>>>>>
> >>>>>>> Nothing comes to mind.
> >>>>>>>
> >>>>>>> I'll try some simple tests with HTTP/2.
> >>>>>>
> >>>>>> I don't see a memory leak (the memory is reclaimed eventually) but I
> >>>>>> do see possibilities to release memory associated with request
> >>>>>> processing sooner.
> >>>>>>
> >>>>>> Right now you need to allocate more memory to the Java process to
> >>>>>> enable Tomcat to handle the HTTP/2 load it is presented with.
> >>>>>>
> >>>>>> It looks like a reasonable chunk of memory is released when the
> >>>>>> Connection closes that could be released earlier when the associated
> >>>>>> Stream closes. I'll take a look at what can be done in that area. In
> >>>>>> the meantime, reducing the number of Streams you allow on a Connection
> >>>>>> before it is closed should reduce overall memory usage.
> >>>>>>
> >>>>>> Mark
> >>>>>>
> >>>>>>
> >>>>>>
> >>>>>
> >>>>
> >>>>
> >>>>
> >>>
> >>
> >>
> >>
> >
>
>
>
