Thanks Mark.

The same application is running in a Jetty 9 server, and I ran a test for 5
hours with 300,000 requests (a moving window of 9 minutes) and a 10 GB heap.
Jetty didn't crash with an OOM, so I guess my application is not the source
of the OOM.

I'm currently using Tomcat 7.0.50 in production and it is doing well, and I
don't want to migrate to Jetty just for long polling (implemented using
AsyncResponse).
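
In case it matters, here is a simplified sketch of the pattern I'm following
on top of the Servlet 3.0 async API (class and member names are illustrative,
not the actual code). The part I'm trying to get right is that every parked
AsyncContext is removed and completed on timeout or error, so idle pollers
don't pile up on the heap:

import java.io.IOException;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Simplified long-polling servlet on the Servlet 3.0 async API.
@WebServlet(urlPatterns = "/poll", asyncSupported = true)
public class LongPollServlet extends HttpServlet {

    // Requests parked until data arrives; they must be removed on timeout or
    // error, otherwise the AsyncContexts (and their buffers) stay on the heap.
    private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<AsyncContext>();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
        final AsyncContext ctx = req.startAsync();
        ctx.setTimeout(30000); // long-poll timeout in milliseconds

        ctx.addListener(new AsyncListener() {
            public void onTimeout(AsyncEvent e) throws IOException {
                waiting.remove(ctx);  // drop the reference or it leaks
                ctx.complete();
            }
            public void onError(AsyncEvent e) throws IOException {
                waiting.remove(ctx);
                ctx.complete();
            }
            public void onComplete(AsyncEvent e) throws IOException {
                waiting.remove(ctx);
            }
            public void onStartAsync(AsyncEvent e) throws IOException {
            }
        });

        waiting.add(ctx);
    }

    // Called by the application when an event is available for one waiting client.
    void publish(String message) {
        AsyncContext ctx = waiting.poll();
        if (ctx != null) {
            try {
                ctx.getResponse().getWriter().write(message);
            } catch (IOException ignored) {
                // client went away; nothing to do
            } finally {
                ctx.complete();
            }
        }
    }
}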

Any suggestions?

Regards
Anurag
 On Aug 22, 2014 2:10 PM, "Mark Thomas" <ma...@apache.org> wrote:

> On 22/08/2014 06:03, anurag gupta wrote:
> >
> >
> > Hi All,
> >
> >  I'm trying to implement long polling using the Servlet 3.0 spec.
> > Implementation-wise it's done and works fine in Tomcat. The problem occurs
> > when it is under load: for example, when we send just 100,000 requests we
> > see weird behaviour such as requests timing out before the defined timeout,
> > and Tomcat going OOM because the GC overhead limit is exceeded.
>
> The root cause of the OOM is most likely your application rather than
> Tomcat.
>
> > I have tried this on two different versions of Tomcat (mentioned in subject).
> >
> > OS: CentOS 6.5
> > Process memory: 10 GB (both -Xmx and -Xms)
> >
> > So I have a question: up to how many concurrent open (idle) connections
> > can a Tomcat instance handle?
>
> As many as your operating system will allow. (Hint: It will be less than
> 100k).
>
> > How do I achieve the maximum number of idle connections?
>
> Fix your application so it doesn't trigger an OOME.
>
> Tune your OS.
>
> Mark
>
