Apologies for leaving you alone in the discussion yesterday!

Let me clarify: the scenario is that the client polls for events.
That is, it sends the next poll request only once it has received the answer
to the previous one from the server.
(So how could I get Tomcat to create hundreds of request objects with four
browsers, if each of them has only a single request in flight at a time?)
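To make the "one request in flight per browser" point concrete, here is a
minimal sketch of that polling loop in plain Java; sendPoll is a hypothetical
stand-in for the real HTTP round trip, not code from my test client:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the client-side polling loop described above: the next poll
// request is issued only after the previous answer has arrived, so each
// client has at most one request in flight at any time.
public class PollingClient {

    // Stand-in for the real HTTP round trip (hypothetical); it blocks
    // until the server answers, here simulated as an immediate echo.
    static String sendPoll(int count) {
        return "RESPONSE(" + count + ")";
    }

    // Issue n sequential polls, numbering them with a client-side counter
    // (the count=... request parameter seen in the log below).
    static List<String> runPolls(int n) {
        List<String> answers = new ArrayList<>();
        for (int count = 0; count < n; count++) {
            answers.add(sendPoll(count)); // blocks; next poll only afterwards
        }
        return answers;
    }

    public static void main(String[] args) {
        System.out.println(runPolls(3)); // [RESPONSE(0), RESPONSE(1), RESPONSE(2)]
    }
}
```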

In the log file I can see a very regular pattern (where the request parameter
count=80 is the client-side request count, and (80) is the server-side request
count; the WARNUNG lines are just java.util.logging WARNING output from a
German locale):

25.04.2007 19:41:53 comettest.CometServlet event
WARNUNG: BEGIN(80) POST /comettest/comet/request?action=poll&count=80
25.04.2007 19:41:53 comettest.CometServlet$EventProvider run
WARNUNG: queue size: 1
25.04.2007 19:41:53 comettest.CometServlet$EventProvider sendResponse
WARNUNG: RESPONSE(80)
25.04.2007 19:41:53 comettest.CometServlet closeEvent
WARNUNG: CLOSE(80) by response provider
25.04.2007 19:41:53 comettest.CometServlet$EventProvider run
WARNUNG: queue size: 0
25.04.2007 19:41:53 comettest.CometServlet$EventProvider run
WARNUNG: sleeping 750 millis
25.04.2007 19:41:54 comettest.CometServlet event
WARNUNG: END(80) POST /comettest/comet/request?action=poll&count=80
25.04.2007 19:41:54 comettest.CometServlet closeEvent
WARNUNG: CLOSE(80) by event processor
25.04.2007 19:41:54 comettest.CometServlet event
WARNUNG: BEGIN(81) POST /comettest/comet/request?action=poll&count=81
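The regular pattern above can be modeled as a toy producer/consumer: a
container thread delivers BEGIN and enqueues the request, and a separate
response provider thread dequeues it, answers, and closes it. All names here
are illustrative stand-ins, not the actual servlet code, and the container-side
END events are omitted:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the lifecycle visible in the log: BEGIN on the container
// thread, RESPONSE and CLOSE on the response provider thread.
public class CometLifecycleModel {

    static List<String> run(int requests) {
        List<String> log = new CopyOnWriteArrayList<>();
        BlockingQueue<Integer> queue = new LinkedBlockingQueue<>();

        // Response provider thread (the servlet-side worker from the post).
        Thread provider = new Thread(() -> {
            try {
                for (int i = 0; i < requests; i++) {
                    int id = queue.take();
                    log.add("RESPONSE(" + id + ")");
                    log.add("CLOSE(" + id + ")");
                }
            } catch (InterruptedException ignored) { }
        });
        provider.start();

        try {
            // Container thread: one BEGIN per poll request; the (simulated)
            // client issues the next poll only after its event was closed.
            for (int id = 0; id < requests; id++) {
                log.add("BEGIN(" + id + ")");
                queue.put(id);
                while (!log.contains("CLOSE(" + id + ")")) Thread.sleep(1);
            }
            provider.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return log;
    }

    public static void main(String[] args) {
        System.out.println(run(2));
    }
}
```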

Now I look at this statement from a former post again:
> Processors are only recycled when the IO layer gets an event (read,
> disconnect, timeout), and if:
> - the event is a non timeout error
> - the event is closed by the servlet
> - the event has been closed asynchronously

I must admit that there is no read, disconnect, or timeout event - but there is
an END event, and since the CoyoteAdapter sets the type to END instead of READ
because the event was closed before, I think the mentioned precondition for
recycling is fulfilled.

The servlet was able to process 467 requests before the OutOfMemoryError
occurred, but in the memory dump I can see only 103 processor instances, so the
majority of the Processors were in fact recycled.
The mismatch might result from the fact that the browser sometimes closed the
connection in between, and sometimes did not - see below.

If I replace the event.close() in the response provider thread with an
outputStream.close(), I also get an END event, regardless of whether I set
Connection: close or not, and also with the NIOConnector.

But there is a difference: if the response provider thread sets the Connection:
close header before closing the stream, memory consumption is fairly stable for
a long time (I let it run for over an hour and saw no significant increase in
the Windows Task Manager), so let us assume that recycling works fine in that
situation.


MY CONCLUSION: I have to closely couple the connection lifecycle with the 
request lifecycle to make things work.


However, this is bad news for me - I would like to keep connections open as
long as possible, because opening a new connection after each poll request is
expensive, especially with an SSL connector.
(And what about malicious clients that ignore the Connection: close header?)

If I try to keep the connection open (i.e. I set the content length in the
response header and flush the OutputStream, but don't close it), the client
sends the subsequent request on the same connection.
The RequestProcessor for the old Request (yes, Request and not Connection -
that's why it is called RequestProcessor and not ConnectionProcessor) issues a
READ event in this case.
The Servlet tries to read and gets -1. This means that it does not actually
read from the connection - otherwise it would read the still unparsed
subsequent request. (We could say that, upon a READ event, the Servlet reads
further portions of the current request; it does not simply read from the
connection.)
As it got -1, the Servlet calls event.close(), which means: I am done with this
request (it does not mean: I am done with the connection).
Then a new (or a recycled) RequestProcessor is associated with the same
connection and processes the BEGIN event for the new request, but it seems as
if the old RequestProcessor is not always recycled.
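The READ handling just described could be sketched like this; MiniEvent is a
hypothetical stand-in for the Comet event API, not the actual interface:

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a READ handler: drain what belongs to the current request;
// a return value of -1 means "this request has no more data", so the
// servlet closes the event (the request, not the connection).
public class ReadHandler {

    // Hypothetical minimal stand-in for the Comet event (assumption).
    interface MiniEvent {
        InputStream getInputStream();
        void close();
    }

    // Returns true if the event was closed (on -1 or on an I/O error).
    static boolean onRead(MiniEvent event) {
        try {
            InputStream in = event.getInputStream();
            byte[] buf = new byte[256];
            int n;
            while ((n = in.read(buf)) > 0) {
                // process n bytes of this request's body (omitted)
            }
            if (n == -1) {     // end of *this request*, not of the connection
                event.close(); // "I am done with this request"
                return true;
            }
            return false;
        } catch (IOException e) {
            event.close();     // treat I/O errors like end-of-request
            return true;
        }
    }

    public static void main(String[] args) {
        final boolean[] closed = { false };
        MiniEvent e = new MiniEvent() {
            final InputStream in = new ByteArrayInputStream(new byte[0]);
            public InputStream getInputStream() { return in; }
            public void close() { closed[0] = true; }
        };
        System.out.println(onRead(e) + " closed=" + closed[0]); // true closed=true
    }
}
```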



I think that recycling could be organized much more clearly with a small
change in the execution model, which would also have the benefit of describing
the lifecycle of asynchronous requests more clearly:

- First of all, I would like to rename CometEvent to CometRequest, and make it
part of the contract between container and Servlet that one instance of this
class represents a request during the whole request lifecycle
(i.e. the Servlet is allowed to hold references to CometRequest objects for
further processing).

- Recycling of a CometRequest (and all associated objects including the 
RequestProcessor) should be shifted completely to the CometRequest.close() 
method.

- The only restriction would then be that CometRequest.close() cannot be
invoked from within the CometProcessor.event() method (obviously the
RequestProcessor cannot recycle itself while it is processing a request).
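A toy model of the proposed contract might look like this (purely hypothetical
API, not existing Tomcat code): close() is the single recycling point, and
calling it from inside event() is rejected:

```java
// Sketch of the proposed contract, renamed from CometEvent: one CometRequest
// instance spans the whole request lifecycle, the servlet may keep a
// reference to it, and close() is the single place where the request and its
// processor are recycled. close() must not run inside event() because the
// processor cannot recycle itself while it is still dispatching.
public class CometRequestSketch {

    interface CometRequest {
        void close();          // recycles request + processor; illegal inside event()
        boolean isRecycled();
    }

    static class ToyCometRequest implements CometRequest {
        private boolean inEventDispatch;
        private boolean recycled;

        // Container calls this to deliver an event to the servlet.
        void dispatch(Runnable servletEventHandler) {
            inEventDispatch = true;
            try { servletEventHandler.run(); }
            finally { inEventDispatch = false; }
        }

        @Override public void close() {
            if (inEventDispatch)
                throw new IllegalStateException("close() inside event() is not allowed");
            recycled = true; // processor and request are recycled here
        }

        @Override public boolean isRecycled() { return recycled; }
    }

    public static void main(String[] args) {
        ToyCometRequest req = new ToyCometRequest();
        req.dispatch(() -> { /* servlet handles BEGIN, keeps the reference */ });
        req.close(); // later, from the response provider thread
        System.out.println(req.isRecycled()); // true
    }
}
```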

For regular request processing, CometRequest.close() will be invoked by a
different thread anyway.
There could be some overhead for handling ERROR events, as these must be passed
to another thread for closing the CometRequest, but hopefully we could get rid
of most such situations with this change.
E.g. I would expect that a READ event after sending the response (while keeping
the connection alive) would no longer happen if the sender of the response uses
CometRequest.close(), because the RequestProcessor would then be detached from
the connection early enough.

There would surely still be a need for READ events, to enable the Servlet to
trigger further processing by other threads (i.e. another thread would actually
read from the input stream and react to an IOException or a -1 return value).

ERROR events will be required to enable the Servlet to trigger some cleanup in
another thread (e.g. removing the CometRequest instance from internal queues),
including a call to CometRequest.close().
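Such an ERROR handler could hand the cleanup off to a worker thread, roughly
like this (hypothetical sketch; the queue removal and request id are stand-ins):

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Sketch of ERROR handling under the proposed model: since close() may not
// run on the dispatching thread, the servlet schedules the cleanup (removing
// the request from its queues, then closing it) on a worker thread.
public class ErrorCleanup {

    static List<String> handleError(String requestId) {
        List<String> actions = new CopyOnWriteArrayList<>();
        ExecutorService cleanupPool = Executors.newSingleThreadExecutor();

        // Inside event(ERROR): do NOT close here; schedule instead.
        cleanupPool.submit(() -> {
            actions.add("dequeue " + requestId); // drop from internal queues
            actions.add("close " + requestId);   // CometRequest.close()
        });

        cleanupPool.shutdown();
        try {
            cleanupPool.awaitTermination(5, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return actions;
    }

    public static void main(String[] args) {
        System.out.println(handleError("80"));
    }
}
```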

The clear semantics of this approach would be: processing is the same as for
synchronous requests, except that the RequestProcessor thread is released from
processing early, another thread (or several threads that synchronize when
accessing the request) continues processing at any time it wants, and the
application fully controls the lifetime of CometRequest instances.


Matthias


-----Original Message-----
From: Rémy Maucherat [mailto:[EMAIL PROTECTED] 
Sent: Wednesday, April 25, 2007 6:42 PM
To: Tomcat Users List
Subject: Re: Memory Leak with Comet

On 4/25/07, Filip Hanik - Dev Lists <[EMAIL PROTECTED]> wrote:
> then it's normal behavior, he'll just have to wait for a timeout or the
> client will disconnect

I don't know for sure, of course, but it's my theory at the moment.

Rémy


---------------------------------------------------------------------
To start a new topic, e-mail: users@tomcat.apache.org
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
