Eric
Good point. Further investigation (snooping with tcpdump) shows that
the browser (Firefox 3.5.9) sometimes retries the request. The number
of retries appears random.
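For anyone wanting to reproduce the observation, a capture along these lines is enough to see the browser re-sending the same request (a sketch; the interface name `eth0` and port 80 are assumptions about the setup, and the command needs root on a live server):

```shell
# Print HTTP traffic in ASCII so repeated GET/POST lines from the same
# client are visible; -n skips DNS lookups, -A dumps packet payloads.
tcpdump -i eth0 -n -A 'tcp port 80'
```

Counting identical request lines in the output shows how many times the browser retried.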
I don't think we have client-side (JavaScript) code retrying requests,
but I'm not certain.
BR
A
PS. Note that for wget:
On Wed, Mar 31, 2010 at 7:55 PM, ARTHUR GOLDBERG wrote:
> httpd processes die as expected when their VM size reaches 1000 MB.
>
> But here's the problem: after the httpd process serving the request dies, a new one
> is created to handle the same request. And so on.
> I think this is all done in Apache, a
Hi us...@httpd
As I mentioned in an earlier email, we're running mod_perl on Apache
(2.2) on RHEL, using the prefork MPM. I want to protect my server
against Perl processes that grow much too large, as they can slow or
even freeze the system.
Therefore, I'm using Apache2::Resource, adding t
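A configuration along these lines matches the setup being described, a sketch assuming the limit is meant to be the 1000 MB address-space cap mentioned above (the soft:hard values are illustrative; Apache2::Resource reads the `PERL_RLIMIT_*` environment variables in MB):

```apache
<IfModule mod_perl.c>
    PerlModule Apache2::Resource
    # Cap each child's address space at 1000 MB (soft:hard limits)
    PerlSetEnv PERL_RLIMIT_AS 1000:1000
    # Apply the rlimits when each child process starts
    PerlChildInitHandler Apache2::Resource
</IfModule>
```

With the prefork MPM, each child that exceeds the limit is killed by the kernel when an allocation pushes it past the cap, which is consistent with the "processes die as expected" behavior reported above.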