I ran tests similar to Torsten's, and definitely noticed a
difference in memory usage, but didn't see any memory leaks. Memory
usage is about twice as high when using $r->print, but it does hit its
max fairly quickly. Memory usage also maxes out fairly quickly using the
bucket brigade method.
On Fri, May 28, 2010 at 7:00 AM, Nishikant Kapoor wrote:
> http://www.gossamer-threads.com/lists/modperl/modperl/101225, I came up with
> the following sub to display the output but the memory usage continues to
> build up without limit when web pages are rendered. Any suggestion/pointer
> would be greatly appreciated.
Hello,
Following the thread
http://www.gossamer-threads.com/lists/modperl/modperl/101225, I came up
with the following sub to display the output but the memory usage
continues to build up without limit when web pages are rendered. Any
suggestion/pointer would be greatly appreciated.
Here is
On Friday 19 March 2010 22:07:39 André Warnier wrote:
> In one of your initial posts, you mentioned sending a response with a
> Content-type "multipart/x-mixed-replace".
> What does that do exactly ?
> A pointer would be fine.
>
http://en.wikipedia.org/wiki/Push_technology#HTTP_server_push
Pavel Georgiev wrote:
Thanks, that did the job. I'm currently testing for side effects but it all
looks good so far.
Glad someone could help you.
I have been meaning to ask a question, and holding back.
In one of your initial posts, you mentioned sending a response with a
Content-type "multipart/x-mixed-replace".
Thanks, that did the job. I'm currently testing for side effects but it all
looks good so far.
On Mar 18, 2010, at 4:09 AM, Torsten Förtsch wrote:
> On Thursday 18 March 2010 11:54:53 Mårten Svantesson wrote:
>> I have never worked directly with the APR API but in the example above
>> couldn't you prevent the request pool from growing by explicitly reusing
>> the bucket brigade?
On Thursday 18 March 2010 11:54:53 Mårten Svantesson wrote:
> I have never worked directly with the APR API but in the example above
> couldn't you prevent the request pool from growing by explicitly reusing
> the bucket brigade?
>
> Something like (not tested):
>
> sub {
>my ($r) = @_;
>
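Filling in Mårten's sketch, a complete handler along those lines might look like the following. This is an untested reconstruction assuming mod_perl 2.x; the package name is hypothetical, and the /tmp/stop loop condition is borrowed from the examples elsewhere in the thread.

```perl
# A PerlResponseHandler that streams output through a single, reused
# bucket brigade instead of $r->print, so per-iteration buckets are
# released by $bb->cleanup rather than accumulating in the request
# pool.  Untested sketch.
package My::Streamer;   # hypothetical package name

use strict;
use warnings;

use Apache2::RequestRec ();
use Apache2::Connection ();
use Apache2::Filter ();
use APR::Brigade ();
use APR::Bucket ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my ($r) = @_;
    my $ba = $r->connection->bucket_alloc;

    # one brigade for the whole response
    my $bb = APR::Brigade->new($r->pool, $ba);

    until (-e "/tmp/stop") {
        $bb->insert_tail(APR::Bucket->new($ba, ("x" x 70) . "\n"));
        $bb->insert_tail(APR::Bucket::flush_create($ba));
        $r->output_filters->fflush($bb);
        $bb->cleanup;    # empty the brigade so it can be reused
    }

    $bb->insert_tail(APR::Bucket::eos_create($ba));
    $r->output_filters->pass_brigade($bb);
    return Apache2::Const::OK;
}

1;
```

The key point is that the brigade is created once and emptied with cleanup() after each flush, so nothing is allocated from the request pool inside the loop.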
Torsten Förtsch skrev:
On Thursday 18 March 2010 04:13:04 Pavel Georgiev wrote:
How would that logic (adding subpools and using them) be applied to my
simplified example:
for (;;) {
$request->print("--$this->{boundary}\n");
$request->print("Content-type: text/html; charset=utf-8;\n\n");
On Thursday 18 March 2010 10:16:07 Torsten Förtsch wrote:
> No, this one does not grow here.
>
> sub {
> ...
forgot to mention that the function is supposed to be a PerlResponseHandler.
Torsten Förtsch
--
Need professional modperl support? Hire me! (http://foertsch.name)
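Reconstructing the subpool idea referred to above: allocate each iteration's brigade from a short-lived subpool of the request pool. This is an untested sketch assuming mod_perl 2.x, where an APR::Pool subpool is destroyed when the Perl object goes out of scope.

```perl
# Per-iteration subpool: when $pool goes out of scope at the end of
# the loop body, the subpool and the brigade allocated from it are
# freed, so the request pool itself does not grow.  Untested sketch.
use strict;
use warnings;

use APR::Pool ();
use APR::Brigade ();
use APR::Bucket ();
use Apache2::Const -compile => qw(OK);

sub handler {
    my ($r) = @_;
    my $ba = $r->connection->bucket_alloc;

    until (-e "/tmp/stop") {
        my $pool = $r->pool->new;            # subpool of the request pool
        my $bb   = APR::Brigade->new($pool, $ba);
        $bb->insert_tail(APR::Bucket->new($ba, ("x" x 70) . "\n"));
        $bb->insert_tail(APR::Bucket::flush_create($ba));
        $r->output_filters->fflush($bb);
    }   # $pool is destroyed here, releasing the iteration's memory
    return Apache2::Const::OK;
}
```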
On Thursday 18 March 2010 04:13:04 Pavel Georgiev wrote:
> How would that logic (adding subpools and using them) be applied to my
> simplified example:
>
> for (;;) {
>$request->print("--$this->{boundary}\n");
>$request->print("Content-type: text/html; charset=utf-8;\n\n");
>$request->print("$data\n\n");
On Mar 17, 2010, at 11:27 AM, Torsten Förtsch wrote:
> On Wednesday 17 March 2010 12:15:15 Torsten Förtsch wrote:
>> On Tuesday 16 March 2010 21:09:33 Pavel Georgiev wrote:
>>> for (;;) {
>>>$request->print("--$this->{boundary}\n");
>>>$request->print("Content-type: text/html; charset=utf-8;\n\n");
On Wednesday 17 March 2010 12:15:15 Torsten Förtsch wrote:
> On Tuesday 16 March 2010 21:09:33 Pavel Georgiev wrote:
> > for (;;) {
> > $request->print("--$this->{boundary}\n");
> > $request->print("Content-type: text/html; charset=utf-8;\n\n");
> > $request->print("$data\n\n");
> > $request->rflush;
> > }
2010/3/17 Torsten Förtsch :
> The httpd process grows slowly but unlimited. Without the rflush() it grows
> slower but still does.
>
> With the rflush() its size increased by 100MB for an output of 220MB.
> Without it, it grew by 10MB for an output of 2.3GB.
>
> I'd say it's a bug.
I agree. This s
On 03/17/2010 05:15 AM, Torsten Förtsch wrote:
until ( -e "/tmp/stop" ) {
    $r->print(("x" x 70) . "\n");
    $r->rflush;
}
Just for the record:
With mp1 there isn't any mem leak with or without rflush.
(After 10 mins: output 109GB. Fedora 12's stock perl 5.10.0, apache
1.3.42, mod_perl
On Tuesday 16 March 2010 21:09:33 Pavel Georgiev wrote:
> for (;;) {
> $request->print("--$this->{boundary}\n");
> $request->print("Content-type: text/html; charset=utf-8;\n\n");
> $request->print("$data\n\n");
> $request->rflush;
> }
>
> And the result is endless memory growth in the httpd process.
André Warnier wrote:
Pavel Georgiev wrote:
Andre,
That is what I'm currently doing:
$request->content_type("multipart/x-mixed-replace;boundary=\"$this->{boundary}\";");
I don't think so. What you show above is a multipart message body,
which is not the same (and not the same level).
What you are looking for is a
Pavel Georgiev wrote:
Andre,
That is what I'm currently doing:
$request->content_type("multipart/x-mixed-replace;boundary=\"$this->{boundary}\";");
I don't think so. What you show above is a multipart message body,
which is not the same (and not the same level).
What you are looking for is a
Andre,
That is what I'm currently doing:
$request->content_type("multipart/x-mixed-replace;boundary=\"$this->{boundary}\";");
and then each chunk of prints looks like this (no length specified):
for (;;) {
$request->print("--$this->{boundary}\n");
$request->print("Content-type: text/html;
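For reference, the complete shape of such a push response: each part is introduced by the boundary prefixed with "--", carries its own headers, and the stream is terminated by the boundary followed by a trailing "--". This is a sketch only; $this->{boundary} is the placeholder from the thread, next_chunk() is a hypothetical data source, and, per the rest of the thread, driving this loop with $r->print/rflush is exactly what grows the request pool under mp2.

```perl
# Sketch of a full multipart/x-mixed-replace response body.
# next_chunk() is a hypothetical stand-in for the real data source.
$request->content_type(
    qq{multipart/x-mixed-replace;boundary="$this->{boundary}"});

while (defined(my $data = next_chunk())) {
    $request->print("--$this->{boundary}\n");        # part delimiter
    $request->print("Content-type: text/html; charset=utf-8\n\n");
    $request->print("$data\n\n");
    $request->rflush;                                # push the part now
}
$request->print("--$this->{boundary}--\n");          # closing boundary
```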
Pavel Georgiev wrote:
...
Let me make sure I'm understanding this right - I'm not using any buffers
myself, all I do is sysread() from a unix socket and print(),
it's just that I need to print a large amount of data for each request.
...
Taking the issue at the source : can you not arrange to sys
Pavel
You're welcome. You are correct about the limitations of
Apache2::SizeLimit. Processes cannot be 'scrubbed'; rather they should
be killed and restarted.
Rapid memory growth should be prevented by prohibiting processes from
ever growing larger than a preset limit. On Unix systems, the
Thank you both for the quick replies!
Arthur,
Apache2::SizeLimit is no solution for my problem as I'm looking for a way to
limit the size each request takes, the fact that I can scrub the process after
the request is done (or drop the requests if the process reaches some limit,
although my und
You could use Apache2::SizeLimit ("because size does matter") which
evaluates the size of Apache httpd processes when they complete HTTP
Requests, and kills those that grow too large. (Note that
Apache2::SizeLimit can only be used for non-threaded MPMs, such as
prefork.) Since it operates a
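A minimal setup along those lines (the size threshold here is illustrative, not a recommendation; prefork MPM only):

```perl
# startup.pl
use Apache2::SizeLimit ();

# Kill the child after a request completes if the process's total
# size exceeds roughly 200MB (the value is in KB and is illustrative).
Apache2::SizeLimit->set_max_process_size(200_000);
```

and in httpd.conf add "PerlCleanupHandler Apache2::SizeLimit" so the size check runs after each request.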
On Mon, Mar 15, 2010 at 11:26 PM, Pavel Georgiev wrote:
> I have a perl script running in mod_perl that needs to write a large amount
> of data to the client, possibly over a long period. The behavior that I
> observe is that once I print and flush something, the buffer memory is not
> reclaimed even though I rflush.
Hi,
I have a perl script running in mod_perl that needs to write a large amount of
data to the client, possibly over a long period. The behavior that I observe is
that once I print and flush something, the buffer memory is not reclaimed even
though I rflush (I know this can't be reclaimed back b
On 11 Sep 2004, at 20:35, Will Yardley wrote:
But businesses don't just let various random people off the street
write code and then load it on their servers.
Explain the success of PHP then. Apparently the cross-site holes
were tolerable for long enough...
Will Yardley (of DreamHost)
You guys suck.
Hi Will,
Thanks for taking the time to respond. What bothered me about your
policy is that it accuses mod_perl of these things, and yet you offer
PHP as an apache module. PHP has plenty of security issues when you
don't run it as CGI, and you can certainly crash the server with it.
> I'll see i
On Mon, Aug 30, 2004 at 02:12:27PM -0400, Perrin Harkins wrote:
> On Sun, 2004-08-29 at 11:53, Martin RJ Cleaver wrote:
[ Got a pointer to this thread, so just responding to clear a few things
up ]
> > https://panel.dreamhost.com/kbase/index.cgi?area=2446 says:
> > "We do not support mod_perl on
>
> That's not entirely true. It is in fact the case that mod_perl's
> *upper-bound* on memory usage is similar to the equivalent script
> running as a cgi.
>
> A well designed mod_perl application loads as many shared libraries as
> possible before Apache forks off the child processes. This takes
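Concretely, that preloading happens in a startup file pulled in before the fork. The module names below are just examples of what one might preload, not anything from this thread.

```perl
# startup.pl, loaded from httpd.conf via:  PerlRequire /path/to/startup.pl
# Everything use()d here is compiled once in the parent httpd; the
# forked children then share those memory pages copy-on-write.
use strict;
use warnings;

use DBI ();    # example: preload the database layer
use CGI ();
CGI->compile(':all');   # precompile CGI.pm's autoloaded methods

1;
```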
Hi all,
Thank you all for your responses, I am getting a better picture now.
I guess my hosting provider's concern is that they have a lot of clients who have infrequently running scripts. By design, mod_perl keeps things in memory for longer so that subsequent calls do not incur a reload of t
On Mon, 2004-08-30 at 14:12, Perrin Harkins wrote:
> The truth is that mod_perl uses the same amount of memory that Perl CGI
> scripts use. The difference is that CGI scripts exit as soon as they
> finish. Serving 10 simultaneous requests with CGI requires the same
> amount of memory as it does
Gossamer-Threads has a pretty cool mod_perl setup. You can get a
dedicated server or a shared account. Each shared account has its own
apache process, which means that you can log in and control (i.e. stop /
start / restart) your own modperl httpd.
All the mod_perl processes are behind a small,
Martin RJ Cleaver wrote:
https://panel.dreamhost.com/kbase/index.cgi?area=2446 says:
"We do not support mod_perl on our shared hosting plans.
We're sorry about this; the problem is that mod-perl is just too
much of a memory hog to attach to Apache (the web serving software),
and it introduces a great deal of instability.
We offer mod_perl support on shared hosting plans:
http://systame.com/html/web_hosting_plans.html
on 8/29/04 11:53 AM, Martin RJ Cleaver at [EMAIL PROTECTED] wrote:
> https://panel.dreamhost.com/kbase/index.cgi?area=2446 says:
>
> "We do not support mod_perl on our shared hosting plans.
>
Hub.org has mod_perl support, I've been very happy with them
-Original Message-
From: Martin RJ Cleaver [mailto:[EMAIL PROTECTED]
Sent: August 29, 2004 8:54 AM
To: [EMAIL PROTECTED]
Subject: Hosting provider disallows mod_perl - "memory hog / unstable"
https://panel.dreamhost.com/kbase/index.cgi?area=2446
https://panel.dreamhost.com/kbase/index.cgi?area=2446 says:
"We do not support mod_perl on our shared hosting plans.
We're sorry about this; the problem is that mod-perl is just too
much of a memory hog to attach to Apache (the web serving software),
and it introduces a great deal of instability.