Torsten Foertsch wrote:
Try $r->push_handlers(PerlFixupHandler => 'My::fixup_handler').
Doesn't make any difference unfortunately. Strange though, when I use
the hard reference \&fixup_handler, the lookup-errors (which are
sometimes corrupted) in the log still display as My::fixup_handler.
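(For readers following along, a minimal sketch of the two registration styles being compared is below. The package and subroutine names are illustrative; the thread does not show the actual handler code.)

  # httpd.conf: PerlPostReadRequestHandler My::post_read_request_handler
  package My;
  use strict;
  use warnings;
  use Apache2::RequestUtil ();   # provides $r->push_handlers
  use Apache2::Const -compile => qw(OK DECLINED);

  sub post_read_request_handler {
      my $r = shift;
      # register the fixup handler by symbolic name, as Torsten suggests ...
      $r->push_handlers(PerlFixupHandler => 'My::fixup_handler');
      # ... or as a hard code reference, the form the segfaults were traced to:
      # $r->push_handlers(PerlFixupHandler => \&fixup_handler);
      return Apache2::Const::OK;
  }

  sub fixup_handler {
      my $r = shift;
      return Apache2::Const::DECLINED;   # does nothing, as in the test case
  }

  1;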
On Tuesday 12 June 2007 17:40, Jani M. wrote:
> After a few moments of trial and error, I tracked the problem down to
> $r->push_handlers(PerlFixupHandler => \&fixup_handler). The segfaults
> start even if this is the only thing the original PostReadRequestHandler
> does, and even if the fixuphandler [...]
Jani M. wrote:
Jani M. wrote:
After a few moments of trial and error, I tracked the problem down to
$r->push_handlers(PerlFixupHandler => \&fixup_handler). The segfaults
start even if this is the only thing the original PostReadRequestHandler
does, and even if the fixuphandler does absolutely nothing [...]
Jani M. wrote:
I've still been trying to identify the root cause of the crashes, but
unfortunately haven't had success yet.
Looks like success at last. I managed to tune my test environment server
load from <30 requests/s to 500+ requests/s. This in turn started
producing segfaults in the test [...]
Jani M. wrote:
As far as I have seen, all the segfaults seem to be mgv related, but not
at the exact same spot.
I've still been trying to identify the root cause of the crashes, but
unfortunately haven't had success yet. Some of the things I've tried so far:
- Different PerlInterpScope settings [...]
On Mon, 4 Jun 2007 14:30:01 -0400
"Perrin Harkins" <[EMAIL PROTECTED]> wrote:
> > I know what you mean, but the problem here is that this mod_perl
> > server *is* the reverse proxy :) There are several backend servers
> > which this server will both proxy and cache the content for -
> mod_perl [...]
Try "PerlInterpScope request" and see if the coredumps disappear. Further
check if it's always this place.
PerlInterpScope request did not help. I just tested, and still got a
segfault.
As far as I have seen, all the segfaults seem to be mgv related, but not
at the exact same spot.
One odd [...]
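(For context, the directive being tested here is a plain httpd.conf setting; with "request" scope one interpreter is held for the whole request rather than being selected per handler. A minimal illustration, not taken from Jani's actual configuration:

  # httpd.conf -- tie a Perl interpreter to the request for its lifetime
  PerlInterpScope request

The other accepted scopes are "handler", "subrequest" and "connection".)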
On Wednesday 06 June 2007 10:55, Jani M. wrote:
> (gdb) btt 4
> [Switching to thread 4 (process 2133)]
> #0  0xb77eb15a in modperl_mgv_as_string (my_perl=0x8662c58, symbol=0x8178190,
>     p=0x8938438, package=0) at modperl_mgv.c:399
> 399     modperl_mgv.c: No such file or directory.
>         in modperl_mgv.c
Perrin Harkins wrote:
> Honestly, the person who has done the most work on debugging thread
> crashes is Torsten. His advice on how to debug it will be better than
> mine. It does seem like people usually solve them by using backtrace
> analysis though.
Getting back to this, I've now had time to [...]
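(For anyone who wants to do the same kind of backtrace analysis on a threaded mod_perl crash, a typical gdb session over a core file looks roughly like this. The binary, core and source paths are guesses, not details from the thread:

  $ gdb /usr/sbin/apache2 core.2133
  (gdb) thread apply all bt        # backtrace of every thread
  (gdb) thread 4                   # switch to the crashing thread
  (gdb) bt full                    # same trace, with local variables
  (gdb) directory /usr/src/mod_perl-2.0.x/src/modules/perl
  (gdb) list modperl_mgv.c:399     # gdb can now show the source line

The "directory" command is what makes the "modperl_mgv.c: No such file or directory" message go away: it tells gdb where the mod_perl C sources live.)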
On 6/4/07, Jani M. <[EMAIL PROTECTED]> wrote:
That's unfortunately exactly the problem. It's not authentication, but
the mod_perl bits are still required before proxying can be done.
The scenario I was referring to is where your front-end proxy is not
running mod_perl, and it just sends the requests [...]
Perrin Harkins wrote:
If what you're saying is that you can't separate out the mod_perl bits
with a proxy because they do things like authentication, you might be [...]
That's unfortunately exactly the problem. It's not authentication, but
the mod_perl bits are still required before proxying can be done.
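(As an aside for readers: one common recipe for letting a mod_perl handler do its own work and then hand the request on to mod_proxy is sketched below. This only illustrates the general technique; the backend URL, package name and handler phase are assumptions, not Jani's actual code.

  package My::Proxy;
  use strict;
  use warnings;
  use Apache2::RequestRec  ();   # proxyreq, filename, unparsed_uri
  use Apache2::RequestUtil ();   # handler
  use Apache2::Const -compile => qw(OK PROXYREQ_REVERSE);

  sub fixup_handler {
      my $r = shift;

      # ... authentication / cache lookup would happen here ...

      my $backend = 'http://backend.example.com';            # illustrative
      $r->proxyreq(Apache2::Const::PROXYREQ_REVERSE);        # reverse proxy
      $r->filename('proxy:' . $backend . $r->unparsed_uri);  # mod_proxy target
      $r->handler('proxy-server');                           # mod_proxy serves it
      return Apache2::Const::OK;
  }

  1;
)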
On 6/4/07, Issac Goldstand <[EMAIL PROTECTED]> wrote:
I'm a bit confused here... Perrin, isn't what Jani is mentioning here
exactly what Stas wanted to accomplish (well, one specific detail of
what he wanted to accomplish) with mp2, with the specific result in mind
of eliminating the common Apache [...]
I'm a bit confused here... Perrin, isn't what Jani is mentioning here
exactly what Stas wanted to accomplish (well, one specific detail of
what he wanted to accomplish) with mp2, with the specific result in mind
of eliminating the common Apache 1 issue of using the 2 backends
(mp/static) with a single [...]
On 6/4/07, Jani M. <[EMAIL PROTECTED]> wrote:
With prefork, running ~ 600 processes results in roughly 700MB of memory
consumption. This obviously gives us 600 client "download slots", and
600 perl interpreters, and leaves some memory for other processes and
disk caching.
With the threaded worker [...]
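(For readers unfamiliar with the threaded setup being discussed: under the worker MPM the number of client slots and the number of Perl interpreters are sized independently, roughly like this. All values are illustrative, not Jani's actual configuration:

  # httpd.conf -- worker MPM: many client slots from a few processes
  <IfModule mpm_worker_module>
      ServerLimit       16
      ThreadsPerChild   64
      MaxClients      1024
  </IfModule>

  # mod_perl interpreter pool, sized separately from the thread count
  PerlInterpStart      2
  PerlInterpMax        8
  PerlInterpMinSpare   1
  PerlInterpMaxSpare   4
)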
Torsten Foertsch wrote:
These figures match my observations. It also means that you are not using big
content generators in the response phase like Catalyst.
Correct. Response content is generated by the backend servers and is then
delivered to clients either from the local cache or via mod_proxy [...]
On Monday 04 June 2007 18:43, Jani M. wrote:
> For comparison, a server with one gig of memory can run roughly 600
> preforked workers with some spare memory left for disk caching. Any more
> than this, and performance drops as disk access increases.
These figures match my observations. It also means [...]
Perrin Harkins wrote:
No, it's just the opposite. Using prefork won't save CPU, but it will
save memory, meaning you can run more perl interpreters.
I'm sorry, but I have to disagree here - especially with the
configuration I need. Let's take the server with 1 GB of memory as an
example.
With [...]
On 6/4/07, Jani M. <[EMAIL PROTECTED]> wrote:
You are correct in that performance (CPU usage) might be worse with the
threaded workers. However, scalability is definitely much better.
No, it's just the opposite. Using prefork won't save CPU, but it will
save memory, meaning you can run more perl interpreters. [...]
Perrin Harkins wrote:
Are you sure about that? On Linux, prefork is likely to perform
better. Threads will use up a lot more memory, and be slow to spawn.
I think you should try a little benchmarking before you put a lot more
time into using threads.
You are correct in that performance (CPU usage) [...]
On 6/3/07, Jani M. <[EMAIL PROTECTED]> wrote:
The problems start only when two or more interpreters per
process are running - unfortunately, this is exactly what I would need
for scalability and performance.
Are you sure about that? On Linux, prefork is likely to perform
better. Threads will use [...]
Hi all,
I have a problem with segmentation faults which I've been unsuccessful
at solving so far. These only occur when running the handler+filter in
a threaded worker-mpm setup, a preforked environment does not have any
problems.
First, the environment. I am running on Debian Etch, with Apache [...]