One of my associates suggested this patch. The idea is to reduce
loop_sleep if we had to spawn a lot of children last time through the
loop. That way, on restart (since $created_children is initialized to
$idle_children) and on sudden concurrency spikes, we loop quickly
enough to spawn the children [...]
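A sketch of how such an adjustment might look. The variable names mirror the qpsmtpd-prefork globals, but the scaling rule below is hypothetical, not the patch as posted:

```perl
use strict;
use warnings;

# Hypothetical sketch: pick the next loop_sleep from how many children
# the previous pass had to create.  $created_children and
# $idle_children follow the qpsmtpd-prefork globals; the exact scaling
# is illustrative only.
sub next_loop_sleep {
    my ($created_children, $idle_children, $max_sleep) = @_;

    # A full batch spawned last pass means demand is rising, so poll
    # again almost immediately.
    return 1 if $created_children >= $idle_children;

    # Otherwise back off toward $max_sleep as spawning quiets down.
    my $sleep = $max_sleep - $created_children;
    return $sleep > 1 ? $sleep : 1;
}
```

Because $created_children starts out equal to $idle_children, a freshly restarted parent would take the fast path on its first pass, which is exactly the restart behaviour the patch is after.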
Jared Johnson wrote:
Why not set --idle-children at start-up to something higher (or just 0
to disable)?
Setting it higher is a bit bothersome because our QP children use too
much memory right now (mainly from using DBIx::Class), so it would be a
bit unfortunate to have more memory-consuming kids around that aren't [...]
Diego d'Ambra wrote:
[...]
But you're right, there is also code in the reaper function to remove
the array of terminated children, hmmm... I think we should delete that.

This can't be deleted - the parent uses it to track children, for
clean-up and a possible reset of shared memory.
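For readers following along, the bookkeeping being discussed looks roughly like this. It is a simplified sketch: the real qpsmtpd-prefork keeps these records in shared memory so children can update them, while this version uses an ordinary hash to show the parent's side only:

```perl
use strict;
use warnings;
use POSIX ":sys_wait_h";

# pid => time the child was spawned.  qpsmtpd-prefork keeps the
# equivalent records in shared memory; a plain hash suffices here.
my %children;

sub spawn_child {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    if ($pid == 0) {
        # Child: the real code would accept() and serve SMTP here.
        exit 0;
    }
    $children{$pid} = time;    # parent records its child
    return $pid;
}

sub reaper {
    # Collect every exited child and drop its record.  A record that
    # survives for a pid that no longer exists is what the debug log
    # later reports as an "orphaned child".
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        delete $children{$pid};
    }
}
```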
Robert Spier wrote:
Diego d'Ambra wrote:
Charlie Brady wrote:
On Fri, 29 May 2009, Diego d'Ambra wrote:
[...]
Latest version of prefork also handles a possible race better, the
parent will detect a lock and reset shared memory.
Sorry, I have to correct myself; that's not true. Apparently my
previously suggested changes [...]
Jared Johnson wrote:
Even if you're not near max children, the parent will only spawn max
idle children; then it sleeps until an event, wakes up, and sees if
more children are needed. The debug log should give some clue if this
is the reason.

Bingo. Looking further into things, it was apparent that a freshly
restarted node [...]
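The behaviour described above, topping the pool up by at most the idle-children count per pass and then sleeping, can be sketched as follows (the function and parameter names are hypothetical, not qpsmtpd-prefork's actual internals):

```perl
use strict;
use warnings;

# Hypothetical helper: how many children to spawn this pass.  The key
# detail from the thread is the cap: even during a sudden spike, at
# most $idle_children new processes are created before the parent goes
# back to sleep.
sub spawn_batch {
    my ($busy, $running, $max_children, $idle_children) = @_;

    my $wanted = $busy + $idle_children;             # keep a spare pool
    $wanted = $max_children if $wanted > $max_children;

    my $need = $wanted - $running;
    $need = $idle_children if $need > $idle_children;    # per-pass cap
    return $need > 0 ? $need : 0;
}
```

With 5 children running, 20 busy connections, and --idle-children 5, this yields 5: the parent spawns 5, sleeps, and only catches up over several passes, which matches the slow ramp-up being reported.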
Charlie Brady wrote:
On Fri, 29 May 2009, Diego d'Ambra wrote:
[...]
Latest version of prefork also handles a possible race better, the parent
will detect a lock and reset shared memory.
Sorry, I have to correct myself; that's not true. Apparently my previously
suggested changes didn't make [...]
On Fri, 29 May 2009, Diego d'Ambra wrote:
Jared Johnson wrote:
> What's orphaned is not a child process, but a shared mem hash record for
> a process which no longer exists. I suspect that code is racy.
Hrm, then if we're getting a whole lot of these, does this mean child
processes are going away at a high rate? I wouldn't expect such a
condition using prefork [...]
Inbound and outbound email scanned for spam and viruses by the
DoubleCheck Email Manager v5: http://www.doublecheckemail.com
Do we have to be exposed to this spam?
*blush*
Since I administer our qp installation that adds those, I suppose I
could exempt myself without anybody noticing :)
-J
On Fri, 2009-29-05 at 11:47 -0500, Larry Nedry wrote:
> Hey Guy,
Better to CC the list.
>
> I'd like a copy of your script please.
http://p6.hpfamily.net/myTune
Enjoy. This version is public domain but it'll probably be GPL/Artistic
if I find time to improve it.
I have a different email address [...]
On Thu, 28 May 2009, Jared Johnson wrote:
We're experiencing some strange issues and have been looking at
qpsmtpd-prefork's output with $debug set. We're getting a whole lot of lines
like this:
orphaned child, pid: 1285 removed from memory at /usr/bin/qpsmtpd-prefork
line 598.
...
Any ideas? [...]
On Fri, 2009-29-05 at 08:26 -0500, Jared Johnson wrote:
> The basic problem we've been encountering is that very rarely, all of
> our dozen QP nodes inexplicably introduce long delays before answering
> with a banner (no banner delay involved); watching the logs, it doesn't
> look like any child [...]
No, I don't think it's normal. What are you doing in your plugins? Wasn't
there some issue we uncovered a while ago to do with MySQL?

I don't recall hearing of issues with MySQL... we use Postgres here and
do, well, lots of stuff: lookups for global and ip-based max
concurrency rules in hooks [...]
On Thu, 28 May 2009, Jared Johnson wrote:
orphaned child, pid: 1285 removed from memory at /usr/bin/qpsmtpd-prefork
line 598.
[snip]
Is this expected behavior? Note that since we have some customizations to
qpsmtpd-prefork and plenty of other forked code, I'm not necessarily ready to
call [...]
We're experiencing some strange issues and have been looking at
qpsmtpd-prefork's output with $debug set. We're getting a whole lot of
lines like this:
orphaned child, pid: 1285 removed from memory at
/usr/bin/qpsmtpd-prefork line 598.
Where line 598 in my particular (slightly modified) version [...]
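One plausible shape for the clean-up that emits that message — a guess at the mechanism, not the actual code at line 598; `kill` with signal 0 is the usual existence probe:

```perl
use strict;
use warnings;

# Hypothetical sketch of an orphaned-record sweep: walk the child
# records and drop any whose process is gone.  kill with signal 0
# delivers nothing; it only tests whether the pid still exists.
sub clean_orphans {
    my ($children) = @_;
    my @removed;
    for my $pid (keys %$children) {
        next if kill 0, $pid;          # process still alive
        delete $children->{$pid};
        push @removed, $pid;
        warn "orphaned child, pid: $pid removed from memory\n";
    }
    return @removed;
}
```

If a child exits and is reaped between two such sweeps, the sweep and the reaper both want to drop the same record, which would fit the "I suspect that code is racy" remark earlier in the thread.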