Another good idea... :)

But I am transfixed by this problem...  I can't seem to get each forked
Apache server process to have both a global hash shared among all of its
cloned interpreters *and* one thread per process that runs in the
background doing housekeeping.  I can think of numerous things this
would be useful for.

I know I am close, but I can't quite grasp what I am missing.  I
thought PerlChildInit handlers were called for each forked child from its
first/main interpreter (the one that all the others are cloned from).


On Mon, 2005-01-17 at 13:59 -0500, Perrin Harkins wrote:
> On Mon, 2005-01-17 at 11:25 -0500, Richard F. Rebel wrote:
> > Unfortunately, it's high volume enough that it's no longer possible to
> > keep these counters in the databases updated in real time.  (updates are
> > to the order of 1000's per second).
> 
> I would just use BerkeleyDB for this, which can easily keep up, rather
> than messing with threads, but I'm interested in seeing if your
> threading idea will work well.
> 
> > * An overseer/manager thread that wakes up once every so often and
> > updates the MySQL database with the contents of the global shared hash.
> 
> Rather than doing that, why not just update it from a cleanup handler
> every time the counter goes up by 10000 or so?  Seems much easier to me.
> 
> - Perrin
> 
-- 
Richard F. Rebel

cat /dev/null > `tty`
