On 25 Jan 2012, at 13:41, Octavian Rasnita wrote:
> Is closing and starting starman now and then the only solution?
No, and I didn't suggest that at any point.
You ask Starman to restart child processes after N requests. This is
entirely different from restarting Starman itself, as there is no
interruption in service - a worker only quits after finishing handling
its current request.
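A minimal sketch of what that looks like on the command line (the
app.psgi filename and worker count here are illustrative, not from the
original message): Starman's --max-requests option makes each worker
exit and be replaced after serving that many requests.

```shell
# Start 5 preforked workers; each worker exits after 1000 requests
# and the master forks a fresh replacement, so service is continuous.
starman --workers 5 --max-requests 1000 app.psgi
```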
And memory leakage aside, this is the only practical way to maximise
memory sharing: pages which were shared with the parent _will_ become
unshared as you continue processing requests. There isn't anything
you can do about this; it's just how copy-on-write works...
(As Perl is interpreted, your Perl code lives in 'data' pages rather
than executable pages, so you don't and can't get the same memory
sharing you get in C, where the code pages are always shared with
your child processes because you're executing the same program.

Or, rather, you do get exactly the same semantics: the perl binary
itself, and any shared objects (.so files) you have loaded in the
parent, will be shared with the children forever. But this is generally
a small proportion of your memory use compared to your Perl code and
data structures, which all live in 'data' pages - meaning that simply
running your program causes static Perl code in data pages to become
unshared.)
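The copy-on-write behaviour described above can be observed at the
process level. A minimal sketch (in Python rather than Perl, purely
for illustration - the variable names are made up): after fork(), the
child's first write to a shared page gives it a private copy, so the
parent's view of the data is untouched.

```python
import os

# A bytearray living in a writable 'data' page; after fork() both
# processes initially share the same physical page.
data = bytearray(b"shared")

r, w = os.pipe()
pid = os.fork()
if pid == 0:
    # Child: the first write to the page triggers copy-on-write,
    # so the child now has its own private copy of the page.
    os.close(r)
    data[0:6] = b"copied"
    os.write(w, bytes(data))
    os._exit(0)
else:
    os.close(w)
    child_view = os.read(r, 6)   # what the child sees: b"copied"
    os.waitpid(pid, 0)
    # The parent's page was never modified, so it still reads b"shared".
    print(bytes(data).decode(), child_view.decode())
```

The same mechanism is why a preforking server's workers gradually stop
sharing pages with the master: every write into a shared page, however
small, unshares that whole page.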
Cheers
t0m
_______________________________________________
List: [email protected]
Listinfo: http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/catalyst
Searchable archive: http://www.mail-archive.com/[email protected]/
Dev site: http://dev.catalyst.perl.org/