Hey Folks,

I'm not subscribed to the list, so please reply to me directly.

I have been having some problems with memory fragmentation in PHP when
working with large data structures.

Once a script finishes executing, the garbage collector has technically
freed the RAM, but the pages are still allocated to the process. The next
time the process runs a script there is a whole pile of swapping, and if
another memory-hungry script gets run, the memory footprint increases
further.

Now consider what happens when I have 20 processes in an FCGI pool, each
one growing above 300 MB: that's roughly 6 GB of RAM for the pool alone,
which is not good.

My solution, which I have implemented as part of the FCGI SAPI (thanks to
GeorgeS, hartmut, Pierre & Derick for their guidance), is to introduce
two new environment variables: PHP_FCGI_MAX_RAM_MB and
PHP_FCGI_MAX_RAM_INCREASE.

PHP_FCGI_MAX_RAM_MB sets a limit (in megabytes) on how large an FCGI
process is allowed to grow.

PHP_FCGI_MAX_RAM_INCREASE sets a maximum percentage increase relative to
the process's size when it first started.

If either of these limits is exceeded, the process will be restarted
AFTER script execution has completed, unlike the --enable-memory-limit
behaviour, which aborts a script mid-execution.
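
To make the behaviour concrete, here is a rough sketch of the check in C
(simplified; the function and variable names are illustrative, NOT the
actual patch, and get_process_rss_mb() is a hypothetical helper, e.g.
something that parses /proc/self/statm on Linux):

#include <stdlib.h>

static long startup_rss_mb;    /* process size recorded once at startup */
static long max_ram_mb;        /* PHP_FCGI_MAX_RAM_MB; 0 = no limit */
static long max_increase_pct;  /* PHP_FCGI_MAX_RAM_INCREASE; 0 = no limit */

/* Hypothetical helper: current resident set size in MB. */
extern long get_process_rss_mb(void);

/* Called once when the FCGI worker starts up. */
static void init_ram_limits(void)
{
    const char *s;

    startup_rss_mb = get_process_rss_mb();
    if ((s = getenv("PHP_FCGI_MAX_RAM_MB")) != NULL)
        max_ram_mb = atol(s);
    if ((s = getenv("PHP_FCGI_MAX_RAM_INCREASE")) != NULL)
        max_increase_pct = atol(s);
}

/* Called after each request, once script execution has finished, so a
 * restart never interrupts a running script. */
static void check_ram_limits(void)
{
    long rss = get_process_rss_mb();

    /* Absolute cap exceeded? */
    if (max_ram_mb > 0 && rss > max_ram_mb)
        exit(0);  /* clean exit; the FCGI parent respawns the worker */

    /* Grown more than the allowed percentage over the startup size? */
    if (max_increase_pct > 0 &&
        rss * 100 > startup_rss_mb * (100 + max_increase_pct))
        exit(0);
}

So, for example (values just for illustration), PHP_FCGI_MAX_RAM_MB=256
with PHP_FCGI_MAX_RAM_INCREASE=50 would recycle a worker once it passes
256 MB, or once it has grown 50% beyond its startup size, whichever
happens first.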

Please get back to me on:

1. your thoughts on this

2. an idea of how big an average PHP process should be (mine are 150 MB
when they start, 160 MB after running some average scripts, and grow
above 300 MB when working with big data structures and complex
object-caching code)

3. how do I get this into the PHP source tree? (sorry, newbie)

Regards,

ap.
