On 19/05/2015 00:07, Christoph Becker wrote:
>> As Rasmus suggested[1], in an issue linked through #60982, a simple
>> way to solve this would be to have both a soft and hard limit for
>> memory, and to allow the soft-limit to be a user callback, which could
>> call gc_collect_cycles, or not as the user desired.
> What happens if the soft limit is exhausted, but the GC can free only a
> little memory?  That might trigger the GC shortly afterwards again and
> again.  A user would have to carefully adjust the soft limit
> dynamically, to work around this problem.  Then again, it might be
> better than the current situation.

Apart from chewing CPU cycles, would that actually be a problem? The soft-limit callback would presumably be responsible for doing one of two things: gracefully terminating the request (the "pretty error page" use case) or reducing the used amount of memory (the "trigger GC" use case). If the memory usage was still over the soft limit when the callback ends, the engine could terminate as though the hard limit was reached. Rather than adding a soft limit, you could say that we are adding an additional reserve of memory only accessible to the memory-out callback.

If the callback freed at least some memory, the engine could execute at least one instruction before the soft limit was reached a second time, triggering the callback again. In the worst case, a loop could repeatedly push usage over the limit, triggering the callback repeatedly like a tick function; however, one of the following would then have to happen:

- the oscillation could continue for a while, but the loop or the whole program would eventually finish normally, just a bit slower than usual
- if the net amount of memory freed by the callback was slightly higher than the net amount allocated after it returned, memory usage would slowly decline, breaking out of the oscillation once it dropped below the soft limit
- if the amount of memory freed was lower than the amount allocated, memory usage would slowly grow, eventually reaching the hard limit
- if the amounts of memory freed and allocated were consistently identical, the oscillation could continue and cause the program to hit the execution timeout
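
These outcomes can be sketched with a toy simulation (again in Python, since the engine behaviour is hypothetical; the limits, allocation sizes, callback cost, and time budget are all made-up numbers). The "slowly declining" case and the "finishes normally" case both end the same way here, so the model distinguishes three terminal states:

```python
SOFT, HARD = 800, 1000  # invented units

def run_program(iterations, alloc_per_iter, freed_by_callback,
                callback_cost=10, time_budget=2000):
    """Toy model: a loop allocates persistent memory each iteration;
    whenever usage crosses the soft limit, the user callback frees what
    it can (e.g. by collecting cycles) at some CPU cost, like a tick
    function. Returns how the run ends."""
    used = time = 0
    for _ in range(iterations):
        used += alloc_per_iter
        time += 1
        if used > HARD:
            return "hard limit"         # callback frees less than the loop allocates
        if used > SOFT:
            used -= min(freed_by_callback, used)
            time += callback_cost       # the oscillating callback is not free
        if time > time_budget:
            return "execution timeout"  # freed == allocated: endless churn
    return "finished (slower)"          # callback keeps usage under control
```

With these numbers, a callback that frees more than the loop allocates lets the program finish (a bit slower), one that frees less creeps up to the hard limit, and one that frees exactly as much as is allocated churns until the execution timeout.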

Dynamically adjusting the limit would be no help, because if you're that hard up against the limit, your only hope is to gracefully end the process anyway.

--
Rowan Collins
[IMSoP]


--
PHP Internals - PHP Runtime Development Mailing List
To unsubscribe, visit: http://www.php.net/unsub.php
