We have a bit of a problem with how Linux handles huge pages when you run out of them.

I've been discussing it with Dmitry, and he recently committed a switch to master that disables huge pages in the main allocator:

https://github.com/php/php-src/commit/945a661912612cdecd221cd126feda7d4335c33c

Unfortunately, it looks like Linux is very unhappy when a copy-on-write fault hits a huge page that was allocated in the parent process before a fork. If there are no free huge pages left to back the child's copy, you get random seg faults/bus errors and everything blows up. I don't understand why Linux can't fall back to a regular page in this case, but it doesn't. It seems like an OS bug to me, but nonetheless it affects mod_php/php-fpm, since we allocate a huge page before the fork, and it also affects any CLI script that does a pcntl_fork(). As far as I can tell there is no way to fix it on our side. We could try to make sure we never do a MAP_HUGETLB mapping prior to the fork in mod_php/php-fpm, but that still wouldn't solve the pcntl_fork() case.

Anatol, we should merge Dmitry's patch into PHP-7.0, and further I think we should flip the default to zend_mm_use_huge_pages = 0 there.

If you have well-behaved code that never tries to allocate a lot of memory, you can turn on huge pages, but I think we should default to safety here. Those SIGBUS storms are quite alarming and hard to track down.

There have also been some other issues with huge pages as per these bugs:

   https://bugs.php.net/70984
   https://bugs.php.net/71272
   https://bugs.php.net/71355

This is very much an expert feature that we need to document better, and people should know what they are getting into when they turn it on.

-Rasmus
