On Wednesday, November 19, 2014 11:02:00 AM UTC-6, Tim.Mooney wrote:
>
>
> All- 
>
> For those of you that are using puppet on RHEL 6.x (/CentOS/Oracle 
> Linux/Scientific Linux/etc.) and have experienced ruby segfaults on 
> your puppet master(s), what workaround or workarounds have you been 
> using? 
>

I found that adding the following to config.ru stops the problem:

    ARGV << "--profile"

I am presuming this works because the --profile option holds a reference to 
every object created during a single catalog compile, which prevents the 
ruby GC bug from deciding that something still in use is no longer needed.
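
For anyone wondering where that line goes: here is a minimal sketch of a 
Puppet master config.ru with the workaround added, based on the stock rack 
config that ships with Puppet 3 (the confdir/vardir paths are the usual 
defaults; adjust for your layout):

    # config.ru for a Puppet master running under Passenger/rack
    $0 = "master"
    ARGV << "--rack"
    # Rack apps usually don't start as root, so point puppet at the
    # system confdir/vardir instead of ~/.puppet (default paths shown):
    ARGV << "--confdir" << "/etc/puppet"
    ARGV << "--vardir"  << "/var/lib/puppet"
    ARGV << "--profile"   # the workaround: keeps compile objects referenced
    require 'puppet/util/command_line'
    run Puppet::Util::CommandLine.new.execute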

I saw this problem only with our large catalogs; I presume that is because 
the memory required to compile a large catalog is what triggers a GC run in 
the first place.

Another option, though it is extremely memory intensive, is to disable GC 
entirely in config.ru:

    GC.disable
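
Placement-wise it is the same sketch as above, just with GC switched off 
before the application starts handling requests. Memory then grows without 
bound, which is why the worker recycling below matters:

    # config.ru with GC disabled (heap grows until the worker is recycled)
    GC.disable
    $0 = "master"
    ARGV << "--rack"
    ARGV << "--confdir" << "/etc/puppet"
    ARGV << "--vardir"  << "/var/lib/puppet"
    require 'puppet/util/command_line'
    run Puppet::Util::CommandLine.new.execute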

This worked fine on our test machines, but the only way to make it 
sustainable in production is to set a very low PassengerMaxRequests, so that 
each worker is recycled before its ever-growing heap exhausts memory.  That 
wasn't a good option for us in production either, as recycling workers that 
aggressively throws away the performance gains of hiera file caching.  If 
you don't have a lot of parameterized classes, or your Puppet masters have 
capacity to spare, disabling GC and lowering PassengerMaxRequests might be 
an option, but I still think "--profile" is the better way to go.
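
For reference, PassengerMaxRequests is an Apache-side Passenger directive, 
not a config.ru setting.  A sketch of what the vhost tuning might look like 
(the numbers are purely illustrative; tune them to your catalog sizes and 
available RAM):

    # In the Puppet master vhost.  Recycle each worker after 20
    # requests, before the GC-disabled heap gets too big:
    PassengerMaxRequests 20
    # Cap concurrent workers so several bloated processes can't
    # exhaust RAM at once:
    PassengerMaxPoolSize 4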

I found a bugzilla entry on the GC bug (I don't have the number handy), but 
it basically said that the GC in ruby 1.8.7 has a flawed design and this 
can't be fixed.  So the only real way to avoid all the headaches this bug 
can cause is to repackage against ruby193 or upgrade to Red Hat 7.

Hope this helps,
John

