You might want to have a target for memory per worker that you tune toward. In Puppet 5, if you are using the default JRuby 1.7 implementation, you might start at 512 MB per JRuby instance on a small test box and go up in 256 MB increments, watching how your average catalog compile time changes. I want to say most folks using JRuby 1.7 end up between 0.5 and 1 GB per instance (though obviously some go below or above that).
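Just to illustrate the arithmetic (the per-instance size and instance count here are made-up examples, not a recommendation for your box): if you settled on, say, 768 MB per instance and ran 12 instances, you would size the heap at roughly 12 x 768 MB = 9 GB, along the lines of:

# /etc/default/puppetserver -- illustrative sizing only
# 12 JRuby instances x 768 MB per instance ~= 9 GB of heap
JAVA_ARGS="-Xms9g -Xmx9g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"

Pinning -Xms to the same value as -Xmx just avoids heap-resize pauses; the main point is that the heap tracks instances x per-instance target rather than being sized to most of the box's RAM.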
If you're using the optional JRuby 9k support in Puppet 5, or you've upgraded to Puppet 6 where 9k is the default, then you will probably want to start with a higher amount of memory (768 MB or 1 GB). With 9k you'll also want to make sure you have plenty of off-heap CodeCache and Metaspace available. I want to say with 9k I usually see folks at 0.75 to 1.5 GB of heap and up to 100 MB of CodeCache per instance.

You basically want enough memory per JRuby to hold a catalog request without triggering a GC, though the more memory you allocate, the longer the GC pauses get. Besides longer GC pauses, there are a few things to keep in mind when scaling vertically. Above 32 GB of heap the JVM can no longer use compressed object pointers, so the size of every object increases. Since every object is bigger, you need more heap per instance than you would on heaps smaller than 32 GB; figure something like 120-150% of what you'd need per instance on a smaller heap. If you're using 9k, there is also a limit on the total size of the CodeCache (2 GB, iirc) that will effectively cap how many instances you can run per box.

CPU-wise, one JRuby worker instance per [v]CPU is probably a good place to start. And regardless of box size, you should keep max-requests-per-instance disabled.

While doing all of this it will really help to have meaningful metrics reported. You might want to look at the puppet_metrics_dashboard or puppet_metrics_collector modules on the Forge for a basic setup to get started quickly.
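To show where those knobs live, here is a rough sketch for a 9k setup on a box like yours (20 cores, 128 GB RAM). The heap size, CodeCache size, and instance count are illustrative starting points to tune from, not a prescription:

# /etc/default/puppetserver -- illustrative starting point, tune against your metrics
# e.g. 16 JRuby instances x ~1 GB each => 16 GB heap, well under the 32 GB pointer threshold;
# CodeCache sized for ~100 MB per instance, under the ~2 GB cap; Metaspace stays off-heap,
# so leave physical RAM headroom for it on top of the heap
JAVA_ARGS="-Xms16g -Xmx16g -XX:ReservedCodeCacheSize=1g -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"

# /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf, inside the jruby-puppet section
max-active-instances: 16          # roughly one instance per core on a 20-core box
max-requests-per-instance: 0      # 0 = disabled, which is the default

That keeps the heap a function of the per-instance target, which ends up quite a bit smaller than a 64 GB heap and leaves the rest of the 128 GB for the OS and the off-heap JVM areas.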
HTH,
Justin

On Fri, Jun 12, 2020 at 7:04 AM Nerbolff <tutelacooldo...@gmail.com> wrote:
> Hello, community,
>
> I wonder if my setup is properly thought out. I've got 4000+ instances to
> puppetize, and several puppetservers are available.
>
> - Ubuntu 18.04.3 LTS \n \l
> - puppetserver version: 5.3.10
>
> Here is the heap allocation, based on the 128 GB installed on the machine:
> $ grep JAVA_ARGS /etc/default/puppetserver
> JAVA_ARGS="-Xms64g -Xmx64g
> -Djruby.logger.class=com.puppetlabs.jruby_utils.jruby.Slf4jLogger"
>
> $ grep MemTotal /proc/meminfo
> MemTotal: 131736260 kB
>
> Based on the 20 cores available (hyperthreading is OFF):
> $ sudo grep max-active-instances /etc/puppetlabs/puppetserver/conf.d/puppetserver.conf | grep -v defau
> max-active-instances: 16
>
> $ lscpu | egrep 'Model name|Socket|Thread|NUMA|CPU\(s\)'
> CPU(s):              20
> On-line CPU(s) list: 0-19
> Thread(s) per core:  1
> Socket(s):           2
> NUMA node(s):        2
> Model name:          Intel(R) Xeon(R) CPU E5-2630L v4 @ 1.80GHz
> NUMA node0 CPU(s):   0,2,4,6,8,10,12,14,16,18
> NUMA node1 CPU(s):   1,3,5,7,9,11,13,15,17,19
>
> Any advice/comments will be appreciated.
>
> Thanks,
> N.