Hi,

I had the same issue a couple of months ago. Catalog compilation on the
Puppet server was taking more and more time for no apparent reason. I added
more Puppet servers to share the load, but compile times on that original
"first" server never came back down.

After some troubleshooting, I noticed that one file kept growing and
growing and was eating up all the server's resources:

/opt/puppetlabs/puppet/cache/state/state.yaml

This file had become huge, filled with report file entries.
I truncated it and, voilà, everything was back in good shape.

I then added this to my puppetserver manifest:

  # let's remove growing state.yaml if it becomes too big
  tidy {'/opt/puppetlabs/puppet/cache/state/state.yaml':
    size => '10m',
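    # note: tidy's 'size' accepts b/k/m/g/t suffixes, so '10m' means 10 megabytes;
    # with no 'recurse' set, only this one file is removed once it goes over that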
  }
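
With that resource in place, the file gets removed on the next agent run
whenever it grows past 10 MB. For illustration, a minimal sketch of how such
a resource could sit in a puppetserver profile class (the name
"profile::puppetserver" is just an example, adapt it to your own layout):

  class profile::puppetserver {
    # ... the rest of the puppetserver configuration ...

    # let's remove growing state.yaml if it becomes too big
    tidy { '/opt/puppetlabs/puppet/cache/state/state.yaml':
      size => '10m',
    }
  }

If you run more than one compile master, the same resource probably belongs
on each of them.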

Have a look to see if you're suffering from the same issue.

Cheers

Yvan B

On Thu, Feb 6, 2020 at 11:51 AM Martijn Grendelman <
martijngrendel...@gmail.com> wrote:

> Hi,
>
> A question about Puppetserver performance.
>
> For quite a while now, our primary Puppet server has been suffering from
> severe slowness and high CPU usage. We have tried to tweak its settings,
> giving it more memory (Xmx = 6 GB at the moment) and toying with the
> 'max-active-instances' setting, to no avail. The server has 8 virtual cores
> and 12 GB of memory in total, to run Puppetserver, PuppetDB and PostgreSQL.
> Notably, after a restart, the performance is acceptable for a while
> (several hours, up to almost a day), but then it plummets again.
>
> We figured that the server was simply unable to cope with the load (we had
> over 270 nodes talking to it at 30-minute intervals), so we added a second
> master that now takes more than half of that load (150 nodes). That made no
> difference at all for the primary server. The secondary server, however,
> has no trouble at all dealing with the load we gave it.
>
> In the graph below, which displays catalog compilation times for both
> servers, you can see the new master in green. Its performance is high and
> very consistent. The old master is in yellow. After a restart, its compile
> times are good (not great) for a while. The first dip represents about 4
> hours, the second dip 18 hours. At some point, the catalog compilation
> times skyrocket, as does the server load. 10 seconds in the graph below
> corresponds to a server load of around 2, while 40 seconds corresponds to a
> server load of around 5. It's the Puppetserver process using the CPU.
>
> The second server, the green line, has a consistent server load of around
> 1, with 4 GB memory (2 GB for the Puppetserver JVM) and 2 cores (it's an
> EC2 t3.medium).
>
> [graph: catalog compile times for the old (yellow) and new (green) masters]
>
> If I have 110 nodes, doing two runs per hour, that each take 30 seconds to
> compile, I would still have a concurrency of less than 2 (110 x 2 x 30 s =
> 6600 s of compile time per 3600-second hour, an average concurrency of
> about 1.8), so Puppet causing a consistent load of 5 seems strange. My
> first thought would be that it's garbage collection or something like that,
> but the server has plenty of memory (the OS cache has 2 GB).
>
> Any ideas on what makes Puppetserver start using so much CPU? What can we
> try to keep it down?
>
> Thanks,
> Martijn Grendelman
>
