Julia, did you ever figure this out? I'm running into this issue as well.
--david
On Tuesday, May 22, 2012 5:28:05 AM UTC-4, Julia Smith wrote:
>
> I'm trying to use the firewall resource and it works fine for me for
> iptables.
>
> However, I'm not sure how to purge ip6tables?
>
> doing...
>
I've recently upgraded from 2.6.9 to 3.0.1 and have noticed an oddity. Our
puppet agents are configured with a runinterval of 900 and a splaylimit of
450. Since upgrading I've noticed that once or twice a day our puppet
agents simply won't run for about an hour or so. Has anyone else
experienced this?
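Back-of-the-envelope, and treating splay as an extra random delay of up to
splaylimit seconds on each run (an assumption about how splay is applied), the
gap between runs should be at most about 22.5 minutes, nowhere near an hour. A
rough Ruby illustration, not Puppet's scheduler code:

    runinterval = 900   # seconds between scheduled runs
    splaylimit  = 450   # maximum extra random delay (assumed applied per run)

    worst_case_gap = runinterval + splaylimit
    puts "runs should start at most #{worst_case_gap / 60.0} minutes apart"  # => 22.5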
Do they ever wake up on their own? I just posted about my issue where every
once in a while my agents will sleep for an hour even though they're
configured to run every 15 (+ 7.5 splay) minutes.
--david
On Wednesday, December 12, 2012 5:50:00 PM UTC-5, MasterPO wrote:
>
> I have 39 RHEL nodes
where that long select call comes from.
On Friday, December 14, 2012 3:15:25 PM UTC-5, David Mesler wrote:
>
> I've recently upgraded from 2.6.9 to 3.0.1 and have noticed an oddity. Our
> puppet agents are configured with a runinterval of 900 and a splaylimit of
> 450. Since upg
backwards. It should be "next_agent_run += new_run_interval - agent_run_interval".
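As a minimal Ruby sketch of that adjustment, using the variable names from the
line above (an illustration of the intent, not the actual daemon.rb code):

    # When the reparsed runinterval differs from the old one, shift the pending
    # run by the delta: a shorter interval pulls the next run earlier, a longer
    # one pushes it later.
    def adjust_next_run(next_agent_run, agent_run_interval, new_run_interval)
      next_agent_run + (new_run_interval - agent_run_interval)
    end

    # e.g. a run scheduled at t=2000, with the interval dropping from 1800 to 900,
    # moves to t=1100, i.e. 900 seconds earlier.
    adjust_next_run(2000, 1800, 900)  # => 1100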
On Tuesday, December 18, 2012 7:00:42 PM UTC-5, David Mesler wrote:
>
> I've noticed when I strace a puppet agent that has failed to run after its
> 900 second runinterval, it's blocking on a really long select call.
end
On Thursday, December 20, 2012 4:09:09 PM UTC-5, David Mesler wrote:
>
> I’ve added some debugging messages to run_event_loop and figured out what
> was going on. I’m going to reference line numbers in
> https://github.com/puppetlabs/puppet/blob/master/lib/puppet/daemo
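Tying those observations together: the daemon's loop sleeps until the next
scheduled event, so a next_agent_run pushed too far into the future shows up in
strace as one very long select. A rough Ruby sketch of a loop with that shape
(illustration only, not the actual run_event_loop):

    agent_run_interval = 900
    next_agent_run     = Time.now.to_i + agent_run_interval

    loop do
      timeout = next_agent_run - Time.now.to_i
      IO.select(nil, nil, nil, timeout) if timeout > 0  # blocks in select(2) for `timeout` seconds
      puts "agent run at #{Time.now}"                   # stand-in for the actual agent run
      next_agent_run += agent_run_interval
    end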
Hello, I'm currently trying to deploy puppetdb to my environment but I'm
having difficulties and am unsure how to proceed.
I have 1300+ nodes checking in at 15 minute intervals (3.7 million
resources in the population). The load is spread across 6 puppet masters. I
requisitioned what I thoug
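Back-of-the-envelope arithmetic from those numbers (illustration only):

    nodes         = 1300
    runinterval_s = 15 * 60
    resources     = 3_700_000

    puts "check-ins per second:   %.2f" % (nodes.to_f / runinterval_s)  # ~1.44
    puts "avg resources per node: %d"   % (resources / nodes)           # ~2846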
> DROP INDEX idx_catalog_resources_tags_gin;
>
> It is easily restored if it doesn't help ... but may take some time to
> build:
>
> CREATE INDEX idx_catalog_resources_tags_gin
> ON catalog_resources
> USING gin
> (tags COLLATE pg_catalog."default");
>
Resource duplication is 98.7%, catalog duplication is 1.5%.
On Tuesday, October 29, 2013 9:06:37 AM UTC-4, Ken Barber wrote:
>
> Hmm.
>
> > I reconfigured postgres based on the recommendations from pgtune and
> your
> > document. I still had a lot of agent timeouts and eventually after
> runn
ome more help troubleshooting the output,
> head over to #puppet on IRC [3] and one of the PuppetDB folks can help you
> out.
>
>
> 1 - https://projects.puppetlabs.com/issues/22977
> 2 - https://docs.puppetlabs.com/puppetdb/1.5/api/query/v3/catalogs.html
> 3 - http://pro
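One way to compare: pull the catalogs for two similar nodes from the v3
catalogs endpoint (link 2 above) and diff them to see which resources differ
per node and per run. A hedged Ruby sketch; the hostname, port, and certnames
are placeholders, and a real deployment may require SSL:

    require 'net/http'
    require 'json'

    %w[node-a.example.com node-b.example.com].each do |certname|
      uri  = URI("http://puppetdb.example.com:8080/v3/catalogs/#{certname}")
      body = Net::HTTP.get(uri)   # cleartext port; adjust for SSL setups
      File.write("#{certname}.json", JSON.pretty_generate(JSON.parse(body)))
    end
    # `diff node-a.example.com.json node-b.example.com.json` then shows what
    # keeps the catalogs from deduplicating.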
Currently I use the common import "nodes/*" method. With that I've always
had to restart the puppet master processes in order to process new node
files. Will that still be necessary with the new manifest directory
behavior?
I'm having an issue updating from 1.6.2.
2014-04-22 17:00:33,043 INFO [main] [cli.services] PuppetDB version 1.6.3
2014-04-22 17:00:33,124 ERROR [main] [scf.migrate] Caught SQLException
during migration
java.sql.BatchUpdateException: Batch entry 0 CREATE TABLE certnames (name
TEXT PRIMARY KEY)
> migrations table
> look like? Here is an example of how to get a hold of that info:
> https://gist.github.com/kbarber/11196805
>
> ken.
>
>
> On Tue, Apr 22, 2014 at 11:10 PM, David Mesler wrote:
> > I'm having an issue updating from 1.6.2.