1, 2018 at 5:52:10 PM UTC-5, Matt Wise wrote:
>>
>> *Puppet Agent: 5.3.2*
>> *Puppet Server: 5.1.4 - Packaged in Docker, running on Amazon ECS*
>>
>
> I'm running a docker-compose based puppet setup, and had the same
> problem. Short version was to increa
*Puppet Agent: 5.3.2*
*Puppet Server: 5.1.4 - Packaged in Docker, running on Amazon ECS*
So we've recently started rolling over from our ancient Puppet 3.x system
to a new Puppet 5.x service. The new service consists of a PuppetServer
Docker Image (5.1.4) running in Amazon ECS, and our hosts booti
e've launched in 3 years), and had no problems with the model.
We'll be adding some additional features to the API to support things
like automatic node deregistration in PuppetDB as well.
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Fri, Dec 12, 2014 at 10:40 AM, Martijn wrote:
... because
the provider/types get parsed regardless of whether or not we 'include
firewall').
Matt Wise
Sr. Systems Architect
Nextdoor.com
puppetdb?) to
purge themselves when they're being terminated?
Matt Wise
Sr. Systems Architect
Nextdoor.com
+1
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Mon, Dec 8, 2014 at 9:34 AM, Darin Perusich wrote:
> On Mon, Dec 8, 2014 at 5:01 AM, Ken Barber wrote:
> >> We have entirely-gem based Puppet masters (no Ubuntu packages installing
> >> Puppet)... we're trying to ad
Thanks for that, Ken... This morning I found a gem, 'md-puppetdb-terminus',
that someone has published, and it works perfectly, thankfully.
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Mon, Dec 8, 2014 at 2:01 AM, Ken Barber wrote:
> > We have entirely-gem based Puppet masters (no
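Installing that on a gem-only master boils down to adding the terminus gem mentioned above alongside puppet itself. A minimal Gemfile sketch, with the source line and version pin as placeholders rather than details from this thread:

# Gemfile -- sketch for a gem-only master using the terminus gem mentioned
# above; the version constraint is illustrative only.
source 'https://rubygems.org'

gem 'puppet', '~> 3.7'
gem 'md-puppetdb-terminus'

followed by a bundle install on each master.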
e're fixed now .. so I'll stop griping. :)
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Wed, Aug 27, 2014 at 3:51 PM, Kylo Ginsberg wrote:
> On Tue, Aug 26, 2014 at 11:57 PM, Daniele Sluijters <
> daniele.sluijt...@gmail.com> wrote:
>
> Hey,
>>
>> I
Facter.
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Wed, Aug 27, 2014 at 5:49 AM, Konrad Scherer <
konrad.sche...@windriver.com> wrote:
> On 08/26/2014 04:42 PM, Will Hopper wrote:
>
>> Hi, Mark!
>>
>>
>> Thanks for raising your concerns on this. Thi
The log shows the remote connecting IP -- but the IP is the ELB in front of
our puppet servers. Unfortunately because we're doing pure TCP-passthrough,
ELB logging itself is not useful either in this case. :/
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Mon, Aug 25, 2014 at 2:08 PM,
Comments inline
Matt Wise
Sr. Systems Architect
Nextdoor.com
On Mon, Aug 25, 2014 at 6:55 AM, jcbollinger
wrote:
>
>
> On Saturday, August 23, 2014 12:46:59 PM UTC-5, Matt W wrote:
>>
>> Will,
>> Thanks for the response. I know it's a bit of a unique model -- but
hard to tell where the requests for the node
information are coming from. That said, it feels odd that the puppet master
itself would reach out to its own Node API to get node information, rather
than just using the information passed in for the catalog request.
Matt Wise
Sr. Systems Arch
ested in knowing is why the
puppet-agents are pulling DOWN their "node information" from the puppet
masters? Is it possible that they do an upload of node information, then
ask for that information back, then somehow use the downloaded information
for their catalog request? I could see some
duction/node/nsp_node_prod? HTTP/1.1" 200 13834 "-" "-" 0.000
So, I have two questions ..
1. What is the purpose of calling the Node API? Is the agent doing this?
Why?
2. Is it possible that if an agent called the node api and got "its own
node information" that w
s a dpkg
lock in place, rather than outright failing? Overall it's just a nuisance,
but we must get 3-5 of these reports a day..
Matt Wise
Sr. Systems Architect
Nextdoor.com
of the guys on our team built a recursion-loop
in puppet that handles this fairly gracefully with standard package
resources. We have it working and have unit tests for it ... but we're
going to spend a week or two with it before we post it publicly on our
Engineering blog (engblog.nextdoor
Thanks... I ended up with this:
> #
> # my_puppet_doc.rb
> #
>
> require 'puppet/util/rdoc'
>
> module Puppet::Parser::Functions
>   newfunction(:my_puppet_doc, :type => :rvalue, :doc => <<-EOS
>
>     This function returns the 'puppet doc' header from the module
>     that called it. Usage:
>
>       $doc
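The message is cut off by the archive right after the doc string, so the original function body is lost. Below is a minimal sketch of how such a function could be finished; the module-name argument, the use of Puppet[:modulepath] (pre-directory-environments), and the plain comment scraping are guesses for illustration, not the original code:

# my_puppet_doc.rb -- hypothetical completion; everything below the doc
# string is a guess, since the original body was not preserved.
module Puppet::Parser::Functions
  newfunction(:my_puppet_doc, :type => :rvalue, :doc => <<-EOS
    Returns the leading comment ("puppet doc") header from the named
    module's init.pp. Usage:

      $doc = my_puppet_doc('mymodule')
    EOS
  ) do |args|
    modname = args[0]
    header  = ''
    # Look through every directory on the modulepath for the module's init.pp.
    Puppet[:modulepath].split(File::PATH_SEPARATOR).each do |dir|
      init = File.join(dir, modname, 'manifests', 'init.pp')
      next unless File.exist?(init)
      # Collect the leading '#' comment lines, where module docs usually live.
      lines = []
      File.readlines(init).each do |line|
        break unless line =~ /^\s*#/
        lines << line.sub(/^\s*#\s?/, '')
      end
      header = lines.join
      break
    end
    header
  end
end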
On Feb 22, 2013, at 10:35 AM, Matt Wise wrote:
> Yeah, this is interesting... it will essentially report where the template
> file came from. It doesn't get me the path name to the manifest that called
> it, which is what I'm looking for.
>
> On Feb 22, 2013, at 10
Yeah, this is interesting... it will essentially report where the template file
came from. It doesn't get me the path name to the manifest that called it,
which is what I'm looking for.
On Feb 22, 2013, at 10:24 AM, Eric Sorenson
wrote:
> Jordan Sissel wrote up a little thing to do this:
>
I've got a few puppet servers running behind Nginx, load balanced with an ELB.
I occasionally see this error in bursts..
>
> Tue Jul 24 09:41:23 + 2012 Puppet (err): Could not retrieve catalog from
> remote server: end of file reached
> Tue Jul 24 09:41:24 + 2012 Puppet (err): Could not
Is anybody using mod_cache/mod_disk_cache with Puppet? I found a post talking
about it here (http://paperairoplane.net/?p=380) and I tried to implement it ..
but I found that nothing was being cached. Near as I can tell, Apache refuses
to cache any URL that has a query-string attached to it:
(
I'm looking for a bit of best practices here. Our puppet environment
up until today has been owned and operated by IT Operations only. We've had a
single 'production' environment and our code has been managed in a local
GitHub::FI install. We have ~14,000 lines of code in our PP files. We're try
That's an interesting one for a few points.. how is the uniqueid generated?
On May 12, 2011, at 6:15 PM, Larry Ludwig wrote:
> 4)
>
> reference the file via the facter 'uniqueid'
>
>
>
Just a quick question about the Puppet ACL system.. If "hostA" gets a catalog
that says "download puppet:///passwd", I assume that hostA can always receive
puppet:///passwd. However, what about hostB? Can hostB make an arbitrary call
to the puppet master requesting "puppet:///passwd" even if its
unds like now I have to worry about certs
> eventually expiring and regenerate/sign them to keep nodes happy?
>
> Seems Trevor suggests increasing TTL. How can I do this if I wanted
> to?
>
> Thanks,
> Jake
>
> On Apr 28, 9:30 am, Matt Wise wrote:
>> Unfortuna
Unfortunately, this is still a 'missing feature' of Puppet, IMO. I applaud
Foreman for adding it as functionality in their own code, though. For our
situation, we ended up writing our own CGI script on the Puppet CA servers as
well as a client-side script that runs periodically on the clients to v
($client_class, $certname)
> }
>
> If you need to allow the client to set it later on, you can
> either manually update the server side settings or just write a
> wrapper for puppet cert --clean that will remove the ENC or extlookup
> data for this client.
>
>
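A minimal sketch of that wrapper idea, assuming the ENC keeps one YAML file per certname under /etc/puppet/enc; the script name and that path are made up for illustration:

#!/usr/bin/env ruby
# clean_node.rb -- hypothetical wrapper: clean a node's certificate and drop
# its per-node ENC data so the certname can be re-registered cleanly.

certname = ARGV[0] or abort 'usage: clean_node.rb <certname>'

# Revoke and remove the certificate on the CA master.
system('puppet', 'cert', '--clean', certname) or abort 'puppet cert --clean failed'

# Remove the node's ENC data file, if present (the path is an assumption).
enc_file = "/etc/puppet/enc/#{certname}.yaml"
File.delete(enc_file) if File.exist?(enc_file)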
tes..
—Matt
On Apr 26, 2011, at 6:50 AM, Matt Wise wrote:
> :) This is most definitely a hack. The issue is that once you start using
> Puppet to push out secure-data that HostA might need, but HostB should never
> be able to get — you run into this problem. If HostB is broken i
through these
fields and determine if they're valid. I.e., perhaps in auth.conf? Or maybe
there's a way to use a 'prerun' command in puppet.conf that I can feed the
client's certificate to? Any thoughts there?
—Matt
On Apr 26, 2011, at 2:54 AM, Jeff McCune wrote:
> On Tue, Apr 26, 2011 a
I'm working out some security issues here and wanted to throw something out
there... I'll be digging in tonight to see whether something like this is
possible, so I'd appreciate quick feedback if anyone happens to know. Imagine
a scenario where our individual hosts actually
19, 2011, at 8:14 AM, Matt Wise wrote:
> Ok, for what it's worth, I think I solved this a while back.. but I ran into
> it again the other day, and couldn't remember the fix. I found this thread
> again while searching around — so I figure I should update it so that
> everyon
r rebuilding its DB files from scratch seems to solve this problem.
—Matt
On Mar 16, 2011, at 8:46 PM, Daniel Pittman wrote:
> On Wed, Mar 16, 2011 at 20:16, Matt Wise wrote:
>
>> I've got a handful of nodes (3?) out of about 400 that are giving me
>> grief... puppet
I've got a handful of nodes (3?) out of about 400 that are giving me grief...
puppet will run either manually or in service mode; however, in service mode
the puppet process dies after an hour or so. I got an strace of the
failure:
rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0
rt_sigprocm
I'm not sure if this made it out to the list or not ... I don't see it on the
web page, or in my email. Re-sending just in case. Sorry for the spam.
On Feb 7, 2011, at 2:22 PM, Matt Wise wrote:
> I've got a 4-server puppetmaster farm setup.. one server (master100) is the
&g
I'm not sure if this made it to the mailing list or not; I didn't see it go
out, and don't see it on the web page...
On Feb 7, 2011, at 7:09 AM, Matt Wise wrote:
> I'm working on a system for auto-resigning certificates for our clients and
> I've basically got
I've got a 4-server puppetmaster farm setup.. one server (master100) is the ca
master, all the others are just compile-time hosts. I was experimenting with
the facts_terminus = rest options and the inventory service, and found that as
soon as I turned on facts_terminus = rest on say master101, t
I'm working on a system for auto-resigning certificates for our clients and
I've basically got it working .. but I notice that Puppet uses an inventory
file and a serial # file that seem to be formatted differently from what the openssl
toolkit uses? The serial number file that puppet generates has a
We use nsscache because nscd is so unreliable. Nsscache is simple enough that
it works, and it works pretty well. As Michael said, without it your system is
sending LDAP queries for almost every operation that uses getpw/getuser.
We do not see random LDAP failures from other processes on our sy
:10:23PM -0700, Matt Wise wrote:
>>
>> On the systems we use 'files db ldap' as our nsswitch.conf priority,
>> and 'db' is a local copy of the ldap data, refreshed with 'nsscache' on a
>> regular basis. Looking up a user should never fail and it doesn
We have an environment where we have to place some files on systems owned by
'ldap' users... that is, users that are not local, but are held in LDAP. We've
done everything we can to stabilize our LDAP environment, but we still run into
an issue where hosts randomly pop out failures like:
err
g each run, and dumping it out in a
hash format of some kind is the best thing for our use-case.
—Matt
On Oct 22, 2010, at 9:50 AM, Dennis Hoppe wrote:
> Hello Richard,
>
> Am 22.10.2010 02:41, schrieb Richard Crowley:
>> On Thu, Oct 21, 2010 at 5:17 PM, Matt Wise wrote:
>>&
I have a scenario where I'd like to pull in a hash table from an external file
(really, a generate() function.. but for testing purposes, a file will do)...
is there any way to do that?
—Matt
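One common way to handle this is a small custom parser function that reads the external file and hands back the parsed structure. A minimal sketch, assuming the data is kept as YAML; the function name load_hashfile and the example path are inventions for illustration, not something from this thread:

# load_hashfile.rb -- hypothetical example, not from the original thread.
require 'yaml'

module Puppet::Parser::Functions
  newfunction(:load_hashfile, :type => :rvalue, :doc => <<-EOS
    Reads a YAML file and returns whatever structure it contains, so a
    manifest can do:

      $table = load_hashfile('/etc/puppet/data/mytable.yaml')
    EOS
  ) do |args|
    path = args[0]
    raise Puppet::ParseError, "load_hashfile: #{path} not found" unless File.exist?(path)
    # For this use-case the file holds a hash, which YAML parses directly.
    YAML.load_file(path)
  end
end

If the data really comes from generate(), the same parsing can be applied to that command's output instead of a file.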
I'd really like to see puppet-dashboard do this dynamically -- show you these
graphs by pointing puppet-dashboard at a local copy of your puppet configs...
Thoughts?
On Oct 20, 2010, at 11:39 AM, Mohit Chawla wrote:
> You can do that by enabling graphs to be generated, in puppet.conf or as an
> a