I had the same issue too, and I had to make a few changes to my virtual host
to get it working. Try changing your Options to None and confirm that your
PassengerRoot and module path are correct.
I'm attaching my working virtualhost for you to compare:
LoadModule passenger_module /usr/lib/apache2/modules/mod
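In case the attachment doesn't come through, here is a minimal sketch of that
kind of Passenger vhost; the paths, PassengerRoot and server name below are
generic placeholders, not necessarily my exact values:

# Minimal Passenger vhost sketch -- paths and names are placeholders
LoadModule passenger_module /usr/lib/apache2/modules/mod_passenger.so
PassengerRoot /usr   # whatever your passenger package reports

<VirtualHost *:8140>
    ServerName puppet.example.com

    SSLEngine on
    SSLCertificateFile    /var/lib/puppet/ssl/certs/puppet.example.com.pem
    SSLCertificateKeyFile /var/lib/puppet/ssl/private_keys/puppet.example.com.pem
    SSLCACertificateFile  /var/lib/puppet/ssl/certs/ca.pem
    SSLVerifyClient optional
    SSLOptions +StdEnvVars

    DocumentRoot /usr/share/puppet/rack/puppetmasterd/public/
    <Directory /usr/share/puppet/rack/puppetmasterd/>
        Options None
        AllowOverride None
        Order allow,deny
        Allow from all
    </Directory>
</VirtualHost>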
Hi guys.
I don't see the latest puppet 3.0 on the puppetlabs debian repository for
lenny, and the latest puppet dashboard is missing as well.
Are you no longer adding new versions for the deprecated Debian Lenny?
Can I grab the puppet 3.0 agent from squeeze to use on Lenny?
Regards,
Felipe
I'm trying to use the environment parameter on Exec, but it is not working.
Any idea what's wrong?
exec { 'test':
  path        => '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
  environment => ['HOME=test', 'HOME2=test2'],
  command     => "echo $HOME > /tmp/key",
}
> ...to do to get this dependency met?
>
> Thanks for your help,
>
> Christian
>
> On Tuesday, October 9, 2012 12:29:22 PM UTC-6, Felipe Salum wrote:
>>
>> Hi guys.
>>
>> I don't see the latest puppet 3.0 on the puppetlabs debian repository for
>>
I just answered it in another thread :)
Install the Lenny backports repository, install puppet from there to bring
in all the dependencies, and then install it again from puppetlabs. Lenny is
a mess :)
apt-get -t lenny-backports install -y puppet
apt-get -t puppetlabs install -y puppet
and the backpo
I would love to see puppet forge run the way the distribution repositories
are: modules audited, tested, and maybe fixed by PuppetLabs, and then
officially released on puppet forge under the puppetlabs account.
I see some scenarios that puppetlabs could consider:
1. puppetlabs is taking a lot of time
can make that a standard to move all my
other modules.
https://forge.puppetlabs.com/fsalum/newrelic
Regards,
Felipe
On Sunday, October 14, 2012 12:15:53 PM UTC-7, Christopher Wood wrote:
>
> On Sun, Oct 14, 2012 at 11:53:41AM -0700, Felipe Salum wrote:
> >I would love to see pup
Is this related to the same error I have when I run the puppet agent on my
nodes?
Nov 11 01:40:09 squeeze puppet-agent[8683]: Could not send report: Error
403 on SERVER: Forbidden request: puppetdb1.puppet.test(192.168.168.12)
access to /report/puppetdb1.puppet.test [save] authenticated at :6
On Nov 12, 2012 at 4:22 PM, Nick Fagerlund <
nick.fagerl...@puppetlabs.com> wrote:
>
>
> On Saturday, November 10, 2012 5:43:48 PM UTC-8, Felipe Salum wrote:
>>
>> Is this related to the same error I have when I run the puppet agent on
>> my nodes ?
>>
> Nov 11 01:4
purposes.
If I start updating auth.conf to use 'auth no' everywhere, it passes.
I don't see the problem on my production servers, so it worries me even more :)
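For reference, the relevant stock auth.conf rule is roughly this (I'm going
from memory, so check the file your package shipped); with auth enabled the
agent certificate has to match the node name in the URL:

path ~ ^/report/([^/]+)$
method save
allow $1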
On Monday, November 12, 2012 4:27:41 PM UTC-8, Felipe Salum wrote:
>
> Hi Nick.
>
> Actually this is a new environm
Hi there.
I'm setting up a Puppet 3 + PuppetDB environment with the following
architecture:
2 x puppetmaster/passenger with apache using Proxy Balance
1 x puppetdb
Following the Pro Puppet book, I set Apache on both puppetmasters to proxy
the CA requests to just 1 puppetmaster server, and any
http://docs.puppetlabs.com/puppetdb/1/using.html#using-the-inventory-service
If you are using the puppetdb backend you just need to enable the dashboard
inventory service and it will automatically get the inventory information
from puppetdb. No more need to add those mysql settings to puppet.conf for it.
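The dashboard side is just a couple of lines in its settings.yml; the
hostname and port here are assumptions for your environment:

# /usr/share/puppet-dashboard/config/settings.yml
enable_inventory_service: true
inventory_server: 'puppet.example.com'   # your puppet master
inventory_port: 8140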
Try:
:hierarchy:
- "%{certname}"
- common
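In context, that hierarchy goes into hiera.yaml; a minimal sketch, assuming
the default /etc/puppet location and a yaml backend (datadir is an
assumption, adjust to your layout):

# /etc/puppet/hiera.yaml
---
:backends:
  - yaml
:yaml:
  :datadir: /etc/puppet/hieradata
:hierarchy:
  - "%{certname}"
  - common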
Felipe
On Tuesday, November 20, 2012 8:14:10 AM UTC-8, FĂ©lix Barbeira wrote:
>
> Hi, I have a puppetmaster - agent architecture. I have a module for the
> vsftpd configuration on the agents. The configuration of the value
> 'max_per_ip' in the agen
:33:03 PM UTC-8, Felipe Salum wrote:
>
> Hi there.
>
> I'm setting up a Puppet 3 + PuppetDB environment with the following
> architecture:
>
> 2 x puppetmaster/passenger with apache using Proxy Balance
> 1 x puppetdb
>
> Following the Pro Puppet book, I set Ap
.
Felipe
On Tuesday, November 13, 2012 2:28:29 PM UTC-8, Felipe Salum wrote:
>
> I'm also having the same issue on the other locations. Not sure what's
> wrong since this is a default installation of puppet 3 with the original
> auth.conf
>
> Error:
> /Stage[main]/Pup
I had the same setup issue.
Go to your CA server and copy the puppet master's unique certname .pem from
/var/lib/puppet/ssl/{certs,private_keys}/ to both of your puppet master
workers and restart apache.
Also make sure to follow this:
http://docs.puppetlabs.com/guides/scaling_multiple_masters.html
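Something along these lines, run on the CA server; the shared certname and
the worker hostnames are hypothetical, adjust to yours:

# certname and hostnames below are placeholders
for h in worker1 worker2; do
  scp /var/lib/puppet/ssl/certs/puppet.example.com.pem \
      $h:/var/lib/puppet/ssl/certs/
  scp /var/lib/puppet/ssl/private_keys/puppet.example.com.pem \
      $h:/var/lib/puppet/ssl/private_keys/
  ssh $h service apache2 restart
done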
> But could somebody help me understand why each master should have the ca
> server's private key?
> How exactly does this authentication process work?
>
> On Thursday, November 29, 2012 11:55:08 PM UTC+5:30, Felipe Salum wrote:
>>
>> I had the same setup issue.
>>
>>
On Apache/Passenger I have set a few headers:
RequestHeader set X-SSL-Subject %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-DN %{SSL_CLIENT_S_DN}e
RequestHeader set X-Client-Verify %{SSL_CLIENT_VERIFY}e
And then updated puppet.conf as below:
[master]
ssl_client_header = HTTP_X_SSL_SUBJECT
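If you set the X-Client-Verify header as above, the matching puppet.conf
setting should be:

ssl_client_verify_header = HTTP_X_CLIENT_VERIFY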
I have done something similar, installing puppet master, puppetdb and a few
nodes for testing, everything via vagrant.
https://github.com/fsalum/vagrant-puppet
Felipe
On Tuesday, January 29, 2013 6:42:42 AM UTC-8, blalor wrote:
>
> I took an hour this morning to document how I use Vagrant and
I was searching about this some days ago and I found somewhere that the
'puppet node deactivate' command would do the trick.
I haven't tried it yet, so I can't confirm it works.
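It would be something like this, run from the master (the node name here is
hypothetical):

puppet node deactivate web01.example.com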
On Thursday, January 31, 2013 12:55:54 PM UTC-8, Chris Price wrote:
>
> David,
>
> Are you using PuppetDB? There was a thr
I had the same problems some days ago and here are my notes to install it:
yum install -y gcc ruby-devel libxml2 libxml2-devel libxslt libxslt-devel make
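If I remember right, those packages are the build dependencies for native
gems like nokogiri, so that afterwards something along these lines compiles
cleanly (the exact gem depends on what you are installing):

gem install nokogiri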
When using fog 0.7.2 I got a lot of warnings while using puppet node_aws,
so I updated to the latest fog and everything is working correctly now.
Upgrade to the latest fog and the errors are gone.
On Friday, February 1, 2013 1:34:50 AM UTC-8, PuppetUser wrote:
>
> Hi,
>
> I have installed fog 0.7.2 on linux amazon ami. Created ~/.fog file and
> gave AWS credentials in the file.
> For verification I used
>
> >ruby -rubygems -e 'require "fog
Are you having the problem after running 'puppet agent --test' a few times,
or does it just happen without any manual run?
I was having the same problem some days ago, in my case because I was
running 'puppet agent --test'; more specifically, the --show_diff that is
used by --test was automatically
I wonder if PuppetLabs will work with Amazon to try to add Puppet as an
option to OpsWorks as well?
I don't think people using Puppet with AWS in a stable fashion would try to
move to OpsWorks and migrate everything to Chef, but new customers/startups
would think twice about choosing Puppet if th
I use an in-house bootstrap script (sketched below) that:
- launches an ec2 instance
- sets the hostname
- adds it to route53
- installs puppet
- runs puppet agent --server my-puppetmaster
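A rough sketch of that kind of bootstrap, assuming the modern AWS CLI rather
than whatever tooling I actually used; the AMI, zone id, hostnames and
master name are all placeholders, not my real script:

#!/bin/bash
set -e
HOSTNAME="app01.example.com"
MASTER="my-puppetmaster.example.com"

# launch the instance and wait for it (ami/type are placeholders)
INSTANCE_ID=$(aws ec2 run-instances --image-id ami-12345678 \
  --instance-type m1.small --query 'Instances[0].InstanceId' --output text)
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"
IP=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)

# register it in route53 (zone id is a placeholder)
cat > /tmp/rrset.json <<EOF
{"Changes": [{"Action": "UPSERT", "ResourceRecordSet": {
  "Name": "$HOSTNAME", "Type": "A", "TTL": 300,
  "ResourceRecords": [{"Value": "$IP"}]}}]}
EOF
aws route53 change-resource-record-sets --hosted-zone-id Z123EXAMPLE \
  --change-batch file:///tmp/rrset.json

# on the instance: set hostname, install puppet, first run
ssh "ec2-user@$IP" "sudo hostname $HOSTNAME && \
  sudo yum install -y puppet && \
  sudo puppet agent --server $MASTER --test"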
My puppetmaster only accepts requests from ec2 instances from my aws
account and auto-signs the certificate, also installs puppet.con
Hi guys,
I'm trying to understand the error below, but I'm stuck on it.
root@puppet1:~# puppet module install ispavailability-logstash --modulepath
/etc/puppet/modules-0/
Notice: Preparing to install into /etc/puppet/modules-0 ...
Notice: Downloading from https://forge.puppetlabs.com ...
Error:
Thanks Ryan.
After a few upgrades (inifile, puppetdb, postgresql and a few others) I got
my dependency issue fixed :)
Regards,
Felipe
On Mon, Mar 25, 2013 at 4:17 PM, Ryan Coleman wrote:
> Hi Felipe,
>
> Comment in-line.
>
>
> On Mon, Mar 25, 2013 at 4:06 PM, Felipe Salum wrote:
Right time for this :)
If I'm running this outside the localhost, do I need any specific auth
permission?
Regards,
Felipe
On Wednesday, March 27, 2013 7:21:09 AM UTC-7, Ken Barber wrote:
>
> Here is a better working example as a gist, with what you should see
> in the puppetdb.log if it was s
Great, thanks Deepak.
On Thursday, March 28, 2013 9:25:49 AM UTC-7, Deepak Giridharagopal wrote:
>
> On Thu, Mar 28, 2013 at 10:20 AM, Felipe Salum wrote:
>
>> Right time for this :)
>>
>> If I'm running this outside the localhost, do I need any spe
Ken,
Looks like the new parameters (report_ttl, node_ttl, node_purge_ttl) are
not working. They are not being passed by the puppetdb::server class to the
database_ini class, nor from the main puppetdb class to the
puppetdb::server class.
They are also not being used in database_ini, as it is set to
I did an autoscaling pilot some months ago and I needed something similar
to this. For the pilot to work (far from the best solution), I was
triggering a puppet run on the server (DAS in your case) from the newly
launched autoscaling instance, using mco as you noted, if you have mco
working in your
I'm basically doing the same.
I have replicated my production environment in Vagrant; that means the
puppetmaster/puppetdb setup as well as the app, db, cache and api layers are
identical to production in the vagrant setup.
After all tests are done in Vagrant, destroying and re-creating the VMs
from
Hi guys,
After a disk space issue, puppet is complaining when agents are running.
# puppet cert list --all
Error: header too long
I think my certificates got corrupted, but the /var/lib/puppet/ssl
directory seems to be ok.
Have you seen this before?
Regards,
Felipe
Is Puppetlabs planning some easy solution for this?
The real problem I see is that you can't separate the Puppet CA from the
Puppet Master. So even though you can have multiple puppet masters, the CA
must run on one of them, and if that server goes down your multiple puppet
master setup is screwed.
On Wednesday, May 8, 2013 4:58:21 PM UTC-7, John Warburton wrote:
>
> On 9 May 2013 05:57, Felipe Salum wrote:
>
>> Is Puppetlabs planning some easy solution for this ?
>>
>
> I run 12 puppet servers around the world. They work in a multiple puppet
> master solution where any
I'm trying to make a manifest to auto-setup Puppet high availability, but
it is a chicken-and-egg issue here. As for your secondary/tertiary/etc.
puppetmasters, you need to copy the private key and certificate used by
your puppet1 server in order for them to accept the requests coming from
puppet.yourco
If you don't need to back up your puppetca, how do you carry your signed
client certificates and revocation list over to a standby puppetca server
in case of failure of the production puppetca?
On Tue, May 14, 2013 at 8:04 AM, Mason Turner wrote:
> We have a similar setup, minus the SRV records
Hi guys,
After upgrading my puppetmaster from 3.1.1 to 3.2.1, I'm getting the error
below:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER:
Failed to parse template dashboard/passenger-vhost.erb:
Filepath: /usr/lib/ruby/vendor_ruby/puppet/parser/templatewrapper.rb
L
> ...the same problem and
> found that my problem was corrupt certificate requests were generated while
> the disk was full. After I cleaned up the disk, I had to clear out the
> /var/lib/puppet/ssl/ca/requests directory and then everything worked fine.
>
> Jason
>
> On Tuesday, Ma
Hi guys,
The command below was working when my puppet dashboard mysql database was
running on the same machine as the puppet master.
rake RAILS_ENV=production -f /usr/share/puppet-dashboard/Rakefile node:del
name=my-app-server
However, now that I moved the mysql database to RDS, it doesn't work anymore.
I had a friend helping me to debug, and it looks like it is taking forever
to delete the entries from the resource status table (a lot of entries).
Is that something we can improve?
On Wednesday, July 24, 2013 11:42:26 AM UTC-7, Felipe Salum wrote:
>
> Hi guys,
>
> The command below was
I actually do it, keeping 2 weeks:
rake RAILS_ENV=production reports:prune upto=2 unit=wk
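To keep reports from piling up, I'd run that nightly from cron, something
like this (the schedule, user and paths are assumptions):

# /etc/cron.d/dashboard-prune
0 2 * * * puppet-dashboard cd /usr/share/puppet-dashboard && rake RAILS_ENV=production reports:prune upto=2 unit=wk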
On Wed, Jul 24, 2013 at 2:03 PM, Ramin K wrote:
> On 7/24/2013 1:49 PM, Felipe Salum wrote:
>
>> I had a friend helping me to debug and looks like it is taking forever
>> to delete the ent
> ...of the records it plans to delete into
> memory. This can be quite large and is another reason to break it into
> smaller chunks.
>
> Ramin
>
>
> On 7/24/2013 2:08 PM, Felipe Salum wrote:
>
>> I actually do for 2 weeks.
>>
>> rake RAILS_ENV=production reports:pru
Hi guys.
I'm trying to find the root cause of the failed report tasks in my puppet
dashboard that happen every night. It works without any errors all day long,
but when I connect in the morning to check, it has around 3000 failed tasks
from the night before.
Any advice on what could be impacting puppet das
I checked that, and there is nothing there that could impact this. But I'm
probably missing something, since it happens every night without exception.
On Tue, Aug 27, 2013 at 10:15 AM, Peter Bukowinski wrote:
> Do you have any daily cron jobs that occur overnight?
>
> --
> Peter
>
> On Aug 27, 2
Can you paste your /etc/httpd/conf.d/puppetmaster.conf?
On Wednesday, October 2, 2013 5:35:58 AM UTC-7, Pete Hartman wrote:
>
> I do not have responsibility for the F5's and I'm not sure what my
> networking team would be willing to do in terms of custom rules no
> matter how simple.
>
> The u
This is how I do it here:
https://github.com/fsalum/vagrant-puppet/blob/master/puppetmaster/templates/etc/apache2/sites-available/puppetmaster_balancer.erb
https://github.com/fsalum/vagrant-puppet/blob/master/puppetmaster/files/etc/apache2/sites-available/puppetmaster_ca
https://github.com/fsalum/v
Hi guys.
Wondering if there was a reason to change the location of config.ru
between puppet 3.3.x and 3.4.x?
3.3.x: /usr/share/puppet/ext/rack/files/config.ru
3.4.x: /usr/share/puppet/ext/rack/config.ru
It just broke my auto setup of puppetmaster, since I run an exec to copy the
file
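A sketch of how I'd make that copy resilient to both layouts; the
destination directory is an assumption about your rack setup:

# Copies config.ru from whichever location this puppet version ships it in.
exec { 'copy-config-ru':
  command  => 'cp /usr/share/puppet/ext/rack/files/config.ru /usr/share/puppet/rack/puppetmasterd/config.ru 2>/dev/null || cp /usr/share/puppet/ext/rack/config.ru /usr/share/puppet/rack/puppetmasterd/config.ru',
  path     => '/bin:/usr/bin',
  provider => shell,
  creates  => '/usr/share/puppet/rack/puppetmasterd/config.ru',
}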
I searched around about this but couldn't find out whether puppetlabs is
working on a fix or not.
facter running in AWS VPC doesn't show ec2 facts:
# facter --version
2.0.1
# facter -p virtual
xenhvm
# facter -p | grep ec2
Returns nothing.
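For what it's worth, the metadata endpoint itself is easy to check with a
quick curl, since that is what facter reads under the hood:

# should print the instance id if the metadata service is reachable
curl -s http://169.254.169.254/latest/meta-data/instance-id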
Thanks,
Felipe
I work around this by using a cloudinit script during the autoscale
instance launch that gets the instance-id of the instance, renames the
hostname and updates /etc/hosts before running puppet.
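Roughly this kind of thing, as a user-data shell script; the domain is a
placeholder:

#!/bin/bash
# cloud-init user-data sketch -- the domain below is a placeholder
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
LOCAL_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
HOSTNAME="${INSTANCE_ID}.example.com"

hostname "$HOSTNAME"
echo "$HOSTNAME" > /etc/hostname
echo "$LOCAL_IP $HOSTNAME ${INSTANCE_ID}" >> /etc/hosts

puppet agent --server my-puppetmaster --test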
On Friday, May 23, 2014 10:54:04 PM UTC-7, Bad Tux wrote:
>
> So I'm using Amazon's amazing EC2 autoscal
wrote:
> On 27.05.2014 11:06, Felipe Salum wrote:
>
>> I work around this by using a cloudinit script during the autoscale
>> instance launch that gets the instance-id of the instance, renames the
>> hostname and updates /etc/hosts before running puppet.
>>
>>
I use a different approach to clean up certificates and the node on the
puppet dashboard, but it is an ugly hack. I'm writing something in python to
read the autoscaling termination message posted to SNS->SQS and I should
have something up tonight. I will share it here and get feedback; I'm
planning to