[Puppet Users] Re: Anyone using config_version successfully?

2012-08-23 Thread Glenn Poston
I second this.  I would prefer config_version to be updated at compile 
time with each client request.
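
For anyone unfamiliar: config_version is set in puppet.conf on the master 
and names an arbitrary command that puppet runs at compile time to stamp 
the catalog.  A minimal sketch, assuming the manifests live in a git 
checkout (the paths are illustrative):

[master]
  # stamp each compiled catalog with the current manifest commit
  config_version = /usr/bin/git --git-dir=/etc/puppet/.git rev-parse HEAD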

On Monday, January 31, 2011 6:58:29 PM UTC-5, John Warburton wrote:
>
> I have tried to use config_version and failed due to limitations in the 
> way I would like to use it:
> - http://projects.puppetlabs.com/issues/3692
> - http://projects.puppetlabs.com/issues/4845
> - http://projects.puppetlabs.com/issues/5404
>
> As part of http://projects.puppetlabs.com/issues/3692, we'd like to get a 
> handle on who in the puppet community is using config_version successfully 
> as it stands
>
> Thanks
>
> John
>




[Puppet Users] How to delete previous resources from directory

2012-08-28 Thread Glenn Poston
I'm using hiera to define a list of database engine id's for a node.  I'm
using this variable to feed into a defined type to create a set of files
(upstart configs).  The problem I would like to solve is…

How do I delete files/resources that were previously created but are now
removed (without deleting files that are there by default)?

Here is an example (Please ignore syntax. I hope it's clear what I'm trying
to do.  If not, let me know).

/hiera/config.yaml
--
engines:
- 1
- 2

/modules/queue/manifests/upstart.pp
--
define queue::upstart {
  file { "/etc/init/queue-${name}.conf":
    ensure  => file,
    content => template('queue/upstart.conf.erb'),
    # The below would be nice, but is not supported:
    #purge  => true,
    #ignore => !(queue*)
  }
}

/modules/queue/manifests/init.pp
--
class queue {
  queue::upstart { hiera('engines'): }
}

Puppet apply will create the following files correctly:
/etc/init/queue-1.conf
/etc/init/queue-2.conf

Now I update the hiera config file to only include 1 engine:

/hiera/config.yaml
--
engines:
- 1

But the /etc/init/queue-2.conf file is not removed.

I can't purge the directory because there are other scripts in that
directory that I do not want to remove.
If ruby supported extended globs I could use the purge and ignore parameters
to accomplish this.

Is there a good pattern for handling this?
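
One workaround I'm considering, as a minimal sketch: keep a second hiera 
list of retired engine ids and ensure their files are absent.  The 
engines_retired key and the queue::cleanup define are illustrative names, 
not existing code:

/modules/queue/manifests/cleanup.pp
--
# Removes the upstart config for an engine id that is no longer deployed.
define queue::cleanup {
  file { "/etc/init/queue-${name}.conf":
    ensure => absent,
  }
}

/modules/queue/manifests/init.pp
--
class queue {
  queue::upstart { hiera('engines'): }
  # ids that used to be deployed; their configs are removed without
  # purging the rest of /etc/init
  queue::cleanup { hiera('engines_retired', []): }
}

The downside is having to remember to move ids from one list to the other 
when an engine is retired.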





[Puppet Users] How can I ensure that a service is started/stopped based on if it is needed or not ?

2012-09-08 Thread Glenn Poston
You could create a fact that returns true or false, then use that fact in your 
puppet code to determine whether to start the service.  I'd need more details 
to really say that's the best solution though.  Other options would be to use 
hiera, or to use a parameterized class and set those parameters in your 
ENC/node definition.
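
A minimal sketch of the fact-based approach, assuming a custom boolean fact 
named needs_foo and a service named foo (both illustrative).  Facter 1.x 
facts arrive in puppet as strings, hence the string comparison:

class foo {
  if $::needs_foo == 'true' {
    service { 'foo': ensure => running, enable => true }
  } else {
    service { 'foo': ensure => stopped, enable => false }
  }
}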




[Puppet Users] Re: Systems Provisioning

2012-09-17 Thread Glenn Poston
Similar setup here: pxe boot, etc., but we keep a pxe-booted sparse base 
image available to clone, and use a script to do the clone so that we can 
provision more quickly (saves us the pxe boot time).  The script also 
interacts with puppet dashboard to set the machine role.  We can provision a 
new machine in less than 4 minutes.




Re: [Puppet Users] Re: Systems Provisioning

2012-09-19 Thread Glenn Poston
We have one script that is called to bootstrap and provision new VMs.  There's 
another one for destroying.  These scripts are also responsible for logging 
into the puppet master (via ssh), polling for the cert to be created (puppet 
cert list) and then issuing the sign/clean commands.
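
The polling piece is roughly the following, as a sketch.  It assumes ssh 
access to the master; NODE is the new VM's certname and 'puppetmaster' 
stands in for the master's hostname:

#!/bin/bash
NODE=$1
# wait for the agent's certificate request to appear on the master
until ssh puppetmaster "puppet cert list" | grep -q "\"$NODE\""; do
  sleep 5
done
# then sign it
ssh puppetmaster "puppet cert sign $NODE"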




[Puppet Users] Staging environment

2012-09-19 Thread Glenn Poston
Use hiera.  Hiera can load a config file based on the environment.  Setting 
this up is as simple as adding a hiera hierarchy entry such as...

%{environment}.yaml

... and then creating the files production.yaml and staging.yaml that 
contain your environment-specific configs.

Install the hiera-puppet gem, then you can look up the appropriate value 
inside puppet using hiera('param_name').
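
Concretely, a minimal sketch (db_host, the hostnames, and the paths are 
illustrative):

/etc/puppet/hiera.yaml
--
:backends:
  - yaml
:yaml:
  :datadir: /hiera
:hierarchy:
  - "%{environment}"
  - common

/hiera/production.yaml
--
db_host: db.prod.example.com

/hiera/staging.yaml
--
db_host: db.staging.example.com

Then in any manifest:

$db_host = hiera('db_host')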

For an example, check out the vagrant-hiera project on github.  If you're 
unfamiliar with vagrant, you'll need to install it to run the example; it's 
a great tool for doing local puppet testing.  Even if you don't want to 
install vagrant, looking at the source of that project will show you a quick 
example of how to use hiera.




[Puppet Users] Testing for dependency loops?

2012-09-19 Thread Glenn Poston
I would suggest looking into using Vagrant for local testing before pushing 
code.  See vagrantup.com.




[Puppet Users] Re: How to group hosts?

2012-12-10 Thread Glenn Poston
We use facter-dot-d as well, but we also take advantage of 'calling_module' 
for grouping.  This param is automatically available to hiera.

Here is our hiera.yaml (role is set via facts.d):

:hierarchy:
  - %{environment}/%{fqdn}
  - %{environment}/%{role}
  - %{environment}/%{calling_module}
  - %{environment}
  - common/%{role}
  - common/%{calling_module}
  - common
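
The role fact itself is just a static facts-dot-d file dropped on each host 
at provisioning time.  A minimal sketch for Jakov's storage group below 
(the value is illustrative):

/etc/facter/facts.d/role.txt
--
role=storage_nodes

With that in place, hiera resolves common/storage_nodes.yaml for those 
hosts.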

On Thursday, December 6, 2012 12:27:51 PM UTC-5, Jakov Sosic wrote:
>
> Hi. 
>
> I'm currently using hiera in a very rudimentary way, using only perhost 
> and common. 
>
> Now, I'm trying to group my hosts a little bit, so for example web 
> servers could have their own yaml with data. Problem is I don't have an 
> idea how to group hosts? How can I say to puppet that for example hosts: 
>
> storage01 
> storage02 
> storage03 
> storage04 
>
> belong to group storage_nodes, and 
>
> web01 
> web02 
>
> belong to group web_nodes? 
>
>
> How do you do that? 
>
>
> Thank you 
>




[Puppet Users] Random Yum errors during provisioning

2013-06-28 Thread Glenn Poston
Running Amazon Linux (which is essentially CentOS 5.5).

Anyone seen random yum errors like this one?  I don't think 
it's necessarily related to Puppet, but it randomly fails my puppet runs 
and I don't know how to fix it.

Jun 28 08:41:34 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Zookeeper/Package[zookeeper]/ensure: created
Jun 28 08:41:34 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Zookeeper/File[/var/lib/zookeeper/data]/ensure: created
Jun 28 08:41:34 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Zookeeper/File[/var/lib/zookeeper/data/myid]/ensure: created
Jun 28 08:41:34 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Yum_repo::Configs/File[/etc/yum.repos.d/inin-epel.repo]/ensure: defined content as '{md5}b94171f63e31f07b8bd75444073e301c'
Jun 28 08:41:35 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Zookeeper/File[/etc/zookeeper/zookeeper-env.sh]/content: content changed '{md5}cd666c7520ce5279ddbc185512b0b177' to '{md5}5cb59b25f5e7567d94ba14b06f6e7081'
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Error: Execution of '/usr/bin/yum -d 0 -e 0 -y install daemonize' returned 1: Existing lock /var/run/yum.pid: another copy is running as pid 2502.
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Another app is currently holding the yum lock; waiting for it to exit...
Jun 28 08:41:38 ip-10-159-65-145 run_puppet:   The other application is: yum
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Memory :  40 M RSS (235 MB VSZ)
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Started: Fri Jun 28 08:41:33 2013 - 00:03 ago
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: State  : Running, pid: 2502
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Error: database disk image is malformed
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Error: /Stage[main]/Mcollective/Package[daemonize]/ensure: change from absent to present failed: Execution of '/usr/bin/yum -d 0 -e 0 -y install daemonize' returned 1: Existing lock /var/run/yum.pid: another copy is running as pid 2502.
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Another app is currently holding the yum lock; waiting for it to exit...
Jun 28 08:41:38 ip-10-159-65-145 run_puppet:   The other application is: yum
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Memory :  40 M RSS (235 MB VSZ)
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Started: Fri Jun 28 08:41:33 2013 - 00:03 ago
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: State  : Running, pid: 2502
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Error: database disk image is malformed
Jun 28 08:41:38 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Zookeeper/File[/etc/zookeeper/zoo.cfg]/content: content changed '{md5}5c543298c5572c3caf40a3d108309019' to '{md5}31db609f6601a8a02561d411e98db12b'
Jun 28 08:41:39 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Puppet/File[/usr/local/bin/run_puppet.sh]/content: content changed '{md5}4e9496313a0b4152c663defce5100af5' to '{md5}49d78e473fa2202dea13e9b195e63575'
Jun 28 08:41:39 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Yum_repo::Configs/Exec[puppet_repo]/returns: executed successfully
Jun 28 08:41:48 ip-10-159-65-145 run_puppet: Notice: /Stage[main]/Mcollective::Common/Package[mcollective-package-agent]/ensure: created

The problem does not persist.  Yum packages are installed by puppet before 
and after the errors.  A subsequent puppet run installs the previously 
skipped packages fine.

It's as if some background process creates a lock while updating the yum 
DB, but when the lock is released, the yum DB is (momentarily) still in a 
bad state.





[Puppet Users] Re: Random Yum errors during provisioning

2013-09-03 Thread Glenn Poston
Any ideas???




Re: [Puppet Users] Re: Random Yum errors during provisioning

2013-09-04 Thread Glenn Poston
We've disabled yum-updatesd on our nodes.
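
In puppet terms that's just the following sketch (yum-updatesd is the stock 
update daemon on these images):

service { 'yum-updatesd':
  ensure => stopped,
  enable => false,
}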

Thanks John,

Further investigation has shown that after any package update/install, yum 
automatically runs the following command…

/usr/bin/python /usr/bin/yum --security check-update

I'm curious if this is the issue.

Either way, it's odd that the error is 'database disk image is malformed' 
and not 'Existing lock on yum.pid'.

It's as if yum is releasing the lock before the database is ready for 
subsequent requests.

~Glenn

On Sep 4, 2013, at 9:48 AM, jcbollinger wrote:

> [...]
>
> The diagnostic message claims that it is yum itself holding the lock.  Do 
> you have an automatic package update daemon running on the affected node?

[Puppet Users] facter --timing does not show timing of external facts

2013-11-27 Thread Glenn Poston
From http://docs.puppetlabs.com/guides/custom_facts.html:


> If you are interested in finding out where any bottlenecks are, you can 
> run Facter in timing mode and it will reflect how long it takes to parse 
> your external facts: facter --timing.  The output should look similar to 
> the timing for Ruby facts, but will name external facts with their full 
> paths.  For example:
>
> $ facter --timing
> kernel: 14.81ms
> /usr/lib/facter/ext/abc.sh: 48.72ms
> /usr/lib/facter/ext/foo.sh: 32.69ms
> /usr/lib/facter/ext/full.json: 104.71ms
> /usr/lib/facter/ext/sample.txt: 0.65ms

However, from the output below you can see that while facter is in fact 
loading external facts from the script, the timing data for that script is 
not being output.


[root@fisheye-10-0-2-15 dist]# facter --version
1.7.3
[root@fisheye-10-0-2-15 dist]# ls /etc/facter/facts.d/
service_discovery.sh
[root@fisheye-10-0-2-15 dist]# /etc/facter/facts.d/service_discovery.sh
environment=test
vxml-generate_sqs=generateVXML
ivr-publish_sqs=edgeConfig
cassandra_keyspace=PureCloud
cassandra_cluster=analytics
analyticsApiSegmentIndexConsusmerMaxNumberOfMessages=1
analyticsApiSegmentIndexConsusmerWaitTimeSeconds=1
analyticsApiSegmentIndexConsusmerVisibilityTimeout=1
[root@fisheye-10-0-2-15 dist]# facter --timing
kernel: 1.58ms
lsbdistid: 0.11ms
lsbdistid: 0.10ms
operatingsystem: 5480.56ms
osfamily: 11421.73ms
macaddress: 0.07ms
macaddress: 0.06ms
macaddress: 4.51ms
hardwaremodel: 1.46ms
architecture: 2.51ms
rubysitedir: 0.08ms
memorysize_mb: 0.25ms
memorysize: 0.46ms
memorytotal: 0.58ms
netmask_eth1: 7.02ms
sshecdsakey: 0.03ms
sshecdsakey: 0.03ms
hostname: 2.34ms
blockdevice_sr0_size: 0.08ms
boardmanufacturer: 0.02ms
vlans: 0.13ms
vlans: 0.03ms
ipaddress6_lo: 6.48ms
ipaddress6_lo: 6.69ms
uniqueid: 2.49ms
operatingsystemrelease: 0.11ms
blockdevice_sr1_size: 0.02ms
uptime_seconds: 4.58ms
uptime_hours: 4.81ms
selinux: 2.50ms
augeasversion: 0.74ms
mtu_lo: 7.52ms
network_eth1: 15.86ms
id: 2.46ms
virtual: 0.04ms
virtual: 0.02ms
virtual: 23.01ms
processor0: 0.03ms
path: 0.02ms
lsbdistdescription: 0.21ms
lsbdistdescription: 0.10ms
ipaddress_eth0: 7.06ms
lsbrelease: 0.14ms
lsbrelease: 0.10ms
netmask_eth0: 6.71ms
ipaddress6: 4.57ms
ipaddress6: 5.06ms
serialnumber: 0.02ms
facterversion: 0.08ms
domain: 7.44ms
fqdn: 0.04ms
puppetversion: 207.49ms
physicalprocessorcount: 4.25ms
swapsize_mb: 0.35ms
ipaddress6_eth1: 6.42ms
ipaddress6_eth1: 6.21ms
memoryfree_mb: 0.28ms
memoryfree: 0.51ms
sshdsakey: 0.07ms
blockdevice_sda_vendor: 0.06ms
bios_version: 0.02ms
filesystems: 3.75ms
rubyversion: 0.03ms
cfkey: 0.05ms
cfkey: 0.04ms
mtu_eth1: 5.58ms
sshecdsakey: 0.05ms
sshecdsakey: 23.63ms
sshfp_ecdsa: 23.97ms
sshecdsakey: 0.09ms
sshecdsakey: 0.07ms
sshfp_ecdsa: 0.38ms
kernelrelease: 4.16ms
kernelversion: 4.50ms
boardproductname: 0.04ms
hardwareisa: 2.04ms
blockdevice_sr0_vendor: 0.11ms
macaddress_lo: 6.09ms
macaddress_lo: 6.76ms
uptime: 0.08ms
blockdevice_sr1_vendor: 0.13ms
swapfree_mb: 1.17ms
swapfree: 1.46ms
zfs_version: 0.24ms
zfs_version: 0.20ms
type: 0.05ms
network_lo: 12.34ms
processor1: 0.02ms
ipaddress6_eth0: 7.51ms
ipaddress6_eth0: 6.03ms
lsbdistrelease: 0.21ms
lsbdistrelease: 0.15ms
blockdevices: 0.06ms
manufacturer: 0.02ms
lsbdistid: 0.16ms
lsbdistid: 0.17ms
interfaces: 4.78ms
zpool_version: 0.19ms
zpool_version: 0.10ms
ps: 0.02ms
ipaddress: 4.23ms
netmask: 9.15ms
mtu_eth0: 6.52ms
sshrsakey: 0.09ms
uuid: 0.02ms
is_virtual: 0.04ms
swapsize: 0.04ms
macaddress_eth1: 6.00ms
sshfp_dsa: 1.37ms
blockdevice_sda_model: 0.09ms
bios_release_date: 0.02ms
ipaddress_lo: 6.25ms
timezone: 0.05ms
kernelmajversion: 0.06ms
boardserialnumber: 0.04ms
blockdevice_sr0_model: 0.08ms
uptime_days: 0.00ms
netmask_lo: 6.11ms
blockdevice_sr1_model: 0.08ms
network_eth0: 12.07ms
lsbdistcodename: 0.15ms
lsbdistcodename: 0.11ms
operatingsystemmajrelease: 0.04ms
processorcount: 0.16ms
lsbdistrelease: 0.15ms
lsbdistrelease: 0.13ms
lsbdistrelease: 0.69ms
lsbdistrelease: 0.16ms
lsbmajdistrelease: 1.39ms
lsbdistrelease: 0.14ms
lsbdistrelease: 0.05ms
lsbdistrelease: 0.19ms
lsbdistrelease: 0.14ms
lsbmajdistrelease: 0.77ms
macaddress_eth0: 7.75ms
productname: 0.03ms
ipaddress_eth1: 8.14ms
sshfp_rsa: 0.96ms
bios_vendor: 0.03ms
blockdevice_sda_size: 0.10ms
analytics-uri => http://analytics.inintca.com:9998
analyticsapisegmentindexconsusmermaxnumberofmessages => 1
analyticsapisegmentindexconsusmervisibilitytimeout => 1
analyticsapisegmentindexconsusmerwaittimeseconds => 1
architecture => x86_64
augeasversion => 0.9.0
bios_release_date => 12/01/2006
bios_vendor => innotek GmbH
bios_version => VirtualBox
blockdevice_sda_model => VBOX HARDDISK
blockdevice_sda_size => 214748364800
blockdevice_sda_vendor => ATA
blockdevice_sr0_model => CD-ROM
blockdevice_sr0_size => 1073741312
blockdevice_sr0_vendor => VBOX
blockdevice_sr1_model => CD-ROM
blockdevice_sr1_size => 1073741312
blockdevice_sr1_vendor => VBOX
blockdevices => sda,sr0,sr1
boardmanufacturer => Oracle Corporation
boardproductname => VirtualBox
boardserialnumber => 0
cas

[Puppet Users] external facts cause puppet apply to take inordinately longer to run

2013-11-27 Thread Glenn Poston
My external fact script takes 5s to run.

With external fact...
puppet takes 2.5m to run
facter takes 33s to run

Without external fact...
puppet takes 27s to run
facter takes 0.68s

Bottom line... there's no significant change in facter runtime when parsing 
the external fact, but the puppet runtime quadruples.

From watching the logs in real time I can see that the extra time is taken 
before puppet outputs its first response line (compilation time).  Also 
note that the compilation time that puppet reports is ~2s even though (when 
watching the output realtime) it takes 2 minutes for that line to return 
when puppet is parsing the external fact script.

Note: This script generates 36 custom facts

Should I submit a bug for this?

#Time of external fact script
[root@fisheye-10-0-2-15 manifests]# time 
/etc/facter/facts.d/service_discovery.sh
environment=test
...
service_discovery_script=ran

real 0m5.478s
user 0m0.053s
sys 0m0.111s

# Time of puppet run with external fact
[root@fisheye-10-0-2-15 manifests]# time FACTER_environment='vagrant' FACTER_role='fisheye' puppet apply --modulepath '/etc/puppet/modules:/tmp/vagrant-puppet/modules-0' site.pp
Notice: Compiled catalog for fisheye-10-0-2-15.inin.com in environment production in 2.22 seconds
Notice: Finished catalog run in 30.76 seconds

real 2m25.856s
user 0m5.124s
sys 0m3.830s

#Time of facter with external fact
[root@fisheye-10-0-2-15 manifests]# time facter
analyticsapisegmentindexconsusmerwaittimeseconds => 1
architecture => x86_64
...
uptime_hours => 0
uptime_seconds => 2529

real 0m33.587s
user 0m0.658s
sys 0m0.849s

#Removing external fact script
[root@fisheye-10-0-2-15 manifests]# rm 
/etc/facter/facts.d/service_discovery.sh
rm: remove regular file `/etc/facter/facts.d/service_discovery.sh'? y
[root@fisheye-10-0-2-15 manifests]# ls /etc/facter/facts.d/

#Time of puppet run without external fact script
[root@fisheye-10-0-2-15 manifests]# time FACTER_environment='vagrant' 
FACTER_role='fisheye' puppet apply --modulepath 
'/etc/puppet/modules:/tmp/vagrant-puppet/modules-0' site.pp
Notice: Compiled catalog for fisheye-10-0-2-15.inin.com in environment 
production in 2.06 seconds
Notice: /Stage[main]/System::Facts/Facter::Fact[service_discovery]/File[/etc/facter/facts.d/service_discovery.sh]/ensure: created
Notice: Finished catalog run in 23.22 seconds

real 0m27.550s
user 0m4.408s
sys 0m2.292s

# Removing script again (cuz puppet run put it back)
[root@fisheye-10-0-2-15 manifests]# rm 
/etc/facter/facts.d/service_discovery.sh
rm: remove regular file `/etc/facter/facts.d/service_discovery.sh'? y
[root@fisheye-10-0-2-15 manifests]# ls /etc/facter/facts.d/

#Time of facter run without external script
[root@fisheye-10-0-2-15 manifests]# time facter
architecture => x86_64
augeasversion => 0.9.0
...
virtual => virtualbox

real 0m0.687s
user 0m0.324s
sys 0m0.287s



[Puppet Users] Re: external facts cause puppet apply to take inordinately longer to run

2013-12-02 Thread Glenn Poston
Any ideas anyone?

[vagrant@fisheye-10-0-2-15 ~]$ facter --version
1.7.3
[vagrant@fisheye-10-0-2-15 ~]$ puppet --version
3.3.2




[Puppet Users] Re: external facts cause puppet apply to take inordinately longer to run

2013-12-03 Thread Glenn Poston


On Tuesday, December 3, 2013 11:08:33 AM UTC-5, jcbollinger wrote:
>
>
>
> On Wednesday, November 27, 2013 11:17:44 PM UTC-6, Glenn Poston wrote:
>>
>> My external fact script takes 5s to run.
>>
>> With external fact...
>> puppet takes 2.5m to run
>> facter takes 33s to run
>>
>> Without external fact...
>> puppet takes 27s to run
>> facter takes 0.68s
>>
>> Bottom line... there's no significant change in facter runtime when 
>> parsing the external fact, but the puppet runtime quadruples.
>>
>
>
> Well, no significant change in facter's runtime give or take a factor of 
> x50.  What's two orders of magnitude between friends?
>
> Really, a five-second runtime for a fact is pretty extreme, but even that 
> seems not to match the timings you report -- it looks more like your 
> external fact takes 32 seconds to run.
>

When I run the external fact script by itself it takes 5s to run.  Facter 
takes 32 seconds longer to run.

Here's the timing data for the run of the script outside of facter:

#Time of external fact script
[root@fisheye-10-0-2-15 manifests]# time 
/etc/facter/facts.d/service_discovery.sh
environment=test
...
service_discovery_script=ran

real 0m5.478s
user 0m0.053s
sys 0m0.111s
 

>
> In the past, Puppet has sometimes had issues with unnecessarily evaluating 
> facts more than once, so perhaps such an issue is magnifying the effect of 
> your slow fact.  Or perhaps the fact value is extremely large.  Or maybe 
> evaluating the fact leaves the node in a state that slows or delays 
> communication with the master.
>

I would agree that a good theory is that puppet is unnecessarily evaluating 
facts more than once.  Also, these puppet runs are in standalone mode, so 
the master is not an issue.  The script will resolve 32 facts.  Each fact 
has a key that is less than 20 characters and a value that is less than 100 
characters.
 

>
> It is probably worth your while to run the agent with the "--test 
> --evaltrace" options to get a more detailed breakdown of how the time is 
> being consumed.  Please post the whole timing report.
>

Since we're running puppet in standalone mode, --test and --evaltrace 
aren't available.  I ran in --debug mode, but there's nothing being output 
about facter resolution so that wasn't very helpful.
 

>
> Also, it might be helpful to know what your external fact script is 
> actually doing.  Are you at liberty to post that?
>

The fact is making a curl request to a static file in AWS S3 that contains 
the key/value pairs.  Here is the external fact script (URL's have been 
changed)

#!/bin/bash
# Fetch key=value pairs from S3 and re-emit them as external facts,
# replacing dots in key names with underscores.
curl -s https://s3.amazonaws.com/blah/blah/blah | while read line; do
  key=`echo $line | cut -d = -f 1 | sed 's/\./_/g'`
  val=`echo $line | cut -d = -f 2`
  echo "$key=$val"
done
echo "service_discovery_script=ran"


>
>> Should I submit a bug for this?
>
> I think that would be premature at this time.  The issue is likely to be 
> tied to details of your external fact script, so although there may indeed 
> be a bug to report, I don't think we know yet what it might be.
>
>
> John
>
>



[Puppet Users] Re: external facts cause puppet apply to take inordinately longer to run

2013-12-03 Thread Glenn Poston
I was curious how much the long-running external fact was affecting the 
timing, so I updated the external fact script to simply echo the content 
(instead of fetching it from s3).  The facts it produced were the same as if 
it had fetched them from S3.

I then introduced a 'sleep x' before the echo statements in the script.  Now 
we see some compounding delays.  I think this adds a bit of support to the 
theory that puppet is unnecessarily re-evaluating facts, but it appears that 
facter may have some compounding delays as well.
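
The stub looked roughly like this (a sketch; the sleep value was varied per 
run and the key=value pairs are illustrative):

#!/bin/bash
# simulate a slow external fact: pause, then emit static facts
sleep 5
echo "environment=test"
echo "service_discovery_script=ran"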

Bottom Line:

When the script resolves instantaneously (echo statements only)
script takes .004s
facter takes .754s
puppet takes 1m

When the script takes 1s (sleep 1, then echo)
script takes 1s
facter takes 6s
puppet takes 1m12s

When the script takes 5s (sleep 5, then echo)
script takes 5s
facter takes 30s
puppet takes 2m38s



[Puppet Users] Re: external facts cause puppet apply to take inordinately longer to run

2013-12-03 Thread Glenn Poston
This bug has been reported, but I think its severity is understated. 
Please vote for these bugs:

https://projects.puppetlabs.com/issues/22944 -> original
https://projects.puppetlabs.com/issues/23335 -> mine (duplicate)



Re: [Puppet Users] Re: external facts cause puppet apply to take inordinately longer to run

2013-12-03 Thread Glenn Poston
I hope no one takes this personally, but I've got to ask.  

I've been using external facts for 3 years.  I started with facts-dot-d 
(which I'm sure you're familiar with :).  This issue did not exist when I 
was using that beautiful little gem.  Why the re-write/integration when it 
worked perfectly fine?

On Tuesday, December 3, 2013 12:33:35 PM UTC-5, R.I. Pienaar wrote:
> there was either a thread here or a ticket filed that puppet will run 
> external facts as much as 5 times over for a single run; might be worth 
> adding some debug logging to a file in your fact to see if this is the 
> case?
>



Re: [Puppet Users] Re: external facts cause puppet apply to take inordinately longer to run

2013-12-03 Thread Glenn Poston
Thanks Josh... I made some comments on that ticket and added a vote.

On Tuesday, December 3, 2013 1:15:49 PM UTC-5, Josh Cooper wrote:
>
> On Tue, Dec 3, 2013 at 10:01 AM, Glenn Poston wrote:
>
> This is https://projects.puppetlabs.com/issues/22944. Facter will execute 
> external facts multiple times. I've seen it as little as 6 on Mac and 19 on 
> Windows. We are working on a fix for this to be released in 1.7.4.
>
> Josh
>
> -- 
> Josh Cooper
> Developer, Puppet Labs
>  



[Puppet Users] Re: Puppet fails on aws instance using ruby19 yum install

2013-12-04 Thread Glenn Poston
I'm having the exact same issue.  Has anyone had success with puppet 3, 
ruby19, and Amazon Linux?  Any help?

On Monday, October 7, 2013 11:27:41 AM UTC-4, Robert Logan wrote:
>
>
> I've been trying to use the ruby19 and rubygems19 packages from amazons 
> yum repo but cant get around this issue:
>
> [root@ip-10-234-225-44 ~]# puppetd --test
> /usr/share/rubygems1.9/rubygems/custom_require.rb:36:in `require': cannot 
> load such file -- puppet/application/agent (LoadError)
> from /usr/share/rubygems1.9/rubygems/custom_require.rb:36:in 
> `require'
> from /usr/sbin/puppetd:3:in `'
>
> The above occurs using the amazon repo puppet install (2.7.23) at time of 
> writing, and the puppetlabs yum repo below is the same ..
>
> [root@ip-10-234-225-44 ~]# puppet agent --test
> /usr/share/rubygems1.9/rubygems/custom_require.rb:36:in `require': cannot 
> load such file -- puppet/util/command_line (LoadError)
> from /usr/share/rubygems1.9/rubygems/custom_require.rb:36:in 
> `require'
> from /usr/bin/puppet:3:in `'
>
>
> [root@ip-10-234-225-44 ~]# ruby --version
> ruby 1.9.3p448 (2013-06-27 revision 41675) [x86_64-linux]
> [root@ip-10-234-225-44 ~]# ruby -e 'puts $:'
> /usr/local/share/ruby19/site_ruby
> /usr/local/lib64/ruby19/site_ruby
> /usr/share/ruby/1.9/vendor_ruby
> /usr/lib64/ruby/1.9/vendor_ruby
> /usr/share/rubygems1.9
> /usr/share/ruby/1.9
> /usr/lib64/ruby/1.9
>
> Does anyone have any ideas on this?
>



[Puppet Users] Re: mco package fail with puppet 3.4.0

2013-12-27 Thread Glenn Poston
I can confirm this.  Same issue here.

On Thursday, December 26, 2013 1:54:33 PM UTC-5, Fabrice Bacchella wrote:
>
> When I upgrade a node with puppet 3.4.0, puppet-package is broken:
>
> ~$  mco package puppet status -I $(facter hostname) -v 
>
>  | [ > ] 0 / 1 
> The package application failed to run, use -v for full error details: 
> undefined class/module Puppet:: 
>
> undefined class/module Puppet:: (ArgumentError) 
> from /usr/libexec/mcollective/mcollective/security/psk.rb:27:in 
> `load'  < 
> from /usr/libexec/mcollective/mcollective/security/psk.rb:27:in 
> `decodemsg' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/message.rb:182:in 
> `decode!' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/client.rb:93:in 
> `receive' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/client.rb:152:in 
> `req' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/client.rb:151:in 
> `loop' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/client.rb:151:in 
> `req' 
> from /usr/lib/ruby/1.8/timeout.rb:67:in `timeout' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/client.rb:148:in 
> `req' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/rpc/client.rb:851:in 
> `call_agent' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/rpc/client.rb:244:in 
> `method_missing' 
> from 
> /usr/libexec/mcollective/mcollective/application/package.rb:63:in `send' 
> from 
> /usr/libexec/mcollective/mcollective/application/package.rb:63:in `main' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/application.rb:285:in 
> `run' 
> from /usr/lib/ruby/site_ruby/1.8/mcollective/applications.rb:23:in 
> `run' 
> from /usr/bin/mco:20 
>
> I'm running this on a up to date scientific linux 6.4 with up to date 
> mcollective and puppet rpm directly from puppet labs : 
>
> ~# rpm -qa | grep -e puppet -e mcollective 
> mcollective-package-agent-4.2.0-1.noarch 
> mcollective-package-client-4.2.0-1.noarch 
> mcollective-2.2.4-1.el6.noarch 
> puppet-3.4.0-1.el6.noarch 
> puppetlabs-release-6-7.noarch 
> mcollective-common-2.2.4-1.el6.noarch 
> mcollective-client-2.2.4-1.el6.noarch 
> mcollective-puppet-common-1.6.0-1.noarch 
> mcollective-puppet-client-1.6.0-1.noarch 
> mcollective-package-common-4.2.0-1.noarch 
>
