On Tuesday, October 2, 2012 9:35:13 AM UTC-5, Mike wrote:
>
> Hi,
>
> I've been having some problems with my Puppet configuration; I'm managing 
> several Ubuntu and OpenBSD hosts.
>
> I sometimes get this on OpenBSD hosts (the 5.0 in the path below is the 
> OpenBSD release):
> info: Retrieving plugin
> err: Could not retrieve catalog from remote server: Error 400 on SERVER: 
> Could not find template 'ubuntu-common/5.0/etc-openntpd-ntpd.conf.erb' at 
> /usr/share/puppet/modules/ubuntu-common/manifests/base.pp:7 on node xxx
> warning: Not using cache on failed catalog
> err: Could not retrieve catalog; skipping run
>
> But that is an Ubuntu template file; according to the case 
> $operatingsystem statement in site.pp, OpenBSD hosts should never include 
> it.
>
> I tracked down the exact scenario that makes it fail:
>
> 1. Restart the puppet master, then run the agent only on OpenBSD hosts: it 
> never fails; everything works as expected.
> 2. Restart the puppet master, then run the agent only on Ubuntu hosts: it 
> never fails; everything works as expected.
> 3. Restart the puppet master, then run the agent on both Ubuntu and 
> OpenBSD hosts: Ubuntu works as expected, but OpenBSD fails with the above 
> error message (after the first Ubuntu agent has connected).
> 4. After scenario 3, whenever a Puppet config file changes, OpenBSD works 
> again until an Ubuntu agent connects to the master.
>
> Do you have an idea of what could be wrong, or is this a known Puppet 
> issue?
>


I am not aware of any known Puppet issue that results in behavior 
comparable to this.

 

>
> Could it be a Puppet version issue? The master and Ubuntu hosts are using 
> 2.7.19, while the OpenBSD hosts are using 2.7.1. (I haven't tried 
> downgrading Ubuntu's version or upgrading OpenBSD's; the newer OpenBSD 5.1 
> ships with Puppet 2.7.5.)
>


That should not be an issue.  If the master and all the clients are on 
2.7.x, and none of the clients run a newer Puppet than the master, then you 
should be fine.  You should be OK even with 2.6.x clients.

 

>
> Given scenario 4, could it be a caching issue on the master?
>


It rather looks like one, but the key question is why such an issue would 
arise in the first place.  Puppet should never use cached details of one 
node's catalog to build a different node's catalog.  Although I can't rule 
out a Puppet bug, the behavior more likely arises from a problem in your 
manifests or client configuration.
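
For comparison, here is roughly the shape I would expect the per-OS 
dispatch in site.pp to take.  This is only a sketch: the module and class 
names are illustrative guesses based on your error message, not your real 
code.

node default {
  # The leading :: forces the top-scope fact.  Under 2.7's dynamic
  # scoping, an unqualified $operatingsystem can be shadowed by a local
  # variable set elsewhere, which is one way per-OS logic goes wrong.
  case $::operatingsystem {
    'Ubuntu':  { include ubuntu_common::base }
    'OpenBSD': { include openbsd_common::base }
    default:   { fail("Unhandled operatingsystem: ${::operatingsystem}") }
  }
}

If your real site.pp matches the fact without the $:: qualification, that 
would be one of the first things I would rule out.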


> I tried some puppet.conf options:
>
> ignorecache=true
> usecacheonfailure=false
>
> but it didn't change anything.
>


Those options affect only the agent as far as I know, not the master.
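
For completeness, if you keep them, they belong in the agent section of 
puppet.conf on each client, not on the master; something like this sketch:

# puppet.conf on each client
[agent]
    usecacheonfailure = false
    ignorecache = true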
 

>
> The master and Ubuntu hosts are running Ubuntu 12.04
> # puppet --version
> 2.7.19
> # ruby --version
> ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-linux]
>
> The OpenBSD hosts are OpenBSD 5.0
> # puppet --version
> 2.7.1
> # ruby18  --version
> ruby 1.8.7 (2011-06-30 patchlevel 352) [x86_64-openbsd]
>
> Here is a very simplified version of my Puppet files. I didn't try exactly 
> this subset of the configuration; I will try to narrow the problem down to 
> the simplest configuration that still reproduces it.
>


Often the process of narrowing it down will itself reveal the problem.  I 
don't see why the manifest set you presented would exhibit the problem you 
described, and since you didn't confirm that it does, I won't analyze it 
further.
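
One way to take the agents out of the picture while you narrow it down is 
to compile catalogs directly on the master, mimicking scenario 3's 
ordering.  The hostnames below are placeholders for your real node names:

# Run on the master, in the order that triggers the failure
puppet master --compile ubuntu-host.example.com
puppet master --compile openbsd-host.example.com

If the second compile fails with the same template error, you have 
reproduced the problem with no agents, caching options, or SSL involved.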

My only guess at this point is that something is screwy with your SSL 
configuration.  Puppet identifies nodes by the certname on the client 
certificates they present, so if you've done something like sharing the 
same client cert among all your clients then the master will not recognize 
them as different nodes.  Node facts (including $hostname) do not enter 
that picture.
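
That is easy to check from the command line:

# On the master: every node should appear exactly once, under its own name
puppet cert --list --all

# On each client: print the certname it presents to the master
puppet agent --configprint certname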


John
