I went to update our ruby-1.8 installation and unfortunately, it
appears to have broken something that Puppet depends on:
# service puppetmaster start
Starting puppetmaster: /usr/lib/ruby/site_ruby/1.8/rubygems.rb:334:
warning: parenthesize argument(s) for future version
/usr/lib/ruby/site_ruby/1
Never mind; I accidentally built an older version in the wrong directory.
I'm attempting to deploy puppet via an NFS share. It's on a local-only
network, and it will contain only Ruby (with gems) and whatever else is needed.
Seems simple enough, but tonight I am having an issue with this error:
# service puppet start
Starting puppet: /local/lib/ruby/site_ruby/1.8/rubygems/
custo
> This is probably not an issue with the executable search path, but
> rather with the Ruby path. It looks like whichever Ruby interpreter
> you are using to run Puppet is unable to find one of the files
> (probably openssl.rb) that it expects to see in the Ruby library. The
> Ruby interpreter i
This must have to do with an include path, as here is where I find the
code:
/local/lib/ruby/site_ruby/1.8/rubygems/gem_openssl.rb
It's been a while, but I think the SITE_RUBY include path is configured
somewhere, and that may be the issue.
I just did a basic find statement and found:
# pwd
/local/lib/ruby/gems/1.8/gems/puppet-2.7.11
# find . -exec grep -i site_ruby {} \;
SITELIBDIR="/usr/lib/ruby/site_ruby/1.8"
sitelibdir = $LOAD_PATH.find { |x| x =~ /site_ruby/ }
sitelibdir = File.join(libdir, "site_ruby")
I built Ruby with:
./configure --prefix=/local
I see that you can specify a number of options with ./configure,
including:
--with-sitedir=DIR site libraries in DIR [LIBDIR/ruby/site_ruby]
but the default for that switch should just work. I see
these options, too:
--exec-pre
Also, the puppet I'm using (on all systems) is installed from the gem.
I installed this using the full path to the shared gem, simply:
/local/bin/gem install puppet
The puppet code, et al, is in the proper place under /local/lib/
I don't need to worry about the installation, I can overwrite it any
time -- this is my test platform.
I did bootstrap ruby and inst
Here are the errors in full, and demonstrating the entire path:
[ /local]# bin/gem list
*** LOCAL GEMS ***
facter (1.6.5)
puppet (2.7.11)
[ /local]# bin/puppet --version
/local/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:36:in
`gem_original_require': no such file to load -- openssl (Lo
>
> Did you build this on a system with openssl-devel or equivalent installed? Or
> install openssl+devel headers into your /local location?
BINGO! That was the problem. THANK YOU.
What really gets me is that I would never have figured this out, based
on the ambiguous error message.
I just
I have Puppet mounted to a couple of machines via NFS (along with
Ruby) -- and it works fine. Except, I just noticed that it fools
Facter into believing it's a physical machine, when in fact one of
them is a VMware virtual machine.
We don't really use this setting, but I'm concerned other settings
might n
I'm not sure I understand his setup, or what he means by "minimal
install".
My environment on the VMware image is CentOS 5.7, it is a full release
and the NFS mount contains a full release of Puppet and Ruby 1.8.x.
Perhaps there's something that Facter gets wrong when it's being
called from a non
Nothing was copied over. The NFS mount code was built and then
exported; 32- and 64-bit respectively. The code was built from
scratch and installed with the appropriate locally mounted prefix (in
this case, /local).
I'm on RHEL 5.x and we only have /proc/self/status which doesn't seem
to indicat
Interestingly, the command "facter serialnumber" correctly pulls that
it's a VMware system:
# /local/bin/facter serialnumber
VMware-56 4d 00 7e e8 3b e8 c9-85 7f 4e XX XX XX XX XX
On another system, same NFS mounts, the "facter virtual" reports the
correct information, that system is running:
2.6.18-194.3.1.el5
The system that doesn't correctly report is:
2.6.18-274.18.1.el5
I don't know if that really matters.
I've not made any changes to the config recently, only keeping Puppet
and ruby 1.8 up-to-date. Recently, I noted my systems are logging
this ambiguous error:
Could not retrieve catalog from remote server: wrong header
line format
I say ambiguous as a Google search shows several things t
I answered my own question. It seems there was a missing ' or " in
one of the configs -- reported in the HTTP log. But the error itself
doesn't tell me much.
Thanks.
I read that this doesn't always work on every OS. However, I'm on
RHEL, and from what I'm reading the following should correctly set /etc/shadow:
@user { "myuser":
require=> Group['staff'],
ensure => present,
uid=> '2345',
gid=> '90',
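For reference, the snippet above is cut off; a fuller sketch of the shape
I'm after looks roughly like this (the group, uid/gid, and password hash are
all placeholders, and my understanding is that managing the password on Linux
needs the ruby-shadow library available to Puppet):

@user { "myuser":
    ensure     => present,
    uid        => '2345',
    gid        => '90',
    groups     => ['staff'],
    managehome => true,
    require    => Group['staff'],
    # placeholder: a real SHA-512 crypt hash would go here
    password   => '$6$somesalt$somehashedvalue',
}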
I want to distribute a binary directory based upon whether the
"architecture" is 32- or 64-bit. It appears I cannot nest a case
statement under file, however this is what I was attempting to do:
file { "/usr/local/nagios/libexec":
require => File['/usr/local/nagios'],
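Roughly, the shape I was trying to express is below; I've since gathered the
case has to live outside the resource body, and the module paths here are
just placeholders for wherever the 32- and 64-bit plugin trees end up:

case $::architecture {
    'x86_64': { $libexec_source = 'puppet:///modules/nagios/libexec64' }
    default:  { $libexec_source = 'puppet:///modules/nagios/libexec32' }
}

file { "/usr/local/nagios/libexec":
    ensure  => directory,
    recurse => true,
    source  => $libexec_source,
    require => File['/usr/local/nagios'],
}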
Thank you, I appreciate it. Still learning all the interesting
nuances of this syntax. I'm not yet familiar with this $::
So, it's choking on this still at the line with the conditional:
Apr 17 18:58:17 test-system puppet-agent[7590]: Could not retrieve
catalog from remote server: Error 400 on SERVER: Could not parse for
environment production: Syntax error at '{'; expected '}' at /etc/puppet/manifests/classes/nagio
So there were two gotchas :-) One was my mistyped /, and the other was the
missing ? in the evaluation ;-)
Thanks again, guys, I appreciate the feedback.
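For anyone who finds this later, the missing ? was in a selector along these
lines (the paths are just illustrative):

$libexec_source = $::architecture ? {
    'x86_64' => 'puppet:///modules/nagios/libexec-64',
    default  => 'puppet:///modules/nagios/libexec-32',
}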
What is the status of compatibility with Puppet 2.7.16+ and Ruby
1.9? I searched through this group and found some older posts. I am
not certain what the core issues are (and there's probably a
PuppetLabs page for it, I bet).
Thanks.
Thanks everyone for your replies. I've been waiting to upgrade our
infrastructure to 1.9.
We don't do any esoterica, it's just a simple puppet setup (for the
time being) - not even marionette.
I switched our config to use an NFS mount for puppet and ruby (configs
stored locally on each machine,
As I've gotten to the point of configuring server.cfg and client.cfg,
based on the documentation in Pro Puppet (which also references use of
RPMs), it seems we have some configuration issues -- perhaps about
standards of where things need to be located.
The book refers to a non-existent directory,
Because I'm using Enterprise Ruby I had to manually run "make rpm" and
use those. This is on RHEL5. It's just confusing when a book says
one thing, the dist another.
I also noticed that /usr/libexec/mcollective may have been installed
incorrectly by the RPM. It installs it as /usr/libexec/mcol
Where is the plugins directory supposed to be installed/located?
I made an adjustment to my puppet config for three systems today,
which has the "remount" option set to true in the *.pp, and I see this
in the logs on my systems:
Execution of '/bin/mount -o remount /home/directory' returned 32:
mount.nfs: Invalid argument
"remount" is valid to the "mount" comma
I have a requirement to manage group membership in /etc/group.
These members do not need to be virtual users. I don't see a way to
do this through virtual users @group. How are others doing this?
wrote:
> The user type allows you to specify supplemental groups (see the groups
> parameter). Is that what you were looking for?
>
> http://docs.puppetlabs.com/references/stable/type.html#user
>
> On Mon, Oct 17, 2011 at 03:04:26PM -0700, Forrie
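That does look like the right direction. As I read the docs, a sketch along
these lines (the group names are made up) would manage the supplementary
membership in /etc/group:

user { "myuser":
    ensure => present,
    gid    => 'staff',
    # supplementary groups; membership defaults to 'minimum', so these are
    # added without stripping the user's other groups
    groups => ['wheel', 'nagios'],
}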
We aren't using LDAP yet...
On Oct 17, 6:50 pm, Christopher Wood
wrote:
> If you're using ldap, why not handle groups there?
>
> On Mon, Oct 17, 2011 at 03:48:33PM -0700, Forrie wrote:
> > I want to manage the membership of the /etc/g
I've read previous posts about iterating over arrays, hashes, etc.
I have a series of directories that need to be created (and
maintained, with appropriate permissions) that serve as NFS mount
points on a series of systems. Sometimes, when they are no longer
needed, they will be removed (anothe
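The pattern I'm leaning toward is a defined type called with an array of
titles, roughly like this (names and mode are made up; I haven't settled on
permissions yet):

define nfs_mountpoint($mode = '0755') {
    file { $name:
        ensure => directory,
        owner  => 'root',
        group  => 'root',
        mode   => $mode,
    }
}

nfs_mountpoint { ['/mnt/data1', '/mnt/data2', '/mnt/data3']: }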
Is there a way to install just the client component of the Puppet gem,
instead of both, on systems that don't need the server/master component?
Really. I think the *.gem is convenient. However, the args to
install.rb (puppet) don't seem to indicate one or the other (server/
client):
Usage: install.rb [options]
--[no-]rdoc Prevents the creation of RDoc
output.
Default on.
I guess I'm just a little surprised that install.rb doesn't have an
option to modify the installation, like ./install.rb --client ...
I can't find the ticket link.
FWIW: These are the files installed for the server component:
./etc/puppet
./etc/puppet/fileserver.conf
./etc/puppet/manifests
./etc/puppet/puppetca.conf
./etc/puppet/puppetd.conf
./etc/puppet/puppetmasterd.conf
./etc/rc.d/init.d/puppetmaster
./etc/sysconfig/puppetm
What is the disadvantage of using the puppet gem vs. installing from
source (install.rb)?
On Jun 15, 5:33 pm, Todd Zullinger wrote:
[ .. ]
> I'd highly recommend not using gems. :)
We have an environment that depends largely on a series of NFS mounts
on our application systems. Sometimes NFS mounts go away, things stop
working.
In the past, I've had problems with autofs (rpc storms) when things go
badly, and we're just beginning to get into Puppet (read: I'm learning
it).
Minor buglet: the *.spec file for linux needs to be updated for
2.6.1.
I'm just beginning with Puppet. One issue I've run into is updates.
As my nodes expand, updating each individual node via my manual method
becomes daunting.
Could someone share their recipe or procedure for using the puppet
master to distribute updates of puppet to the client nodes? I would
gue
That's more of what we're looking to do. I think it would be a bad
idea to have puppet automatically updating clients. This would need
to be a one-off, scheduled item you would trigger from the puppet
master server, under the default {} node, I would presume;
Perhaps having the puppet (and facte
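The rough, untested idea I have in mind is a class applied to every node with
the package pinned to an explicit version, so an upgrade only happens when we
deliberately bump the number on the master; since we install Puppet as a gem,
something like:

class puppet_version {
    package { 'puppet':
        ensure   => '2.7.11',   # bump this deliberately to roll out an upgrade
        provider => 'gem',
    }
}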
I've noticed that events for puppet client are logged locally (I'm
just starting out with puppet). Is there a way to have these events
sent to a central parser so they might be easily parsed/sorted/acted-
upon?
I probably missed a configuration directive.
Thanks.
Thanks for posting the code snippet.
We haven't been building RPMs for internal use; so, I certainly could
work on that. I think we might be able to do a filesystem tree copy
-- for example, track the files that get installed on the master
server, then copy those to the files repository under pu
I found a bug from April of 2010 referencing this, though I would
imagine this should be fixed by now. I'm having this problem with
1.0.4 after the upgrade.
I'm going to start all over again and see if I can repeat the error;
however, is this still a bug or did I mess up the installation I have?
This appears to be a problem with data that got stored in the database
during the migration as I was able to duplicate the error on a fresh
installation of 1.0.4.
Facter will display the values associated with network_* specific
settings. Shouldn't there be a way to display all connected (active)
networks in one command? For example:
# facter networks
192.168.1.2
10.0.1.1
10.10.23.0
I could then formulate a conditional based on the available
networks
I'm somewhat new to Puppet. I'm trying to establish some Virtual
Resources so I can realize them based on a TAG.
The error I continue to get is:
Feb 16 18:02:38 test-fms puppet-agent[8590]: Could not retrieve
catalog from remote server: Error 400 on SERVER: Syntax error at
'201001'; expected '}
Thank you! That worked. I was actually following an incorrect
example whereby the definition itself was capitalized.
What really irks me here is that the error message I was getting was
irrelevant -- at least, in my opinion it needed to be more specific
with syntactical errors. There must be s
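For the archives, the shape that ended up working for me is a lower-cased
virtual resource collected with the capitalized type name and a tag; roughly
(the tag and user are just examples):

@user { "deploy":
    ensure => present,
    uid    => '3001',
    tag    => 'webteam',
}

User <| tag == 'webteam' |>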
I'm trying to get a simple NFS mount to work with Puppet, using this:
[ init.pp ]
class myclass {
mount { "/home/directory":
device => "server.domain.com:/exportdir/directory",
fstype => "nfs",
ensure => "mounted",
options => "tcp,intr,hard,rw,bg,rsize=3276
.. is there a clever way I can do this for multiple directories? Maybe from
an array.
Thank you.
On Mar 3, 7:17 pm, Ben Hughes wrote:
> On Thu, Mar 03, 2011 at 03:11:40PM -0800, Forrie wrote:
> > I'm trying to get a simple NFS mount to work with Puppet, using this:
>
>
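Following up on my own question, the shape I'm considering is a defined type
wrapping mount, called with an array of titles; the server name, export base,
and the mapping of local path to export path below are all made up for
illustration:

define nfs_mount($server = 'server.domain.com', $export = '/exportdir') {
    mount { $name:
        ensure  => mounted,
        device  => "${server}:${export}${name}",
        fstype  => 'nfs',
        options => 'tcp,intr,hard,rw,bg',
    }
}

nfs_mount { ['/home/directory1', '/home/directory2']: }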
It looks like you can't change the definition of "ensure => mounted"
to "ensure => absent" and have it automatically remove the managed
resource (mount point). We have a series of directories that are
used for all three terms; after we're done, we don't need the NFS
mounts or directories present
Not a big deal; I don't mind editing them manually for now.
So are you saying for the "absent" items, we'll need to include a
file{} directive to remove the mount point, too?
Thanks again.
On Mar 3, 8:55 pm, Ben Hughes wrote:
> On Thu, Mar 03, 2011 at 05:49:58PM -0800, Forri
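So, if I've understood the advice, retiring one of these needs both resources,
something like the following; the force parameter is my assumption about
removing a directory with the file type:

mount { "/home/directory":
    ensure => absent,    # removes the fstab entry and unmounts
}

file { "/home/directory":
    ensure  => absent,
    force   => true,     # assumption: needed so the directory itself is removed
    require => Mount['/home/directory'],
}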
I manually created the mount points and incorrectly assumed the mount
process would automatically create the point if it didn't exist.
Once a managed mount point is no longer needed, we want the NFS mount
and its directory removed from the client -- if we need them again, I can
just keep them commente
I decided to try and modify an existing NFS module (puppet-nfs) from
GitHub to suit my needs (rather than reinvent the wheel). I'm stuck
on the "ensure" property of "file" -- there is some discussion on the
net (bug reports) that this does not recursively create the directory
structure... hence:
I want to realize some virtual users on several systems, but I want
that to depend upon a certain group being present (and created if
needed) first. How can I do this?
The following works, because the group is being realized first.
Group <| title == mygroup |>
User <| title == m
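What I was missing, as I understand it now, is that the collectors themselves
can be ordered, so the group is applied before the user; if I have the chaining
syntax right, something like:

Group <| title == mygroup |> -> User <| title == myuser |>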
I have some NFS directories that get changed to a read-only status
after a period of time. I noticed in puppet, if I change these
values and restart puppet and puppetmaster respectively, it does not
pick up the change in the mount {} directive.
Is there a way around this, or a better way to affe
Thanks for this information; this is a somewhat frustrating issue.
IMHO, as per configuration management, if the mount options change,
then Puppet should "do the right thing" and umount and remount with
the proper options. One problem there is what if someone is in the
directory mount and puppet c
I've read a couple of bug reports via Google about problems managing
directories recursively with Puppet. Another article suggests to
create a resource that points to an empty directory on the file store,
then use file resources to populate it (which I cannot get to work).
The common error being:
I only have a directory like:
/usr/local/nagios/libexec
for which I want to manage the plugins on the clients. It's pretty
simple.
So are you suggesting the better approach may be to exec a mkdir -p as
a requirement in the head of the *.pp as a dependency? Meaning, it
would detect if the dire
rver puppet-agent[28997]: Finished catalog run in
0.30 seconds
On Mar 31, 4:30 pm, Arnau Bria wrote:
> On Thu, 31 Mar 2011 13:19:21 -0700 (PDT)
>
> Forrie Forrie wrote:
> > I only have a directory like:
>
> > /usr/local/nagios/libexec
>
> > for which I want to
I found that the file struct under /etc/puppet/files was owned by root
(oops, fixed).
However, using the method outlined earlier, I'm still not able to get the
desired result:
> file { "/usr/local/nagios": ;
> "/usr/local/nagios/libexec":
> requires => File['
This actually seems to work better:
file { "/usr/local/nagios":
ensure => directory,
owner => 'root',
group => 'root',
mode => 655,
}
file { "/usr/local/nagios/libexec":
I've been working with a file of virtual users that I want to
"realize" on certain hosts. For one of these, I need an
authorized_keys file. After experimenting with the resource
ssh_authorized_key, I thought I could create a dependency relationship
like this:
Ssh_authorized_key <| title ==
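The full shape I'm attempting, with placeholder names and key material, is a
virtual key collected with an override that ties it to the user:

@ssh_authorized_key { "nagios@central":
    ensure => present,
    user   => 'nagios',
    type   => 'ssh-dss',
    key    => 'AAAAB3...placeholder...',
    tag    => 'nagios-keys',
}

Ssh_authorized_key <| tag == 'nagios-keys' |> {
    require => User['nagios'],
}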
On my test system, I noticed that (with virtual users) if you remove /home/username, puppet doesn't realize there is a problem, as the
resource doesn't track the home directory.
When you userdel the account, it then notices and creates everything as it
should be.
In a virtual user configuration, what's the b
Our shop is newly adopting puppet. Our number of nodes is growing
and my installation method is thus far manual and tedious. This will
change when/if we migrate to Puppet Enterprise.
My question is what's a best practice for managing puppet
installations on client nodes? Is it possible to se
>
> Sounds like a bug to me. A user with managehome => true but no home
> directory should not be in sync. You may want to report this (or vote on
> the bug if it's been reported already).
I wasn't able to find a bug similar to this based on the search
criteria, so I filed bug #7002.
>
> > In a
A while ago, I noticed a *.spec file in the puppet distribution - but
I think it was out of date. I could use that to distribute an RPM.
Curious, do you separate out the client/server portions for
installation, or just install the whole thing on client systems?
This will be different for us when
I had to write up a quick *.pp to push out SSH keys for our nagios
user, while I work on a better solution for managing these. To my
surprise, I found multiples (100 or more?) of the same key in the
authorized_keys file, which is definitely wrong. I'm including the
simple code below -- can some
Thanks, this was the problem. Sounds like a bug to me, though?
How can I go through my systems and remove all the tens of redundant
SSH-DSS keys that have the comment in them? I dread doing that by
hand :-)
Thanks again.
On Apr 11, 5:12 pm, Patrick wrote:
> On Apr 11, 2011, at 1:40 PM
In our environment, there are several services that are deployed via an
NFS mount, so that the executables and configs are consistent across
the board.
Is there any reason why this couldn't be done with Puppet? For
example, each individual system would contain its own /etc/puppet and
rc.d and pid fi
Enterprise version
similarly).
On Apr 13, 2:56 pm, Mohamed Lrhazi wrote:
> Thats how we deployed to our Solaris hosts, ruby, puppet and
> mcollective, all from OpenCSW, all on a readonly mounted share
> "/opt/csw"
> Seems to work fine so far.
>
> have to tweak things here and there to make everything use the right
> paths, but it ought to work.
>
> On Wed, Apr 13, 2011 at 3:46 PM, Forrie wrote:
> > Other than local configs for each system, were there any other
> > issues. For o
We have resources that, from time to time, are selected to be removed
(unmanaged). When it comes to ssh keys, fstab... this leaves a lot
of stuff behind that we don't want. Is there a simple way to remove
the unmanaged data so we can keep the systems clean?
Thanks.
+1 here for this.
Having said that, I think mingling the setting in there with the two
basic types would be essential to keep this simple. All the heavy
lifting should go on automatically in the background. A sysadmin
should only have to change the mount to "rw" or "ro" and have Puppet
do the
have the option to edit it out. (yes I
know you can just do a file://)
* NFS mounts. When a mount is no longer managed, I'd like to also
remove the mount point under certain conditions.
On Apr 19, 9:18 pm, Ben Hughes wrote:
> On Tue, Apr 19, 2011 at 11:38:35AM -0700, Forrie wrote:
>
This type seems like it could have unwanted side effects. I'd prefer
there be a way to specify the resource I want purged, instead of just
all or nothing.
On Apr 21, 2:36 pm, Nigel Kersten wrote:
> On Thu, Apr 21, 2011 at 11:21 AM, Forrie wrote:
> > That only removes the mana
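For context, my understanding of the suggestion is the resources metatype,
which purges every unmanaged instance of a given type; that is exactly the
all-or-nothing behavior I'm wary of. A sketch:

resources { 'host':
    purge => true,   # removes any /etc/hosts entry not declared in the manifests
}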
Fair enough :-) In my case, we have NFS mounts that rotate out --
and I dread having to manually prune the many systems we use. Puppet
will handle /etc/fstab, but I want to remove the mountpoint as well.
On Apr 21, 4:29 pm, Nigel Kersten wrote:
> On Thu, Apr 21, 2011 at 1:24 PM, For
How do you handle the *.rpm prerequisites of puppet itself? If one
installs (deploys) puppet on an NFS mount, presumably you would also
include enterprise-ruby (or standard) with those dependencies there.
Enterprise Ruby seems to have rolled their own rpms, prefixed with
"pe-".
I suppose you'll
I did it on Solaris and it works fine.
>
> > On Mon, Apr 25, 2011 at 9:36 PM, Mohamed Lrhazi wrote:
> >> there should be dependencies for REE.. is all goes under
> >> /opt/ruby-enterprise.
>
> >> On Mon, Apr 25, 2011 at 5:13 PM, Forrie wrote:
> >
interesting to have the configs on NFS, but I
don't know if that would scale very well.
Thanks!
On Apr 26, 8:13 pm, Dominic Maraglia wrote:
> Hello,
>
> On 4/25/11 2:13 PM, Forrie wrote:
>
> > How do you handle the *.rpm prerequisites of puppet itself. If one
> >
Also, do you just install Puppet from source or include the gem in the Ruby
distribution? I would think managing it from source might
be better, if it's just an NFS RO mount point.
ve a local /etc/init.d/puppet script. So why bother trying
to store the localized items on NFS... ?
On Apr 29, 12:14 pm, Dominic Maraglia wrote:
> On 4/28/11 11:57 AM, Forrie wrote:
>
> > Thanks for the feedback.
>
> > It will make life a lot easier if I can deploy/maintai
Is anyone currently using Puppet to manage and/or rollout Adobe FMS?
If so, I'd be interested in knowing about your config, approach, etc.
I can see where the configs can be templated. Applications installed
separately (ours are complicated with configs). If you install the
base code on a local
We are using Enterprise Ruby - I want to dig into Mcollective;
however, there is no installation script in the *.tgz. Also, the
rpms have various requirements from the stock distribution (RHEL5 in
my case) that are already present under our Enterprise Ruby
installation.
Is there some c
>
> you should have good milage with more or less:
>
> git clone
> git checkout 1.2.0
> rake rpm
>
> if your /usr/bin/env ruby does the right thing, you should end up with
> rpms that are built for your ruby
This still leaves the dependencies issue with rubygems-stomp, ruby,
etc., all of which
> not sure if there's a way to tell rpmbuild to do that, but you can
> just comment them out in the spec file. or when you install you can
> do rpm -ivh --nodeps to skip them.
>
> There's a ticket open to make this kind of thing easier - being able
> to rebuild the rpms for different rubies instal
My PATH is set with the /opt/ruby/bin pointer in the front. When I
run /usr/bin/env on *any* of the many RHEL5 systems I have, it just
hangs. Some of these I didn't set up, some are just stock RHEL
installs. So I don't know whether this is a configuration problem or
not.
By install script, I m
I think I presumed /usr/bin/env returned something. It turns out, it
just executes the ruby binary. So that part is working.
I now need to sort out the installation issue, mentioned above.
Would it be feasible to have an install.rb that you could pass flags
to for:
* common
* server
* client
*
Point taken.
Back to the RPM issue. For those of us that do not use a standard
system-provided ruby installation, such as Enterprise Ruby, how can we
adapt the installation so that it works?
As I mentioned earlier, I tried this with --nodeps and the ruby script
could not "require" the mcolle
Here's the output from the mcollective rpm install:
# rpm -ihv --nodeps *.rpm
Preparing...                ### [100%]
   1:mcollective-common     ### [ 33%]
   2:mcollective            ### [ 67%]
   3:mcollec
Alright, so when I install it this way, I still get errors, which
indicates the installation may not be correct. Here's a log:
# rpm -ihv --nodeps *.rpm
warning: mcollective-1.2.0-5.el5.noarch.rpm: Header V3 RSA/SHA1
signature: NOKEY, key ID 4bd6ec30
Preparing...
##
I verified that this is not getting confused with /usr/bin/ruby, which I also
removed as part of that test, and then removed and reinstalled mcollective.
A "gem" would fit within the ruby paradigm.
Indeed, I was using the supplied RPMs. The ones I built seem to be
doing something, though I need to get RabbitMQ running first before I can
test it. This error came up in the RPM build:
lib/mcollective/log.rb:83:79: Skipping require of dynamic string:
"mcollective/logger/#{logger_type.downcase}_l
Is there a way to deploy/install only the client portion of puppet?
I looked through install.rb and didn't see any specific options,
though I seem to recall another Linux dist that separated them out.
It's possible I'm approaching this incorrectly, though it doesn't make
a lot of sense to install
Puppet docs require a PUPPET server name -- for which I presumed a CNAME
would suffice. However, I'm finding that's not the case, as the SSL
cert generated is for the actual system name puppetmasterd runs on
(makes sense).
The server that puppetmasterd is running on serves other purposes,
and I don
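From what I've read since, the master's certificate can carry alternate DNS
names so that a CNAME validates; I believe (but have not tested) that newer
2.7 releases call the setting dns_alt_names (older ones used certdnsnames),
and that it has to be set in puppet.conf before the master's certificate is
generated, roughly like this:

[master]
    certname      = realhost.domain.com
    dns_alt_names = puppet, puppet.domain.com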
I read somewhere recently about problems with Puppet and Ruby 1.9.
I'm wondering if this is still an issue?
Thanks.
Why is it that I can only obtain certain variables from facter via the
command line, though it will dump everything without an arg?
For example, I'm on a ProLiant running Redhat 5... facter (running as
root) will dump out all its variables, but if I try to do:
# facter productname
it returns nothin