Hi,
How do I include a module's class in the manifests? Please find below
the details of the file locations. The Puppet location is /etc/puppet/.
In that location I have manifests and modules folders.
In modules, I have a module "abc" in which I have an xyz manifest.
xyz has the following code, located in /etc/p
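For reference, a minimal sketch of pulling such a class into a node definition in
/etc/puppet/manifests/site.pp (the class name abc::xyz is an assumption based on
the module/manifest names described above; the node name is a placeholder):

# site.pp -- sketch only; assumes modules/abc/manifests/xyz.pp defines class abc::xyz
node 'client.example.com' {
  include abc::xyz
}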
mv
/usr/lib/ruby/site_ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb
/usr/lib/ruby/site_ruby/1.8/puppet/util/instrumentation/listeners/process_name.rb.bck
works for me too
facter puppetversion rubyversion virtual lsbdistdescription
lsbdistdescription => CentOS release 5.7 (Final)
Got it. Solved the problem.
On Feb 3, 1:19 pm, sateesh wrote:
> Hi,
>
> How to include the class of module in the manifests. Please find below
> the details of file location. The puppet location is /etc/puppet/
>
> In that location I have manifests and modules folder.
> In modules, I have module
Hi,
On 02/01/2012 02:01 AM, Richard K. Miller wrote:
> AppArmor is
> on. Could that be a factor?
I certainly believe so. It should be investigated.
Regards
Hi,
On 02/01/2012 06:01 AM, sateesh wrote:
> Hi,
>
> Can we get the list of IPs from the server where the specified module is
> installed? There is search functionality in Chef to do this. Is
> there any way in Puppet?
not that I'm aware of, no.
There are some simple workarounds you might con
Hi,
On 02/03/2012 06:38 AM, sateesh wrote:
> The next step is that I now need to install some modules, which are located
> on the server, onto the newly created IP. I think in Chef the server will
> copy them onto the new VM in the /tmp location and install the modules on that
> IP. After installing it will delete t
Hello,
Consider the following interfaces and addresses:
# ip a l
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth
Hi Henrik,
looks great! Discovered lots of warnings and even an error (which Puppet did
not complain about) after upgrading. Fixed all my Puppet code now and I think
this is another good step in improving my overall code quality.
Thanks for the work. Looking forward to new releases.
Bernd
Hmmm, that is what I have been doing, but for some reason it keeps
messing up. Do I have to do an include or something for the puppet-lvm
module? I mean, I already have an import statement for it in my
sites.pp.
On Feb 2, 4:51 pm, jcbollinger wrote:
> On Feb 2, 1:53 pm, Luke wrote:
Hi,
I have installed Puppet using gem. The version it installs is 2.7.10.
I want to set up the client/server architecture. So when I am running
"puppet agent --server --test", it is giving me the
error:
"err: Could not request certificate: Connection refused - connect(2)
Exiting; failed to
Hi,
On 02/03/2012 01:33 PM, Luke wrote:
> Hmmm that is what I have been doing but for some reason it keeps
> messing up. Do I have to do an include or something for the puppet-lvm
> module? I mean I already have an import statement for it in my
> sites.pp.
please share a relevant excerpt from you
On Jan 5, 9:42 pm, Aaron Schaefer wrote:
> On Jan 5, 6:19 pm, Adam Gibbins wrote:
>
> > The recommendation for mirroring debian repositories is generally to use
> > something like apt-mirror or apt-cacher-ng. Is there a reason you can't do
> > this?
>
> Our local mirror server is running CentOS,
Hello,
I would like to use pip to install some python modules. The problem is
that I want to keep all my stuff isolated.
I saw this https://projects.puppetlabs.com/issues/7286 about
virtualenv support.
Anyone knows at what stage that is? Any other solution for virtualenv
and pip?
Best regards,
I am attempting to implement Puppet for a block of 20 servers. I have set
it up and used it for 2 months now. All of a sudden I get SSL problems.
Here is what I've done:
Server - removed the server SSL directory completely (/var/lib/puppetmaster/ssl).
Server(Client) - removed the client SSL directory completely (/
How, please?
“Sometimes I think the surest sign that intelligent life exists elsewhere in
the universe is that none of it has tried to contact us.”
Bill Watterson (Calvin & Hobbes)
- sateesh wrote:
> Got it. Solved the problem.
>
> On Feb 3, 1:19 pm, sateesh wrote:
> > Hi,
> >
> > How to
Hi,
site.pp
manifests]$ cat site.pp
import "nodes"
import "modules"

manifests]$ cat modules.pp
import "lvm"

manifests]$ cat nodes.pp
node 'luketest.mgmt.mydomain.local' {
  lvm::volume { 'setvolume':
    vg     => 'myvg',
    pv     => '/dev/sdb',
    fstype => 'ext3
Hi,
On 02/03/2012 02:37 PM, Luke wrote:
> When I name the module lvm, it applies the config but doesn't do
> anything.
> info: Caching catalog for luketest.mgmt.mydomain.local
> info: Applying configuration version '1328241141'
> notice: Finished catalog run in 0.03 seconds
please repeat with --evaltrace
Hi,
On 02/03/2012 01:48 PM, sateesh wrote:
> "err: Could not request certificate: Connection refused - connect(2)
are you absolutely certain the default port is open on your master? It
sure doesn't look like it.
> Exiting; failed to retrieve certificate and waitforcert is disabled"
If you don't
[root@luketest ~]# puppet agent --test --evaltrace
info: Caching catalog for luketest.mgmt.mydomain.local
info: Applying configuration version '1328245279'
info: /Schedule[puppet]: Evaluated in 0.00 seconds
info: /Filebucket[puppet]: Evaluated in 0.00 seconds
info: /Schedule[never]: Evaluated in 0.00
This is the module here that I am trying to get working:
https://github.com/puppetlabs/puppet-lvm
On Feb 3, 9:51 am, Luke wrote:
> [root@luketest ~]# puppet agent --test --evaltrace
> info: Caching catalog for luketest.mgmt.mydomain.local
> info: Applying configuration version '1328245279'
> inf
On 02/03/2012 02:51 PM, Luke wrote:
> info: /Whit[/dev/sdb]: Evaluated in 0.00 seconds
> info: /Whit[myvg]: Evaluated in 0.00 seconds
> info: /Whit[mylv]: Evaluated in 0.00 seconds
> info: /Whit[/dev/myvg/mylv]: Evaluated in 0.00 seconds
Looks like it's working all right. But seeing as the LV exists a
Hi Felix,
That's the thing: the LV doesn't exist, so I don't know why it is acting
like it does:
Disk /dev/sda: 10.7 GB, 10737418240 bytes
255 heads, 63 sectors/track, 1305 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev
It doesn't matter what I put in, it does the same thing:
debug: /Stage[main]//Node[luketest.mgmt.mydomain.local]/
Lvm::Volume[setvolume]/Filesystem[/dev/awesomevg/awesomename]/require:
requires Logical_volume[awesomename]
debug: /Stage[main]//Node[luketest.mgmt.mydomain.local]/
Lvm::Volume[setvolum
Hi,
On 02/03/2012 03:04 PM, Luke wrote:
> Disk /dev/sdb: 8589 MB, 8589934592 bytes
> 255 heads, 63 sectors/track, 1044 cylinders
> Units = cylinders of 16065 * 512 = 8225280 bytes
>
> Disk /dev/sdb doesn't contain a valid partition table
>
> Its not even doing anything with /dev/sdb.
What makes
These modules work fine for what I need...
define virtualenv($ensure = 'present', $executable=false,
$relocatable=false,
$extra_search_dir=false, $site_packages=false,
$never_download=false, $prompt=false, $user='root') {
$executable_opt = $executable ? { false => '', default => $executab
Could you please provide an example of using the modules to install
something?
Luis
On Feb 3, 3:31 pm, Michael Cumings
wrote:
> These modules work fine for what I need...
>
> define virtualenv($ensure = 'present', $executable=false,
> $relocatable=false,
> $extra_search_dir=false, $site_package
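(Not taken from Michael's post - just a hedged sketch of how the define above
might be invoked, assuming the resource title is the virtualenv path; the path
and user below are placeholders:)

virtualenv { '/opt/venvs/mytool':
  ensure => present,
  user   => 'deploy',
}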
On Feb 2, 4:42 pm, Jo Rhett wrote:
> >> On Jan 26, 2012, at 6:19 AM, jcbollinger wrote:
> >>> For the most part, I think this reflects the difficulty of the
> >>> underlying problem more than any inadequacy of Puppet. If multiple
> >>> independent subsystems place different demands on the same
Here you go
[root@luketest ~]# pvdisplay
/dev/hdc: open failed: No medium found
--- Physical volume ---
PV Name /dev/sda2
VG Name VolGroup00
PV Size 9.90 GB / not usable 22.76 MB
Allocatable yes (but full)
PE Size (KByte) 32768
On Feb 3, 7:37 am, Luke wrote:
> Hi,
>
> site.pp
>
> manifests]$ cat site.pp
> import "nodes"
That import is appropriate.
> import "modules"
That one should not be needed.
>
>
> manifests]$ cat modules.pp
> import "lvm"
That whole manifest should not be needed.
> -
Hi John,
I did exactly what you said and got the same result:
debug: Creating default schedules
debug: Loaded state in 0.00 seconds
debug: /Stage[main]//Node[luketest.mgmt.mydomain.local]/
Lvm::Volume[setvolume]/Volume_group[myvg]/require: requires
Physical_volume[/dev/sdb]
debug: /Stage[main]//N
On 02/03/2012 03:45 PM, Luke wrote:
> root@luketest ~]# pvdisplay
Ugh, why do people insist on using this instead of pvs?
Anyway, you're right, the LV isn't there. Neither is the PV.
Are the PV and VG defined somewhere in your manifest? They should be. I
have no experience with this module, but
Hi folks,
I noticed an interesting problem with the nagios_* providers, especially on
Debian. Besides writing to the wrong file (I fixed that issue), I've
noticed there is a umask issue where the config files end up being owned by
root with perms 0640. This causes Nagios to spit milk out of its nose
The trick is to export a file resource which matches the target
parameter of the Nagios_ type. The exported file resource should
require the Nagios_ resource so everything works upon collection. I'm
actually writing an article for Linux Journal on this exact issue :)
Adam
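(For illustration only - a rough sketch of the pattern described above; the
check, paths, owner and tag are placeholders, not taken from the thread:)

# On a monitored node: export the Nagios resource and a file resource
# that matches its target, so owner/group/mode get managed as well.
@@nagios_service { "check_ping_${::fqdn}":
  check_command       => 'check_ping!100.0,20%!500.0,60%',
  host_name           => $::fqdn,
  service_description => 'PING',
  use                 => 'generic-service',
  target              => "/etc/nagios3/conf.d/${::fqdn}_services.cfg",
}

@@file { "/etc/nagios3/conf.d/${::fqdn}_services.cfg":
  ensure  => file,
  owner   => 'nagios',
  group   => 'nagios',
  mode    => '0644',
  tag     => 'nagios_config',
  require => Nagios_service["check_ping_${::fqdn}"],
}

# On the Nagios server: collect both.
Nagios_service <<| |>>
File <<| tag == 'nagios_config' |>>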
On 12-02-03 10:54 AM, Peter Berghold wrote:
> I noticed an interesting problem with the nagios_* providers especially
> in Debian. Besides writing to the wrong file (I fixed that issue) I've
> noticed there is a umask issue where the config files end up being owned
> by root with perms 0640. This
Just started a rollout of CentOS 6.x across our Puppet deployment
(100-odd servers).
What fact would people suggest I use to distinguish 5.x from 6.x
(quite a lot of subsystems are different between major releases)?
lsb* facts don't seem to be present on CentOS 6 - is this an EPEL bug,
or have th
This sort of thing works satisfactorily, if cumbersomely, for me:

case $::operatingsystemrelease {
  /^5/: {
  }
  /^6/: {
  }
}
On Fri, Feb 03, 2012 at 04:57:34PM +, Dick Davies wrote:
> Just started a rollout of centos 6.x across our Puppet deployment
> (100-odd servers).
>
> what fact would peopl
On Fri, Feb 3, 2012 at 8:57 AM, Dick Davies wrote:
> Just started a rollout of centos 6.x across our Puppet deployment
> (100-odd servers).
>
> what fact would people suggest I use to distinguish 5.x from 6.x
> (quite a lot of subsystems are different between major releases)?
>
They'll be there i
You'll need to add the redhat-lsb package to your kickstart system and/
or just install it on your current systems. That's the package facter
uses to determine the lsb facts.
On Feb 3, 8:57 am, Dick Davies wrote:
> Just started a rollout of centos 6.x across our Puppet deployment
> (100-odd serve
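(A minimal sketch of managing that with Puppet itself, using the package name
from the post above; whether and how you scope it to EL hosts is up to you:)

# ensure the package that provides the lsb* facts is installed
package { 'redhat-lsb':
  ensure => installed,
}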
The PV and VG are not defined anywhere else that I can see.
On Feb 3, 11:40 am, Felix Frank
wrote:
> On 02/03/2012 03:45 PM, Luke wrote:
>
> > root@luketest ~]# pvdisplay
>
> Ugh, why do people insist on using this instead of pvs?
>
> Anyway, you're right, the LV isn't there. Neither is the PV.
>
Just to have it out in the open:
Here is the manifest from the lvm module
define lvm::volume($vg, $pv, $fstype = undef, $size = undef, $ensure)
{
  case $ensure {
    #
    # Clean up the whole chain.
    #
    cleaned: {
      # This may only need to exist once
      if ! defined(Physical_volume[$pv]) {
        physical_volume { $pv: ensure => present }
      }
On 02/03/2012 06:37 PM, Luke wrote:
> if ! defined(Physical_volume[$pv]) {
> physical_volume { $pv: ensure => present }
> }
Ah, dreadful ;-)
But there goes that theory - the PV and VG are implicitly created, so
the module really *should* do the right thing.
So the issue is pr
Oh well. Maybe this lvm module doesn't like CentOS or something :(
Thanks for all your help, Felix. If anyone else has any ideas or better
lvm-type modules, please drop a line.
On Feb 3, 1:42 pm, Felix Frank
wrote:
> On 02/03/2012 06:37 PM, Luke wrote:
>
> > if ! defined(Physical_volume[$pv]
Hi Nan,
Thanks much for your help. Does it mean that every time I need to create
a custom resource, my Ruby file must be present at /var/opt/lib/pe-
puppet/lib/puppet/type and /opt/puppet/share/puppet/modules/stdlib/lib/
puppet/type/ilegra.rb?
I did a test; when I keep my file at /opt/puppet/share/pu
I'd like to start using exported resources, so I see I need to turn on
stored configurations. I'm already running the inventory service, and
it looks like there is a certain degree of overlap between what the
inventory db is storing and what stored configs is storing - at least
as far as host facts
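(For context, the classic 2.7-era switch lives in the master's puppet.conf; a
minimal sketch assuming the ActiveRecord/database backend, with placeholder
credentials:)

[master]
  storeconfigs = true
  dbadapter    = mysql      # or sqlite3/postgresql
  dbname       = puppet
  dbuser       = puppet
  dbpassword   = changeme
  dbserver     = localhost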
On Fri, Feb 3, 2012 at 11:38, George wrote:
> I'd like to start using exported resources, so I see I need to turn on
> stored configurations. I'm already running the inventory service, and
> it looks like there is a certain degree of overlap between what the
> inventory db is storing and what sto
On Fri, Feb 3, 2012 at 04:18, lfrodrigues wrote:
> I would like to use pip to install some python modules. The problem is
> that I want to keep all my stuff isolated.
>
> I saw this https://projects.puppetlabs.com/issues/7286 about
> virtualenv support.
>
> Anyone knows at what stage that is? Any
On Fri, Feb 3, 2012 at 04:05, Jure Pečar wrote:
> Consider the following interfaces and addresses:
>
> # ip a l
> 1: lo: mtu 16436 qdisc noqueue state UNKNOWN
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> inet6 ::1/128 scope host
> vali
Hi,
How can I create two node files with different hostnames, i.e. nodesSD.pp &
nodesCTO.pp, and import those nodes
in site.pp with different classes (class1 & class2), such that
class1 goes with nodesSD.pp and
class2 goes with nodesCTO.pp?
Both the classes have different configurations & need to be run with
different
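(One possible layout, sketched with placeholder hostnames; the file and class
names come from the question above, everything else is assumed:)

# site.pp
import "nodesSD"
import "nodesCTO"

# nodesSD.pp
node 'hostSD.example.com' {
  include class1
}

# nodesCTO.pp
node 'hostCTO.example.com' {
  include class2
}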
I'm new to Puppet and would like to read variables into my manifest
from the Puppet Dashboard.
I would like to create Solaris zones from the dashboard. My scenario
is:
From the dashboard, add a node with a parameter set for the IP.
I know that /var/log/pe-puppet-dashboard/production.log has the
infor
Under Solaris: when a package whose presence I wish to ensure
does not exist, apply fails with the following message:
err: /Stage[main]/Lsof::Lsof::Solaris/Package[SMClsof]: Could not
evaluate: Unable to get information about package SMClsof because of:
["ERROR: information for \"SMClsof\" w
On 01/20/2012 12:51 PM, James Lee wrote:
> I've narrowed down what is triggering this problem, but I still do not
> know how to fix it.
...
>> acad ~ # puppet agent --test --environment=jameslee --no-report --noop
>> /opt/csw/lib/ruby/gems/1.8/gems/puppet-2.6.12/lib/puppet/provider/package/apt.rb: