Hi!

Thank you again for the detailed answer.

> One of the things that sometimes gives Puppet newbies trouble is that
> it employs a declarative model rather than an imperative one.  In
> other words, it's an expert system, not a scripting language.  The
> Puppet language is all about describing the state that your system
> should have, and Puppet later uses that description to ensure that
> your system is in that state.  This is very powerful and flexible, but
> sometimes confusing, too.  It is the core reason why you cannot
> declare duplicate resources.

I think I got all that, but I have one problem: even spelling it all
out won't work. Or, to put it better: I don't know how to.
On top of that, we're changing our infrastructure quite frequently at
the moment, so even if I got it working that way I'd have to rewrite
quite a lot of code every time something changes.

What I don't understand:
I have an array $disks = ["/a", "/b"]

And I can use that as the title of resources to define one resource
for each member of the array. So far so good.

file { $disks: }

works as expected and is expanded to:

file { "/a": }
file { "/b": }

These are two distinct titles, so both resources work fine. But those
aren't the paths I need to manage.

But what I want is:
file { "${disks}/foo": }

which I would like to be expanded to:
file { "/a/foo": }
file { "/b/foo": }

What really happens:
file { "/a/b/foo": }

It's obvious to me why this happens (the whole array gets
interpolated into a single string), but I'd still love a way to do
what I want, because I think that would solve all my problems (I
might be wrong here, obviously).
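
The closest thing I can imagine is wrapping the file resource in a
defined type, so that the path is built once per title. A rough,
untested sketch (hadoop::disk_dir is just a name I made up):

define hadoop::disk_dir {
  # $name is one disk mount point from the array, e.g. "/a"
  file { "${name}/foo":
    ensure => directory,
  }
}

hadoop::disk_dir { $disks: }

If I understand defines correctly, that should expand to "/a/foo" and
"/b/foo", but I don't know whether that's the idiomatic way, or how
to combine it with the virtual resources I describe below.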

As I said earlier: I'm at a point where I don't know how to solve my
problem even by being as verbose as I can.
I've just pushed my current (non-working) configuration to GitHub:
https://github.com/lfrancke/gbif-puppet/blob/master/modules/hadoop/manifests/init.pp#L22-58

I've spelled out all the main directories there as virtual resources,
but I have no idea how to go on from there. Some machines need only
two of those directories, some need all of them. The datanode[1] and
namenode[2] classes need a "/dfs" subdirectory under each of them, in
two different configurations again (2 vs. 6 disks), and the
tasktracker[3] and jobtracker[4] classes need a "/mapreduce" directory
in there. I made the parent directories virtual because all of those
classes need them. But how do I realize them? I can't just list the
realizations in the classes, because they differ depending on the
node.
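
To illustrate with made-up paths: the virtual declarations look
roughly like

@file { ["/mnt/disk1", "/mnt/disk2"]:
  ensure => directory,
}

and a naive realization in, say, the datanode class would be

realize(File["/mnt/disk1"], File["/mnt/disk2"])

but that hard-codes the disk list in the class, and the disk list is
exactly what differs between nodes.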

If you could take another look at it, that would be great, but I'd
understand if you have better things to do with your time. For now
I've fallen back to creating that directory structure manually with
Fabric, but I'd obviously like to get this working in Puppet to avoid
the manual step.

I'm afraid I'm just a hopeless case ;-)

Cheers,
Lars

[1] https://github.com/lfrancke/gbif-puppet/blob/master/modules/hadoop/manifests/datanode.pp
[2] https://github.com/lfrancke/gbif-puppet/blob/master/modules/hadoop/manifests/namenode.pp
[3] https://github.com/lfrancke/gbif-puppet/blob/master/modules/hadoop/manifests/tasktracker.pp
[4] https://github.com/lfrancke/gbif-puppet/blob/master/modules/hadoop/manifests/jobtracker.pp
