On 9/17/2010 11:03 PM, Luke Kanies wrote:
On Sep 17, 2010, at 4:41 AM, David Schmitt wrote:
On 9/16/2010 1:26 AM, Luke Kanies wrote:
Hi all,
I've just stuck my proposal for a Catalog Service (which we've
been bandying about internally for a while, and which I've been
thinking about even longer) on the wiki:
http://projects.puppetlabs.com/projects/puppet/wiki/CatalogServiceArchitecture
Interesting read :-) Here are a few notes:
I'll respond to the notes as necessary and update the document
(probably on this next flight) as appropriate, but separately.
I misread your notes at first - this first section is really about
being very clear as to the steps necessary to create this, right?
I.e., it's an explicit description of the work necessary to implement
the document's goals?
Yes. The document itself was lacking a bit of structure in this respect,
and I felt it needed a clear statement of the consequences.
* the document needs a list of proposed functional changes; afaict:
  * insert a RESTful API between the puppetmaster and Catalog storage,
    thereby exposing a proper interface
  * decouple compilation and catalog serving completely
  * btw, using futures, one could compile a "template" catalog and only
    insert the changing fact values quickly?
This could be done, but I doubt that futures (which are a function in
the parser) would be the mechanism. I certainly wouldn't want to tie
this to futures, though.
An implementation detail. I was just brainstorming here.
* enrich the search API to cover all resources and complex queries (a
  sketch of what such a query might look like follows this list)
* implement additional backends:
  * simple, no external dependencies
  * massively scalable, using some NoSQL solution
* I'm wondering how the flat-file-based backend will perform in the
  face of 100 systems. My intuition says that traditional SQL storage
  will remain a viable (performance vs. configuration) solution in
  this space.
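For instance, something along these lines; the resource type, tag, and
parameter names are invented for illustration:

class monitoring::stack_billing {
  # hypothetical: collect every exported HTTP check that belongs to
  # the "billing" application stack
  Monitoring::Service <<| tag == 'stack_billing' and command == 'check_http' |>>
}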
I expect a file back end to perform poorly with 100 systems - I think
30 is a reasonable limit. I agree that SQL will continue to be
viable, and quite possibly a better long-term direction, at least for
the next few years.
* re directly exposing the back-end interface: that's only an artifact
  of a badly designed API. If this really becomes a problem, perhaps
  building a more complex query, e.g. looking for multiple resource
  types, might be viable to avoid strong coupling to the backend (see
  the sketch after this list)
* I'm reminded of a trick I used in the early days to emulate a
  Catalog query in the main scope:
case $hostname {
  'monitoring': {
    # apply monitoring stuff
  }
  'webserver': {
    # install webserver
  }
}
Today it looks like an awful hack, but the underlying principle
might prove interesting, even if only to strengthen the case for
Catalog storage by discarding it.
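Picking up the multi-type query idea from the first note: since a
collector only matches a single resource type, such a query could be
emulated with one collector per type; the types and tag here are just
examples:

class monitoring::collector {
  # hypothetical: one collector per resource type, so manifests never
  # touch the back-end's own query language directly
  Nagios_service <<| tag == 'web' |>>
  Nagios_host    <<| tag == 'web' |>>
}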
Yep, I did something very similar with Cfengine in 2003, and that
work is in large part what drove me to write exported resources into
Puppet. It works just as well in Puppet as it did in Cfengine,
though, and in some ways it's superior. Note that I would tend to
branch this by class membership rather than hostname.
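A rough sketch of what I mean, assuming the node includes a
'webserver' class (class names double as tags, so tagged() can test
for membership):

# sketch: branch on class membership instead of on $hostname
if tagged('webserver') {
  # apply webserver-specific monitoring here
}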
In particular, it gives you the option of having an application-stack
view; i.e., you can effectively say that a host is both a member of a
given application stack and also performs the database function,
and from there Puppet can use conditionals to figure out all of the
details. That's not always as visible using exported resources,
although of course there are other benefits.
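A sketch of that stack-plus-role idea using a parameterized class; the
stack name, role names, and included classes are all invented for
illustration:

# hypothetical: a host declares both its stack membership and its
# role, and the class's conditionals work out the details
class stack::billing($role) {
  if $role == 'database' {
    include postgresql
  }
  if $role == 'webserver' {
    include apache
  }
}

node 'db01.example.com' {
  class { 'stack::billing': role => 'database' }
}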
To contrast this with a modern implementation:
class monitoring {
  Monitoring::Service <<| |>>
}

define monitoring::service::http() {
  @@monitoring::service { "http_${fqdn}_${name}":
    command => "check_http",
    target  => $fqdn,
    args    => $port,
  }
}

class webserver {
  monitoring::service::http { $port: }
}
The main difference between the two solutions is the dataflow. In
the first solution, different resources are created from the same
configuration, depending on the environment. In the latter version,
compiling one manifest alters the environment for the other nodes.
Suddenly that sounds so wrong :) If all facts/nodes are available
on the server, shouldn't the puppetmaster be able to compile all
Catalogs in one step? Is the next manifest legal? Discuss!
Yes, it should. Well, one step might be a stretch, but yeah, it
should. I envision a catalog service dishing catalogs to clients,
and a pool of compiler processes that pull compile requests off of a
queue and compile as necessary. The compile requests can be created
by the client -- which would be a normal model -- or by the
dashboard, or as part of a commit hook in git, or whatever you want.
See further below for my vision. Beware, though, that I've been having
delusions of grandeur lately ;-)
node A { y { 1: } }
node B { x { 1: } }

define y() {
  $next = $name + 1
  @@x { $next: }
  Y<<||>>
}

define x() {
  $next = $name + 1
  @@y { $next: }
  X<<||>>
}
If I'm not completely off, this will create lots and lots of
resources as A and B are evaluated alternately.
This might quite possibly destroy the universe if resource collection
didn't ignore resources exported by the compiling host. Given that
it does, though, you'd likely just get flapping and some very pissed
coworkers.
I don't think so:
@@file{"/tmp/foo": ensure=>present; }
File<<||>>
will create a "/tmp/foo" on the applying host. But then again, I don't
know the internals of the code...
The last part might be a little bit off-topic, but I think it does
pertain to the whole "all-nodes-are-part-of-the-system" thinking
that is the motivation for Catalog storage/queries.
Yeah, that's a good point - one of the big goals here is to lose the
'nodes sit alone' perspective and really make them members of a
larger whole.
Mentally combining external node classification, fact storage, and
offline-compile capability really made that idea click for me. It
leads to a mental model with a single step from definition to a
Catalog for the whole system, as opposed to a Catalog for a single node.
The last missing piece would be a puppetrun orchestrator that could take
this system-wide Catalog, toposort it, and run it on the nodes as
necessary. Does anyone else see the connection to parallelizing and
grouping resource application in puppetd?
Best Regards, David
--
dasz.at OG Tel: +43 (0)664 2602670 Web: http://dasz.at
Klosterneuburg UID: ATU64260999
FB-Nr.: FN 309285 g FB-Gericht: LG Korneuburg