On Thu, Dec 12, 2013 at 2:08 AM, Markus Burger <markus.bur...@uni-ak.ac.at> wrote:
> Hi,
>
> On 11-12-2013 10:51:08, Dan Bode wrote:
> > Hi all,
> >
> > I had a bit of time to research the existing device code to see if I can
> > use it for an integration with two specific use cases:
> >
> > 1. discovery/inventory - access hardware inventory and store it somewhere
> > where it can be retrieved.
> >
> > So far, device supports this use case:
> > - specify a list of device endpoints in device.conf
> > - run puppet device to get their facts to serve as inventory (although
> >   puppet device looks like it gets facts and requests catalogs, I will
> >   probably call the facts method directly to just get the facts)
> > - have the front end query these facts from PuppetDB
> >
> > 2. management - manage the process of bringing up a cluster from scratch
> >
> > This is the use case where puppet device is problematic.
> >
> > In this use case, an external system needs to specify how a collection of
> > resources should be configured. The types of these resources are
> > heterogeneous, for example:
> >
> > - Server
> > - Storage
> > - Network
> > - add Port
> > - create server
> >
> > These hardware configuration rules (and their dependencies) map pretty
> > cleanly to the Puppet DSL and the Resource/Graph model, where a manifest
> > represents multiple devices and multiple endpoints.
> >
> > I had the following issues with puppet device for this use case:
> >
> > 1. It iterates through the endpoints and configures them one at a time.
> >
> > This is probably the biggest barrier. I need to keep track of a
> > collection of resources that target multiple endpoints and apply them in
> > a certain order. Looking at the device code, it seems to just iterate
> > through the endpoints in device.conf and configure them one at a time.
> I currently use a simple solution to work around this problem, where I
> create the device.conf on the fly through an external process, specify my
> devices and their dependencies in a YAML file, run them in order, and
> just check the exit code.
>
> It looks something like this:
>
> ---
> defaults:
>   scheme: sshios
>   port: 22
>   userinfo: foo:bar
>   query: crypt=true
>   cmd: /usr/bin/puppet device --verbose --environment=network --detailed-exit-codes --deviceconfig={{DEVCFG}} || [ $? -eq 2 ]
>
> devices:
>   dc1:
>     sw-dc1-01.foo.bar:
>       deps:
>         - *
>     sw-dc1-02.foo.bar:
>     sw-dc1-03.foo.bar:
>       deps:
>         - sw-dc1-02.foo.bar
>     str-dc1-01.foo.bar:
>       scheme: netapp
>       deps:
>         - sw-dc1-01.foo.bar

Just to clarify: this is letting you specify the order in which resources
are configured on your devices? This looks like it only allows you to
specify order between types of things (and not between resources). It also
looks like you are still grouping resources based on how a certain type
maps to a device? (So in this example, if you had a workflow that needed to
configure 10 resources against 10 endpoints, this would involve updating
the 10 node definitions in your site manifest?)

> > I spent some time thinking about the current device command and how I
> > might use it to configure workflows across multiple endpoints:
> > - on the puppet master, keep a queue (or list) for each endpoint that
> >   needs to be configured
> > - have an external process (the dispatcher) that keeps track of the
> >   configuration that needs to be applied (along with its endpoints) and
> >   stores the resources that represent that configuration into the
> >   correct queue for its endpoint.
> > - have an ENC that checks the certname of a device when it checks in,
> >   maps it to a queue, and clears all entries from that queue (for the
> >   device to apply)
> > - if the dispatcher keeps track of which resources it put onto which
> >   queue, it can track the reports for those devices to know when its
> >   entire job is completed.
> >
> > The above explanation is the best way I could think of to use the
> > existing device, but it is cumbersome enough that it warrants not using
> > the device model.
> >
> > 2. It does not allow for the specification of dependencies between
> > multiple device endpoints. It only allows for certain endpoints to be
> > processed in a certain order.
> >
> > This is pretty much the same as #1, but worth mentioning separately.
> >
> > 3. It invents its own command line for doing things (it does not cleanly
> > interoperate with puppet resource, puppet apply, or puppet agent, which
> > represents a major loss of functionality).
> >
> > 4. Management of device.conf
> >
> > The existence of device.conf creates its own management issues. You need
> > to assign a single node to a single device, you have to manage the
> > process for getting the credentials to that device, and you have to
> > figure out how many devices (and which devices) go to which nodes as you
> > scale out to a large number of device endpoints.
> >
> > *Solution:*
> >
> > The transport model (as created by Nan Liu) seems to get around the
> > issues mentioned above and would allow a pretty clean integration path.
> >
> > For folks not familiar with the transport model: it uses regular types
> > and providers that accept a parameter called transport, which can be
> > used to indicate that the resource should be applied against some
> > remote endpoint.
> >
> > For example:
> >
> > transport { 'ssh':
> >   url      => $some_url,
> >   password => 'some_password',
> > }
> >
> > port { 'some_port':
> >   transport => Transport['ssh'],
> > }
> >
> > This will work perfectly for my use case.
> Can you point me to a thread where this was discussed?

Maybe this has never been discussed in public. I happen to have worked next
to Nan for a while.

> I can only see an advantage of the proposed model for certain
> situations / device types, but not for the traditional use case.

What is the traditional use case? Management of individual devices as
opposed to workflows? Is that what people want? Is it how they manage
devices?

I'm not even advocating that it should be the preferred model. I'm just
outlining my use case, showing how the current model does not support it,
mentioning another model that does work for my use case, and pointing out
that that model is incompatible with the current one. The ideal outcome
from my perspective would be for the device model to support both use cases
(both passing in a transport as a parameter and relying on a local
configuration file).

That being said, I think the transport model does have a few advantages:
- one puppet certificate can be used to manage multiple devices (you don't
  have to deal with cert management for every device)
- the transport can be serialized in through Puppet (no need for a separate
  process to manage how the device.conf is created)
- it allows for the creation of workflows that use multiple device
  endpoints

> Thanks,
> Markus
>
> --
> You received this message because you are subscribed to the Google Groups
> "Puppet Users" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to puppet-users+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/puppet-users/20131212080840.GA30091%40nox-arch.uni-ak.ac.at.
> For more options, visit https://groups.google.com/groups/opt_out.