A very clear use case for me is in Puppet's management of infrastructure.

Take the case of Puppet being used to automate the creation of a database 
server. The command to initiate the creation takes around 2 seconds to 
complete, but the process of actually creating the database can take 5 
minutes. This poses a problem when you want to write a suitable provider.

You can treat the resource as created as soon as the creation command 
returns, which is nice and quick. However, you then cannot have anything 
else in your Puppet manifest that depends on this database server actually 
existing and being accessible.
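
For illustration, a rough sketch of what such a provider might look like is 
below. The 'db_server' type name and the 'dbctl' CLI (and its subcommands) 
are made up here purely for the example; the real type and tooling would 
differ.

    Puppet::Type.type(:db_server).provide(:cli) do
      desc "Illustrative only: manage database servers via a hypothetical 'dbctl' CLI."

      commands :dbctl => 'dbctl'

      def exists?
        # Assumption: 'dbctl status <name>' exits non-zero for an unknown server.
        dbctl('status', resource[:name])
        true
      rescue Puppet::ExecutionFailure
        false
      end

      def create
        # Returns as soon as the creation command exits (~2 seconds);
        # the server itself is not actually usable for another ~5 minutes.
        dbctl('create', resource[:name])
      end

      def destroy
        dbctl('delete', resource[:name])
      end
    end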

Or you could block on the creation, polling every 5 seconds to confirm the 
current state of the creation task. This is robust, and ensures that once 
the creation task completes the database is available for other Puppet 
actions. That is fine if you are only managing one database server, but 
what if there are 5 or 6? Their creation happens sequentially, which makes 
for a very long-running Puppet run.
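
The blocking variant would only differ in the create method, along these 
lines (again, the 'dbctl' command and its 'available' status string are 
assumptions for the sake of the sketch):

      def create
        dbctl('create', resource[:name])

        # Poll every 5 seconds until the backend reports the server as usable.
        # Robust, but each db_server resource blocks the whole run for ~5 minutes.
        sleep 5 until dbctl('status', resource[:name]).strip == 'available'
      end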

The best outcome would be a way for a provider to declare that all 
instances of its type are independent, so that Puppet could run X of them 
in parallel.
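
Outside of Puppet, the behaviour I am after is roughly the following (plain 
Ruby, same hypothetical 'dbctl', placeholder server names; threads are 
enough here because the waiting is all I/O):

    servers = %w[db1 db2 db3 db4 db5]

    threads = servers.map do |name|
      Thread.new do
        system('dbctl', 'create', name)                            # ~2 seconds each
        sleep 5 until `dbctl status #{name}`.strip == 'available'  # ~5 min, overlapped
      end
    end

    threads.each(&:join)   # total wall time ~5 minutes, not 5 minutes per server

Something equivalent inside Puppet would need the provider (or the agent) 
to know that these db_server resources have no ordering relationship with 
each other.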

Kind regards,

Michael

On Tuesday, February 18, 2014 at 5:02:11 PM UTC, Jon Forrest wrote:
>
> On Mon, Feb 17, 2014 at 6:20 AM, jcbollinger <john.bo...@stjude.org> wrote: 
>
> > Well, I think the question that killed this thread the first time boils 
> > down to "would it really?".  The speculation at the time was that parallel 
> > execution would produce disappointing wall-time gains, based on the 
> > assertion that the catalog application process is largely I/O bound.  There 
> > were also some assertions that Ruby doesn't do shared-memory parallelism 
> > very well.  Nobody reported any actual analysis of any of that, though. 
>
> Right. Without such analysis it's hard to know if this idea is worth 
> following up on today. 
>
> But, one thing to keep in mind is that systems are always changing. An I/O 
> bound system of today might not be I/O bound tomorrow as technological 
> improvements appear. Having a computer with available resources that it is 
> unable to apply to a Puppet run (or anything else) is wasteful. In time, 
> the lack of client parallelization could be a competitive weakness as 
> Puppet competes in the marketplace. (I don't know what the status of 
> client parallelization is in the competition right now.) 
>
> Jon Forrest 
>

