Re: [Puppet Users] Puppet ordering: ensure exported resource to be evaluated BEFORE classes

2018-01-09 Thread desertkun
Thank you for explanatory response.

I ended up using puppetdbquery instead of exported resources, so I managed to 
avoid this ordering mess completely.
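For anyone curious, a minimal sketch of that approach, using the query_nodes function from the dalen/puppetdbquery module (the class and fact names here are illustrative, not from the original post):

```puppet
# Ask PuppetDB directly for the hostname fact of any node that declares
# the (hypothetical) mysql::server class, instead of exporting and
# collecting a resource.
$mysql_hosts = query_nodes('Class[mysql::server]', 'hostname')

# query_nodes returns an array of fact values; guard against no matches.
unless empty($mysql_hosts) {
  $mysql_server_host = $mysql_hosts[0]
  notify { "MySQL server is at ${mysql_server_host}": }
}
```

Because the function runs during compilation, the value is available at class and node scope, sidestepping the collection-ordering problem entirely. (empty() comes from puppetlabs-stdlib.)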

On Tuesday, January 9, 2018 at 3:05:33 AM UTC+2, Matthew Kennedy wrote:
>
>  
>
> As far as I understand this is not really possible.  This is because you 
> don’t have control over the order of compilation of resources into the 
> catalog. The compiler first evaluates all classes then moves on and 
> eventually has a step that evaluates all collections and so on until the 
> catalog is complete.
>
> You can work around this by 'bumping scope', which means you need to get the 
> evaluation of getparam() to happen after the collection has occurred and 
> the resources are in the catalog.  This is essentially what is happening in 
> your define example.
>
> By bumping scope I mean that you need to do any evaluations in a define that 
> is 'one scope' more than where the resources will be added to the 
> catalog.  By scope I mean the order in which the compiler will evaluate things.
>
> So when you have a collection in a class, the actual collection will happen 
> after all classes are evaluated by the compiler's initial pass, hence why 
> you see the behavior you do.  So if you create a define, use it in the class, 
> and have that define evaluate the getparam(), then it /might/ work.  Of course 
> you won't be able to get to those values in your class (because it's already 
> been evaluated).  I say /might/ work because you can't be sure that the define 
> you create and the defined type you collected will happen in the right order, 
> i.e. that the collection will happen first.  To be sure, you can create a second 
> define, use that in the first define, and move the evaluation to the new 
> second define.  This is why I call this 'bumping scope'.  At this point the 
> evaluation will happen after the collection's defined types are in the 
> catalog and getparam() will work.  This gets fun when defines make use of 
> defines and the other defines need to reference these 2nd-level (or 3rd-level) 
> defines.  You can keep adding defines to get things to work. 
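A rough sketch of that pattern against the Location example from the original post (the define names are made up for illustration, and as noted, the ordering is still not guaranteed at the first level):

```puppet
# First 'bump': declared from the class, so its body is evaluated
# after the class itself has been evaluated.
define bump_one () {
  # Collect the exported resource.
  Location <<| title == 'mysql-server' |>>
  # Bump one more scope so getparam() runs even later.
  bump_two { $title: }
}

# Second 'bump': by the time this body is evaluated, the collected
# Location should already be in the catalog.
define bump_two () {
  $mysql_server_host = getparam(Location['mysql-server'], 'host')
  notify { "Example via bumped scope: ${mysql_server_host}": }
}

class class_example () {
  bump_one { 'mysql-lookup': }
}
```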
>
> Now this looks ugly, and it is, and I feel bad that I've done this, but it 
> works well and has predictable results. 
>
> Finally, the only way I think this can be fixed is if Puppet went to a 
> multipass compiler that would 're-evaluate' resources if evaluations 
> occurred that reference those resources or their parameters.  This would 
> slow down performance quite a bit and have almost no benefit for most 
> people.
>
> From: desertkun 
> Sent: Monday, January 8, 2018 9:10 AM
> To: Puppet Users 
> Subject: [Puppet Users] Puppet ordering: ensure exported resource to 
> be evaluated BEFORE classes
>
>  
>
> I would like to export "location" information from one node to another, 
> and I've been stuck on this problem for weeks now.
>
>  
>
> In order to achieve that, I use exported resources. Here's the simplified 
> idea:
>
>  
>
> define location ($host) {
>   notify { "Location being defined ${title} -> ${host}": }
> }
>
> # export the location
> node 'a' {
>   @@location { 'mysql-server':
>     host => $hostname,
>   }
> }
>
> node 'b' {
>   # later import it on another node
>   Location <<| title == 'mysql-server' |>>
>
>   # extract the data
>   $mysql_server_host = getparam(Location['mysql-server'], 'host')
> }
>
>  
>
> Pretty much, I use exported resources as some sort of exported "facts".  The 
> problem is that Puppet cannot realize the Location inside classes or at 
> node-level scope, but does it just fine within defined resources:
>
>  
>
> DOES NOT WORK:
>
> class class_example () {
>   # something like realize Location["mysql-server"] doesn't even compile
>   Location <<| title == 'mysql-server' |>>
>   $mysql_server_host = getparam(Location['mysql-server'], 'host')
>   notify { "Example in class: ${mysql_server_host}": }
> }
>
> node 'b' {
>   class { class_example: }
> }
>
>  
>
> Yields: "Notice: Example in class:"
>
>  
>
> WORKS JUST FINE:
>
> define resource_example () {
>   Location <<| title == 'mysql-server' |>>
>   $mysql_server_host = getparam(Location['mysql-server'], 'host')
>   notify { "Example in resource: ${mysql_server_host}": }
> }
>
>  
>
> Yields "Notice: Example in resource: hostname-x"
>
>  
>
> The problem, I guess, is that Puppet evaluates things in two stages: 
> classes first, and then resources, including exported resources. 
> What is also interesting is that the notifies I've put in for debugging 
> yield output like this:
>
>  
>
> Example in class:
>
> Example in resource: hostname-x
>
> Location being defined mysql-server -> hostname-x
>
>  
>
> If I do "require => Location['mysql-server']" for the resource, the 
> ordering shows right, but the result is the same.

[Puppet Users] downgrade two packages at once with 'packages'

2018-01-09 Thread mvogt1


Hello list,

I'm trying to downgrade lightdm with yum on RHEL 7, but the obvious approach 
does not work:


package { [ 'lightdm','lightdm-gobject' ]:
ensure => '1.18.3-1.el7.2',
}

>Execution of '/bin/yum -d 0 -e 0 -y downgrade lightdm-1.18.3-1.el7.2' returned 1: Error:
>Package: lightdm-1.18.3-1.el7.2.x86_64 (el7-backports-74) Requires: lightdm-gobject(x86-64) = 1.18.3-1.el7.2
>[...]
>Execution of '/bin/yum -d 0 -e 0 -y downgrade lightdm-gobject-1.18.3-1.el7.2' returned 1: Error: Package: lightdm-1.25.0-1.el7.x86_64 (@epel)
>Requires: lightdm-gobject(x86-64) = 1.25.0-1.el7

Downgrading lightdm alone fails, because lightdm-gobject must be downgraded 
simultaneously; downgrading lightdm-gobject alone fails, because lightdm must 
be downgraded simultaneously.

On the command line:

>yum downgrade lightdm-1.18.3-1.el7.2 lightdm-gobject-1.18.3-1.el7.2
>
>Package          Arch    Version         Repository  Size
>
>Downgrading:
> lightdm          x86_64  1.18.3-1.el7.2 
> lightdm-gobject  x86_64  1.18.3-1.el7.2 
>
>Transaction Summary
>================================================
>Downgrade  2 Packages

==> works!

Why is it not possible to downgrade both rpm packages at the same time 
from Puppet?

Maybe with a package { install_options => 'at_once' }, or something?
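In the meantime, one possible workaround (a sketch, not a tested recipe; the resource title and version variable are made up here) is to fall back to an exec that hands both packages to yum in a single transaction, since the package type issues one yum command per resource:

```puppet
$ver = '1.18.3-1.el7.2'

exec { 'downgrade-lightdm':
  # Both packages in one yum transaction, so the mutual dependency
  # between lightdm and lightdm-gobject can be satisfied.
  command => "/bin/yum -d 0 -e 0 -y downgrade lightdm-${ver} lightdm-gobject-${ver}",
  # Skip the run once the target version is already installed.
  unless  => "/bin/rpm -q lightdm-${ver}",
  path    => ['/bin', '/usr/bin'],
}
```

You lose the package type's reporting, but the transaction behaves like the working command line above.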


regards,

Martin

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/3352dc22-0d93-438f-b011-8ba318cdf8a1%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] Re: Work-flow for Control-repo in Git

2018-01-09 Thread Lupin Deterd
I was hoping someone from Puppet Labs, with their wealth of customer 
experience, would chime in. I'm surprised that the community doesn't have an 
endorsed pattern for this, like what we have with "Roles and Profiles".

On Thu, Dec 28, 2017 at 4:37 AM, Barney Garrett 
wrote:

> I too have been looking at workflow as part of a move to Puppet 4, and 
> I'm confused to hell and back ;) .
>
> Currently we have a single CA Master and a number of production
> compilation masters and a production puppetdb, there are also a number of
> test compilation masters and a test puppetdb.  We run with essentially 2
> branches, one for production and one for test, each being deployed *without* 
> r10k to their respective compilation servers.  The changes in the 
> production and test branches are synchronized using meldmerge.
>
> Personally I'd like to get away from this model and actually introduce a
> more rigorous peer review process, our software development teams use
> Gerrit and our puppet code is actually in this server we just don't
> currently use the features.  The things I've been reading suggest that
> having long-running branches for what people seem to refer to as application 
> tiers (dev, prod, test etc) is bad practice and you should have a single 
> main branch with possibly a 2nd one for integration testing.  Work is done
> in short lived branches that are merged into the main line when the work is
> complete.  Membership of dev, prod, test etc is determined by facts on the
> node.
>
> There are still some issues I don't really understand though.  It would
> appear that puppetdb doesn't support environments all that well either so
> it would follow that assuming we want to maintain a production and test
> estate for the servers we are maintaining then from a puppet infrastructure
> perspective you'd still have a Puppet CA, a number of production
> compilation servers with a puppetdb, and a number of test compilation
> servers with its own puppetdb.  Either of these could have multiple
> temporary environments deployed on them (hotfix branches on the prod
> compilation servers and feature branches on the test compilation servers)
> but ultimately each main application tier (prod or test) is actually
> deployed from the same branch but from different points in time.
> Unfortunately, it would appear r10k doesn't support looking at a specific 
> tag of the control repo, so I'm not sure how you'd work around that either.
>
> Am I just being naive and foolish or is this kind of workflow possible or
> even desirable ?
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CAL0mBkQpWLYm41dhFzjrFzyp-swWktn-7wH4ipOdHmmhjF-UTA%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] puppet5 upgrade performance issues

2018-01-09 Thread Matthaus Owens
Chris,
To better help you, it would be great to know a few more things about
your installation. First question: are you running puppetserver 5.0.0
or something later in the 5.x series (and is it the same on all
servers)? Second, what version of the puppet-agent are on those
servers? puppetserver 5.1.3 included a fix for
https://tickets.puppetlabs.com/browse/SERVER-1922 which should improve
performance some.
To dig into what may be causing the compiles to be slower, I would
recommend first checking out the client metrics.
https://puppet.com/docs/puppetserver/5.1/http_client_metrics.html has
some details, and I would be interested in the client metrics that
page lists under the /puppet/v3/catalog. They are PuppetDB related
requests, and as that was also upgraded alongside puppetserver it
would be good to eliminate PuppetDB as a contributor. PuppetDB
slowness can show up as slow catalog compiles, which in turn will hold
jrubies for longer and might explain some of what you are seeing.

On Mon, Jan 8, 2018 at 7:31 PM, chris smith  wrote:
> Hi there,
>
> I recently did an upgrade from puppetserver 2.7.2 to puppetserver 5.0 and
> performance has bottomed out pretty terribly. Agents and puppetdb also got
> updated. Compiling the catalog on the server used to take 10-20 seconds, now
> they are taking 90-150 seconds and agent runs are taking 30+ minutes (used
> to be a couple of minutes).
>
> The architecture is distributed, with:
>  * a central ca, running puppetserver, puppetdb, postgres 9.6
>  * separate puppetservers replicated in other datacentres. These are also
> running puppetdb, pointing writes to the central ca, but reads go to a
> locally replicated database
>
> Other servers (agents) point to the replicated puppetservers to do all of
> the work.
>
> The puppetservers were tuned (upped jvm memory, set max-instances).
>
> The architecture hasn't changed since the upgrade.
>
> The puppetservers are still running hiera3 configs, they haven't been
> converted yet (it's on the list, but from what I read it wasn't a
> showstopper). We have a reasonable amount of encrypted yaml files (using
> hiera-eyaml-gpg), though this was the same as pre-upgrade and hasn't changed
> significantly.
>
> Since the upgrade, I've tried re-tuning the jvm settings, changing
> max-instances and not having much luck at all. I found the experimental
> dashboard on the puppetservers and they show that there are no free jrubies,
> but there has to be something I'm missing that's causing that to happen.
>
> I'm lost on what to look at next; is there an easy way to peek inside JRuby
> to see what's taking so long?
>
> Any suggestions would be great.
>
> Cheers,
> Chris.
>

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CACD%3DwAf5eC%2B8mixcyYjUGAm-0jeBzYnq%2BOydg3ZNkJ5YWoROmg%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] puppet5 upgrade performance issues

2018-01-09 Thread Matthaus Owens
Chris,
Hiera 3 + hiera-eyaml may also be contributing to the slowness. Here
is one ticket (related to SERVER-1922) that indicated moving to hiera
5 improved compile times substantially:
https://tickets.puppetlabs.com/browse/SERVER-1919

-Haus

On Tue, Jan 9, 2018 at 11:36 AM, Matthaus Owens  wrote:
> Chris,
> To better help you, it would be great to know a few more things about
> your installation. First question: are you running puppetserver 5.0.0
> or something later in the 5.x series (and is it the same on all
> servers)? Second, what version of the puppet-agent are on those
> servers? puppetserver 5.1.3 included a fix for
> https://tickets.puppetlabs.com/browse/SERVER-1922 which should improve
> performance some.
> To dig into what may be causing the compiles to be slower, I would
> recommend first checking out the client metrics.
> https://puppet.com/docs/puppetserver/5.1/http_client_metrics.html has
> some details, and I would be interested in the client metrics that
> page lists under the /puppet/v3/catalog. They are PuppetDB related
> requests, and as that was also upgraded alongside puppetserver it
> would be good to eliminate PuppetDB as a contributor. PuppetDB
> slowness can show up as slow catalog compiles, which in turn will hold
> jrubies for longer and might explain some of what you are seeing.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+unsubscr...@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/puppet-users/CACD%3DwAfJveNpAzekCvjeP8aZ7ma4mD4%2B1KC9JKE4mqELbktB4g%40mail.gmail.com.
For more options, visit https://groups.google.com/d/optout.


Re: [Puppet Users] puppet5 upgrade performance issues

2018-01-09 Thread Chris Smith
Hi,

Thanks for your help.

On 10/01/18 06:36, Matthaus Owens wrote:
> Chris,
> To better help you, it would be great to know a few more things about
> your installation. First question: are you running puppetserver 5.0.0
> or something later in the 5.x series (and is it the same on all
> servers)? Second, what version of the puppet-agent are on those
> servers? puppetserver 5.1.3 included a fix for
> https://tickets.puppetlabs.com/browse/SERVER-1922 which should improve
> performance some.

Hm. Interesting, thanks. I'll check out what a 5.0 -> 5.1 upgrade will do.

> 
> Hiera 3 + hiera-eyaml may also be contributing to the slowness. Here
> is one ticket (related to SERVER-1922) that indicated moving to hiera
> 5 improved compile times substantially:
> https://tickets.puppetlabs.com/browse/SERVER-1919

Also interesting, but as noted in the last comment there, a lot of the
structure was changed, so the gain might not all have been hiera3 -> hiera5.

> To dig into what may be causing the compiles to be slower, I would
> recommend first checking out the client metrics.
> https://puppet.com/docs/puppetserver/5.1/http_client_metrics.html has
> some details, and I would be interested in the client metrics that
> page lists under the /puppet/v3/catalog. They are PuppetDB related
> requests, and as that was also upgraded alongside puppetserver it
> would be good to eliminate PuppetDB as a contributor. PuppetDB
> slowness can show up as slow catalog compiles, which in turn will hold
> jrubies for longer and might explain some of what you are seeing.

puppetservers are all the same.

We upgraded to:
# /opt/puppetlabs/server/bin/puppetserver -v
puppetserver version: 5.0.0

puppetdb is on this version; it should have been 5.0 as well, but I stuffed it up.
# /opt/puppetlabs/server/bin/puppetdb -v
puppetdb version: 5.1.3


agents are all:
# /opt/puppetlabs/puppet/bin/puppet --version
5.0.0


The metrics say

{
  "route-id": "puppet-v3-file_metadata-/*/",
  "count": 9373,
  "mean": 10217,
  "aggregate": 95763941
},
{
  "route-id": "puppet-v3-catalog-/*/",
  "count": 828,
  "mean": 94773,
  "aggregate": 78472044
},
{
  "route-id": "puppet-v3-node-/*/",
  "count": 831,
  "mean": 62709,
  "aggregate": 5279
},
{
  "route-id": "puppet-v3-file_metadatas-/*/",
  "count": 4714,
  "mean": 9288,
  "aggregate": 43783632
},
{
  "route-id": "puppet-v3-report-/*/",
  "count": 780,
  "mean": 3433,
  "aggregate": 2677740
},



  "http-client-metrics": [
{
  "count": 821,
  "mean": 48,
  "aggregate": 39408,
  "metric-name":
"puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.command.replace_catalog.full-response",
  "metric-id": [
"puppetdb",
"command",
"replace_catalog"
  ]
},
{
  "count": 832,
  "mean": 25,
  "aggregate": 20800,
  "metric-name":
"puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.command.replace_facts.full-response",
  "metric-id": [
"puppetdb",
"command",
"replace_facts"
  ]
},
{
  "count": 780,
  "mean": 19,
  "aggregate": 14820,
  "metric-name":
"puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.command.store_report.full-response",
  "metric-id": [
"puppetdb",
"command",
"store_report"
  ]
},
{
  "count": 215,
  "mean": 43,
  "aggregate": 9245,
  "metric-name":
"puppetlabs.localhost.http-client.experimental.with-metric-id.puppetdb.facts.find.full-response",
  "metric-id": [
"puppetdb",
"facts",
"find"
  ]
}
  ]


So I think that's showing it's quick to pass it off to puppetdb when
it's storing changes.

puppetdb logs are telling me that 'replace catalog' is taking 2-3
seconds, and 'replace facts' is taking 10-20 seconds (previous puppetdb
wasn't logging the time taken, so I can't compare).

I tried changing puppetdb logging to debug, but it doesn't tell me what
it's doing with those 'replace' commands (I don't think so, anyway; I
might've missed it). I haven't found a way to manually process one of those
files; do you know if there is a way to do it?

I've set up postgres logging to alert for queries over 200ms (both on
the primary & replica) and I get very little (a couple of queries every
now and then), so I don't think it's the database.


Cheers,
Chris.

-- 
You received this message because you are subscribed to the Google Groups 
"Puppet Users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to puppet-users+un