I generally run it via cron with a heavy splay and haven't had any
issues so far.

Of course, Puppet schedules are out the window, but I do my scheduling via cron anyway...
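
A rough sketch of what a cron-driven puppetd run with a splay can look
like, expressed as a Puppet cron resource (the paths, the 30-minute
interval and the 10-minute random sleep are illustrative assumptions,
not necessarily the exact setup described above):

  # Run puppetd once every half hour, sleeping a random 0-600 seconds
  # first so clients don't all hit the master at the same moment.
  cron { "puppetd":
    ensure  => present,
    user    => "root",
    minute  => [ 5, 35 ],
    command => "perl -e 'sleep int(rand(600))'; /usr/sbin/puppetd --onetime --no-daemonize",
  }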

Trevor

On Tue, Jul 21, 2009 at 09:12, Matt <mattmora...@gmail.com> wrote:
>
> I see the same issue when changing the runinterval option.  I
> basically wanted the polling time to change after a successful poll
> (after ca clean).  I see the "Reparsing puppet.conf" message in the
> log file, but the daemon continues to poll at the original interval.
>
> This is on 0.24.8 on CentOS 5.1; I'm in the process of moving to cron
> and running with --no-daemonize to get around the puppet.conf issue.
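
For illustration, the kind of change being described is a puppet.conf
along these lines landing on the client (the [puppetd] section name
matches the 0.24.x layout; the server name and the one-hour interval
are made-up values):

  [main]
      vardir = /var/lib/puppet

  [puppetd]
      server      = puppet.example.com
      runinterval = 3600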
>
>
> 2009/7/15 Teyo Tyree <t...@reductivelabs.com>:
>>
>>
>> On Sat, Jul 11, 2009 at 1:18 AM, Greg <greg.b...@gmail.com> wrote:
>>>
>>> Paul,
>>>
>>> I've seen similar behaviour, but for me it shows up with the list of
>>> classes. I have a staging server for testing the rollout of new puppet
>>> configs. Upon getting the new config, puppet seems to keep using the
>>> same server until it is restarted. I don't have a solution yet, but
>>> here's what I know to add to the conversation.
>>>
>>> I tried using:
>>>
>>>  service { "puppetd":
>>>    ensure => running,
>>>    subscribe => File["/etc/puppet/puppet.conf"]
>>>  }
>>>
>>> And that worked... for a while. This has two interesting side effects
>>> for me (on Solaris, at least):
>>>
>>> 1. It would stop things mid-run. As soon as puppet.conf was updated,
>>> puppetd would restart. Mostly that is OK, but if you have schedules,
>>> they sometimes get triggered without actually doing any work because
>>> Puppet is shutting down. I suspect this is because it checks an item,
>>> then receives the shutdown signal and doesn't get to finish the job
>>> it's doing (see the schedule sketch after this list).
>>>
>>> 2. *Sometimes* puppet would not shut down correctly: it would get the
>>> signal, start to shut down, and then hang. If I ever figure out why or
>>> how it does this I will submit a bug report. This only happens for us
>>> occasionally, and usually SMF kicks in and puts it into the maintenance
>>> state, at which point it kills it with a -9 and then waits for someone
>>> to svcadm clear it.
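
For readers who haven't used them, this is roughly what the schedules
mentioned in point 1 look like (the nightly window and the maintenance
script are made-up examples); a restart mid-run can let the window pass
without the scheduled resource ever doing its work:

  # Illustrative only: restrict an exec to a nightly 02:00-04:00 window.
  schedule { "nightly":
    period => daily,
    range  => "2 - 4",
  }

  # Hypothetical maintenance script, shown only to demonstrate the
  # schedule metaparameter.
  exec { "/usr/local/bin/nightly-maintenance.sh":
    schedule => "nightly",
  }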
>>>
>>> For us, this started happening long after we upgraded from 0.24.7 to
>>> 0.24.8... We also run our staging server on a different port to the
>>> production Puppet server to make sure that it doesn't accidentally get
>>> used.
>>>
>>> The only thing I can think of is that maybe the server name gets
>>> cached somewhere other than the config - and maybe it isn't being
>>> cleaned out when the config is re-read... I can understand a server
>>> connection being cached for the duration of a run, but once it's
>>> finished it should in theory be cleared out...
>>>
>>> Greg
>>>
>>> On Jul 11, 9:31 am, Paul Lathrop <p...@tertiusfamily.net> wrote:
>>> > Dear Puppeteers,
>>> >
>>> > I'm in desperate need of help. Here's the story:
>>> >
>>> > When I boot up new machines, they have a default puppet.conf which
>>> > causes them to talk to our production puppetmaster at
>>> > puppet.digg.internal. Some of these machines are destined for our
>>> > development environment, and there is a custom fact, 'digg_environment',
>>> > that the default config uses to pass out an updated puppet.conf file.
>>> > For these development machines, this file points server= to
>>> > puppet.dev.digg.internal, which has a node block for the machine
>>> > containing its full configuration.
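
A rough sketch of the pattern Paul describes, for illustration (the
module name, the file-server paths, and the "development" value of the
digg_environment fact are assumptions):

  # Hand dev boxes a puppet.conf whose server= points at the dev master,
  # keyed off the custom digg_environment fact.
  file { "/etc/puppet/puppet.conf":
    owner  => "root",
    group  => "root",
    mode   => 644,
    source => $digg_environment ? {
      "development" => "puppet:///modules/bootstrap/puppet.conf.dev",
      default       => "puppet:///modules/bootstrap/puppet.conf.prod",
    },
  }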
>>> >
>>> > This all seemed to work great until recently, and I'm not sure what
>>> > changed.
>>> >
>>> > Now, what happens is that the machine boots with the default
>>> > puppet.conf. It talks to the production puppetmaster, and downloads
>>> > the correct puppet.conf which points server= to
>>> > puppet.dev.digg.internal. In the logs, I see the "Reparsing
>>> > /etc/puppet/puppet.conf" message. The report ends up getting sent to
>>> > the development puppetmaster (puppet.dev.digg.internal). However, on
>>> > subsequent runs, puppetd continues to talk to the production
>>> > puppetmaster instead of getting its config from the development
>>> > puppetmaster! After a manual restart of the daemon, it works as
>>> > expected. However, manual steps are a big bummer!
>>> >
>>> > The only change I can think of here is that we switched to Debian
>>> > Lenny. Puppet version is 0.24.8. Any help would be appreciated!
>>> >
>>> > Thanks,
>>> > Paul
>>>
>> The bad news:
>> We need to track down exactly why the server parameter is getting cached.
>> Additionally, puppet should not restart in the middle of a transaction
>> (there is a ticket for 0.25 to make this behavior optional, but currently
>> it should restart post-transaction). Both of these are bugs and should be
>> reported as such.
>> The good news:
>> Paul, one workaround for your issue is to do something completely different
>> at provisioning time.  What I do is use a very simple init script to
>> bootstrap puppetd.  Instead of using puppetd to bootstrap itself, just use
>> the puppet executable and a simple bootstrap module in your init script.
>> The bootstrap manifest should use the service resource type to
>> start/restart puppetd and to disable the bootstrap init script, and a file
>> resource to manage puppet.conf.  This approach won't address changes to
>> puppet.conf after provisioning, but it should address your specific issue
>> at provisioning time.
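
A very rough sketch of the bootstrap manifest Teyo describes (the file
locations, the "puppet-bootstrap" init-script name, and the module layout
are all assumptions); the init script would apply it once at provisioning
time with something like "puppet /etc/puppet/bootstrap/bootstrap.pp":

  # Drop the real puppet.conf in place before the daemon starts.
  file { "/etc/puppet/puppet.conf":
    owner  => "root",
    group  => "root",
    mode   => 644,
    source => "/etc/puppet/bootstrap/puppet.conf",
  }

  # Start puppetd against the config above and enable it at boot.
  service { "puppetd":
    ensure  => running,
    enable  => true,
    require => File["/etc/puppet/puppet.conf"],
  }

  # Turn off the one-shot bootstrap init script once puppetd is running.
  service { "puppet-bootstrap":
    enable  => false,
    require => Service["puppetd"],
  }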
>> -teyo
>>
>> --
>> Teyo Tyree :: www.reductivelabs.com :: +1.615.275.5066
>>
