Patrick <kc7...@gmail.com> writes:

> On Sep 7, 2010, at 1:22 AM, Martijn Grendelman wrote:
>
>>>> But for this to work, the Puppet run needs to run aptitude update to
>>>> pick up the new package name. Running the update periodically isn't
>>>> enough, but running an update on every catalog run is just overkill.
>>>
>>> I understand your concern here, but have you done the timing tests?
>>>
>>> How long are your average puppet runs and how much time does an
>>> "apt-get update" take?
>>>
>>> I just wonder if you're prematurely optimizing your runs.
>>
>> It's not actually about optimizing puppet runs, it's about the load
>> imposed on the package repository servers, when hundreds of servers do
>> updates twice an hour.
>
> I use the apt-cacher-ng proxy to reduce load on the package servers and
> for speed. It means that I can download packages over a 100Mbps
> connection.
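For anyone following along, pointing apt at an apt-cacher-ng instance is usually a one-line client-side change; a minimal sketch, assuming a cache host named apt-cacher.example.com (hypothetical) listening on apt-cacher-ng's default port of 3142:

```
# /etc/apt/apt.conf.d/01proxy -- route all HTTP package fetches
# through the local apt-cacher-ng cache
Acquire::http::Proxy "http://apt-cacher.example.com:3142";
```

With this in place, every "apt-get update" and package download on the client goes through the cache, so hundreds of machines updating twice an hour hit the upstream mirrors only once per changed file.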
So do we; this is nothing but polite behaviour when you have a large pool of machines hitting the same resources time and again.

A word of warning from painful discoveries of the past: while apt can put a fair amount of load on mirrors and links if you make it update frequently, yum is a much bigger risk: it will refresh the repository metadata every time you invoke it, whenever it decides the cached lists are out of date. On one deployment we had Puppet running every hour, so yum updated its metadata every hour, generating gigabytes of traffic until we put a proxy in place between it and the outside world.

You can tune these things, but out of the box you run a huge risk if you don't funnel your updates through some sort of sensible proxy server...

Daniel

--
✣ Daniel Pittman   ✉ dan...@rimspace.net   ☎ +61 401 155 707
♽ made with 100 percent post-consumer electrons

--
You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To post to this group, send email to puppet-us...@googlegroups.com.
To unsubscribe from this group, send email to puppet-users+unsubscr...@googlegroups.com.
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en.
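The yum tuning Daniel alludes to lives in /etc/yum.conf; a sketch of the relevant knobs, assuming a caching proxy at proxy.example.com on port 3128 (both hypothetical):

```
# /etc/yum.conf, [main] section
# metadata_expire controls how long yum trusts cached repository
# metadata before re-fetching it from the mirrors
metadata_expire=6h
# funnel all repository traffic through a local caching proxy
proxy=http://proxy.example.com:3128
```

Raising metadata_expire reduces how often yum re-fetches the lists at all, and the proxy setting makes sure that when it does, the traffic stays inside your network.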