For files, it's puppet:// or file:// only at this stage. (Though packages
can be fetched over HTTP, so long as the package manager understands an
HTTP file source...)
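
For example, on an RPM-based box something like this works, because rpm
itself can fetch the package over HTTP (a sketch only - the server URL
and package name here are made up):

 # rpm understands HTTP sources directly, so Puppet just hands the
 # URL through to it. Hypothetical package and server below.
 package { "foo":
    ensure   => installed,
    provider => rpm,
    source   => "http://fileserver.example.com/pkgs/foo-1.0.rpm",
 }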

I started a discussion on HTTP as a source for files as a means to
overcome this. (Linky:
http://groups.google.com.au/group/puppet-users/browse_thread/thread/b0d3004ad9daf40c/cf95773e76622eb5
)

It's apparently something that's in the works, and may well go some way
towards this under 0.25 when it's available... I haven't really looked
at the 0.25 betas yet, so I'm not sure how it's changed in this
regard...
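
If it lands the way that thread discusses, I'd expect something roughly
like this to become possible (hypothetical syntax and URL - nothing is
confirmed until 0.25 actually ships):

 # Hypothetical: an HTTP URL as a file source (not supported today).
 file { "/opt/foo.pkg":
    source => "http://fileserver.example.com/files/foo.pkg",
 }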

For now, I'm making do with an NFS mount under autofs (shudder - I wish
I didn't have to do this), and I'll migrate all the large-file serving
over to HTTP once the new release is available and tested... Not ideal,
but it means I can do:

 file { "/nfs/mount/foo.pkg": ensure => present }
 exec { "/nfs/mount/foo.sh":
    refreshonly => true,
    subscribe => File["/nfs/mount/foo.pkg"]
 }

Of course, this means that foo.sh gets called each time foo.pkg changes
(i.e. on upgrades...)
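
If you didn't want the script re-run on every change, the usual trick is
to guard the exec with a marker file instead (a sketch - the marker path
here is invented):

 # Variant: run the install script once only, guarded by a marker file.
 exec { "install foo":
    command => "/nfs/mount/foo.sh && touch /var/tmp/foo.installed",
    creates => "/var/tmp/foo.installed",
 }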

Greg

On Jul 20, 10:32 am, "Fernando Padilla" <f...@alum.mit.edu> wrote:
> Thank you. I suppose that's an easy way around it... I wonder if I want
> the puppetmaster to also host a simple Apache...
>
> Or... does the "source" attribute support http/ftp as well as just the
> file/puppet protocols?
>
> On Sat, 18 Jul 2009 13:13 +0200, "Sylvain Avril" <avr...@gmail.com>
> wrote:
>
> > I don't use puppet to pull big files myself.
> > You may be using puppet with the default WEBrick HTTP frontend. You
> > could try another frontend like Mongrel or Passenger:
> > http://reductivelabs.com/trac/puppet/wiki/UsingMongrel
> > http://reductivelabs.com/trac/puppet/wiki/UsingPassenger
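>
> > (A minimal sketch of the relevant master setting, assuming the 0.24.x
> > config layout - see the wiki pages above for the full setup:)
>
> > # /etc/puppet/puppet.conf on the master
> > [puppetmasterd]
> > servertype = mongrel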
>
> > For my own use, I use an HTTP server and a custom curl definition. But
> > for slow connections, it didn't solve the timeout problem.
>
> > define common::archive::tar-gz($source, $target) {
> >   exec { "${name} unpack":
> >     command => "curl ${source} | tar -xzf - -C ${target} && touch ${name}",
> >     creates => $name,
> >   }
> > }
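>
> > (For illustration, it would be called like this - the mirror URL is
> > just an example; the title doubles as the marker file touched after
> > unpacking:)
>
> > common::archive::tar-gz { "/opt/hadoop-0.20.0.unpacked":
> >   source => "http://mirror.example.com/hadoop/hadoop-0.20.0.tar.gz",
> >   target => "/opt",
> > }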
>
> > But the more elegant solution would be to package Hadoop.
>
> > 2009/7/18 Fernando Padilla <f...@alum.mit.edu>
>
> > > Hi.  I'm a beginner, but I have a basic puppet setup working.  I'm
> > > doing a manual tarball installation, and it seems to hang and then
> > > eventually time out just downloading the file:
>
> > >     file { "/opt/hadoop-0.20.0.tar.gz":
> > >        source => "puppet:///hadoop020/hadoop-0.20.0.tar.gz"
> > >     }
>
> > > I have another module that does the same thing and works; my only
> > > guess is that it's the size of the tarball:
>
> > > modules/hadoop020/files/hadoop-0.20.0.tar.gz - 41M
> > > modules/zookeeper320/files/zookeeper-3.2.0.tar.gz - 12M
>
> > > Any ideas or suggestions for speeding up file transfers?
>
> > > If I manually scp the file, it takes only 30 seconds (between the
> > > office and EC2), so why would it take so long and eventually time
> > > out inside the colo (EC2)?