Sorry, to be clear: it uses post-build targets to create the archive.
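
For illustration, here's a minimal sketch of what the packaging half of such a
post-build step might look like. The paths and archive name are hypothetical;
the actual job configuration isn't shown in this thread.

    import tarfile
    from pathlib import Path

    # Hypothetical locations; the real job's layout isn't shown here.
    STAGING_DIR = Path("target/site-staging")  # where the site source is staged
    ARCHIVE = Path("target/site.tar.gz")       # the artifact Hudson keeps

    def package_site(staging_dir: Path, archive: Path) -> None:
        # Package the staged site as a gzipped tarball so the latest
        # successful build is always available for download.
        with tarfile.open(archive, "w:gz") as tar:
            tar.add(staging_dir, arcname="site")

    if __name__ == "__main__":
        package_site(STAGING_DIR, ARCHIVE)

A consumer can then simply wget the artifact URL and unpack the tarball,
rather than crawling the whole site.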

On Mar 3, 2011, at 6:41 AM, Scott O'Bryan <darkar...@gmail.com> wrote:

> Well, mine isn't completely done yet (the tar.gz is not correct, among
> other things), but the myfaces-trinidad-site job demonstrates a working
> approach for downloading via a cron job.  It pulls the site's source
> into a staging directory and packages it up during the build.  It then
> stores the tar.gz as an artifact, so it's always available and only
> updated on a successful build.
>
> I hope to have it done and generating a valid archive soon, but feel
> free to take a look at it if you like.  I figured a gzipped archive
> was convenient enough to wget.
>
> Scott
>
> On Mar 3, 2011, at 6:34 AM, Jukka Zitting <jukka.zitt...@gmail.com> wrote:
>
>> Hi,
>>
>> On Thu, Mar 3, 2011 at 2:30 PM, Ulrich Stärk <u...@apache.org> wrote:
>>> The only thing that seems reasonable, apart from having an rsync daemon, is
>>> doing a recursive wget on the generated site every hour or so. This will,
>>> however, fetch all 4000 files, and I don't know how much load it puts on
>>> Jenkins. Is this OK?
>>
>> Doesn't sound like a very good approach.
>>
>> The Apache CMS setup running on Buildbot simply commits the generated
>> site to svn, from where it's picked up by the public web servers. I
>> guess a similar setup could also be done with Maven site builds
>> running in Hudson.
>>
>> BR,
>>
>> Jukka Zitting

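For comparison, a commit-from-build step along the lines Jukka describes might
look roughly like the sketch below. The paths are made up, it only shells out
to the standard svn command-line client, and handling removed pages (svn rm)
is left out.

    import shutil
    import subprocess
    from pathlib import Path

    # Hypothetical paths; the real CMS/job layout isn't detailed in this thread.
    SITE_DIR = Path("target/site")        # output of the Maven site build
    WORKING_COPY = Path("site-checkout")  # svn working copy of the published site

    def svn(*args: str) -> None:
        # Run an svn command, failing the step on any error.
        subprocess.run(["svn", *args], check=True)

    def publish_site() -> None:
        svn("update", str(WORKING_COPY))
        # Overlay the freshly generated site onto the working copy.
        shutil.copytree(SITE_DIR, WORKING_COPY, dirs_exist_ok=True)
        # Pick up any new files, then commit so the web servers can sync.
        svn("add", "--force", str(WORKING_COPY))
        svn("commit", "-m", "Publish generated site", str(WORKING_COPY))

    if __name__ == "__main__":
        publish_site()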