I found that the standard method is quite slow compared to scp, so I use it 
only to copy a few small files, plus one containing GUIDs for 
fingerprinting; for the big ones I do something like

scp -v ${WORKSPACE}/bigfile.tar.gz \
    user@jenkins_host_name:path_to_jenkins_root/jobs/${JOB_NAME}/builds/${BUILD_ID}/archive/ \
    2>&1 | tail -n 5

I think there's a ${JENKINS_HOME} variable or something similar for the path 
on the master. That copies a 2-3 GB file in roughly 40 seconds instead of 
something like 4 minutes. There was a fix put in recently (for some Maven 
plugin, I think) where, when copying files to the master, the master would 
poll the slave for the next packet with far too many requests; fixing that 
sped things up a ton. Perhaps there's another fix coming for how other 
files are transferred.
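For what it's worth, the destination path can be assembled from the build's 
own environment before the copy. A rough sketch (the fallback values and the 
host name are made up, and the scp line is commented out since it needs SSH 
access to the master):

```shell
#!/bin/sh
# JOB_NAME and BUILD_ID are set by Jenkins inside a build; the fallbacks
# here are only so the sketch runs standalone.
JOB_NAME=${JOB_NAME:-myjob}
BUILD_ID=${BUILD_ID:-42}

# Path of this build's artifact directory, relative to the Jenkins home
# directory on the master. (${JENKINS_HOME} holds that path on the master
# itself; from a slave it has to be spelled out or passed in.)
ARCHIVE_DIR="jobs/${JOB_NAME}/builds/${BUILD_ID}/archive"
echo "$ARCHIVE_DIR"

# The actual transfer (needs SSH access to the master):
# scp -v "${WORKSPACE}/bigfile.tar.gz" \
#     "user@jenkins_host_name:path_to_jenkins_root/${ARCHIVE_DIR}/" 2>&1 | tail -n 5
```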

Since "big" can sometimes be > 8 GB, it would choke the normal archiver, 
which uses tar under the covers, or at least it used to. In any case this is 
much faster, since pigz is multicore-aware:

tar cf ${WORKSPACE}/bigfile.tar.gz --use-compress-program=pigz [files to pack]
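The downstream side can unpack with the same compressor. A minimal 
round-trip sketch (directory names are placeholders for real job 
directories, and the gzip fallback is only there for machines without pigz; 
both emit ordinary .gz streams, so an archive packed with one unpacks fine 
with the other):

```shell
#!/bin/sh
# Placeholders standing in for real upstream/downstream job directories.
SRC_DIR=${SRC_DIR:-$(mktemp -d)}
DEST_DIR=${DEST_DIR:-$(mktemp -d)}
echo "some payload" > "$SRC_DIR/example.txt"   # stand-in for real build output

# Use pigz if present, otherwise plain gzip.
COMPRESS=$(command -v pigz || command -v gzip)

# Pack (upstream job):
tar -cf "$DEST_DIR/bigfile.tar.gz" --use-compress-program="$COMPRESS" -C "$SRC_DIR" .

# Unpack (downstream job):
mkdir -p "$DEST_DIR/unpacked"
tar -xf "$DEST_DIR/bigfile.tar.gz" --use-compress-program="$COMPRESS" -C "$DEST_DIR/unpacked"
```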

YMMV

--- Matt

On Monday, April 27, 2015 at 1:27:43 AM UTC-7, matthew...@diamond.ac.uk 
wrote:
>
> Are you using "Archive Artifacts" in the upstream job, and the "Copy 
> Artifact" plugin in the downstream job? This is the standard method. 
> If so, maybe the upstream job should produce a single zip file, which the 
> downstream job can get and unzip. 
> Matthew 
>
> > -----Original Message----- 
> > From: jenkins...@googlegroups.com [mailto:jenkins...@googlegroups.com] On Behalf Of Simon Richter 
> > Sent: 25 April 2015 01:03 
> > To: jenkins...@googlegroups.com 
> > Subject: Efficiently copying artifacts 
> > 
> > Hi, 
> > 
> > I have a project that outputs a few large files (compiled DLL and static 
> > library) as well as a few hundred header files as artifacts for use by 
> > the next project in the dependency chain. Copying these in and out of 
> > workspaces takes quite a long time, and the network link is not even 
> > near capacity, so presumably handling of multiple small files is not 
> > really efficient. 
> > 
> > Can this be optimized somehow, e.g. by packing and unpacking the files 
> > for transfer? Manual inspection of artifacts is secondary, I think. 
> > 
> >    Simon 
> > 
>
