Hi,
This is my point of view only, but using a single script (that you put into your
repos) makes it easier to perform the build; I put my pipeline script into a
separate folder. You do need to make sure your script is verbose enough to
show where it failed if anything goes wrong: a long script that runs silently,
without output, will be hard to debug when it has an issue.
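
If you do go the single-script route, one mitigation is to make the shell trace
itself so the log shows exactly where it stopped. A minimal sketch (the build
commands are just placeholders), to be placed inside any stage/node:

    sh '''
        set -eux          # echo each command and abort on the first failure
        ./configure
        make
        make test
    '''

Jenkins already runs sh scripts with -xe by default when no shebang is given,
but spelling it out keeps the behaviour explicit.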

Using multiple sh steps makes it easier to see where the build fails, since
Jenkins displays every sh invocation separately.
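
For example (stage name and commands are made up), each step gets its own entry
in the log and the failing one is flagged directly:

    stage('Build') {
        steps {
            sh './configure'   // each sh call appears separately in the log
            sh 'make'
            sh 'make test'     // a failure here is pinpointed by Jenkins itself
        }
    }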

You can also put functions that run sh commands into a Groovy file, then load
that file and call its functions. This gives you more flexibility from Jenkins
(it decouples Jenkins from the task to be done), and you can still invoke
multiple sh commands from that Groovy script. So your repos can contain a
Groovy entry point that the pipeline loads and invokes, and that script can
call sh, shell scripts and/or other Groovy scripts as it pleases.

pipeline script --> repos groovy script --> calls (sh, groovy, shell scripts...)
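
A minimal sketch of that layering with the load step (the file name
ci/entrypoint.groovy is just an example):

    // Jenkinsfile: only knows the repos provides an entry point
    node {
        checkout scm
        def build = load 'ci/entrypoint.groovy'
        build.run()
    }

    // ci/entrypoint.groovy: owns the actual build logic
    def run() {
        sh './configure'
        sh 'make'
    }
    return this   // required so the caller gets an object to invoke run() on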

That avoids a high-maintenance Jenkins pipeline: the repos are more self-aware
of their own needs and can change more easily between versions.

I, for one, use 3+ repos:
1- The source code repos.
2- The pipeline and build script repos (this can evolve apart from the source,
so my build method can change and still be applied to older source versions; I
use a branch/tag when backward compatibility is broken or a specific version is
needed for a particular source branch — see the sketch after this list).
3- My common Groovy and script tooling shared between my repos.
4- (optional) My unit tests, which live apart and can be run against multiple
versions.
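
To tie repos 1 and 2 together, the pipeline can check the build-script repos
out next to the source and pin it to a branch/tag (the URL and directory name
here are made up):

    node {
        checkout scm                       // the source repos
        dir('build-scripts') {
            git url: 'https://example.com/my/build-scripts.git',
                branch: 'v2'               // pinned for backward compatibility
        }
        def build = load 'build-scripts/entrypoint.groovy'
        build.run()
    }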

This works well. I wish the shared library were more flexible and that I could
do file manipulation in Groovy more easily, but I have managed platform-agnostic
functions for most file/folder/path operations that I reuse between projects.
This keeps my pipeline scripts free of thousands of if (isUnix()) checks and
the like; my pipelines look the same on macOS, Windows and Linux.
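
The core of it is a wrapper along these lines (runCmd is my own name, not a
built-in step), placed in a shared library's vars/ folder or a loaded Groovy
file:

    // vars/runCmd.groovy
    def call(String command) {
        if (isUnix()) {
            sh command      // Linux/macOS agents
        } else {
            bat command     // Windows agents, same call site
        }
    }

The pipeline then just says runCmd('make') everywhere instead of branching per
OS.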

Hope this helps you decide on or plan your build architecture.

Jerome

-----Original Message-----
From: jenkinsci-users@googlegroups.com <jenkinsci-users@googlegroups.com> On 
Behalf Of Sébastien Hinderer
Sent: August 11, 2020 10:33 AM
To: jenkinsci-users@googlegroups.com
Subject: Pipeline design question

Dear all,

When a pipeline needs to run a sequence of several shell commands, I see 
several ways of doing that.

1. Several "sh" invocations.

2. One "sh" invocation that contains all the commands.

3. Having each "sh" invocation in its own step.

4. Putting all the commands in a script and invoking that script through the sh 
step.

Would someone be able to explain the pros and cons of these different 
approaches and to advise when to use which? Or is there perhaps a reference I 
should read?

Thanks,

Sébastien.
