On Mon, Jun 4, 2012 at 10:27 AM, Bram de Jong
<bram.dej...@samplesumo.com> wrote:
>>
>> I'd think in terms of jobs that build components and applications, not
>> so much in relation to repositories.
>
> The problem is that this would mean 100 different jobs that all do the
> same thing (i.e. update 5 repositories - one of which is SUPER slow).
> Each job would need approx. 10 GB of disk space just to hold the
> repositories.

I don't understand 'update repositories' in the context of a build.
Normally your repositories are full of branches/tags and maybe even
other unrelated projects, and a build just checks out exactly the
version it needs (generally the latest trunk rev for CI work).  Aside
from that, once you get started Jenkins will keep your last workspace
and (optionally) do an update that pulls in only the changes for the
next revision, and it will send the jobs back to the same slave as
long as it is available.  If you need 100 x 10 GB to do the work, that
doesn't seem like a big problem these days, especially since you can
easily spread it over several slaves.

> I.e. the overhead of having each of the 5 repos reproduced for each of
> the 100+ jobs would be immense.

Again, I don't understand "reproducing" a repo.  Why does the build
server ever care about anything except a checked-out workspace of
exactly the project/revision it is building?  Aren't you doing
checkouts over a network already?
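
As a concrete (and hypothetical) sketch, a single job's checkout only
needs the one project tree, and later runs just pull the delta:

    # first run: check out only the project/revision being built
    svn checkout http://svn.example.com/repos/project-a/trunk workspace
    # later runs in the same workspace: only the changes come over
    svn update workspace

The Subversion and Git plugins do the equivalent of this for you based
on the job's SCM configuration.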

>> Where the code is in subversion, you might use svn externals to pull
>> the components in instead of anything special in jenkins.
>
> But the code is spread across Git and Subversion, not just Subversion.
> It's a bit of a mixture.

If you typically pull all the source together for a single compile and
you want to trigger a new build whenever any code is touched, you may
need separate jobs polling each repo for changes and then triggering
the main build.  To get started, though, I'd just do scheduled nightly
builds and start them manually in the web interface if you know
something has changed and want an earlier run.
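
As a rough sketch (the field names are from the freestyle job UI and
may differ a bit between Jenkins versions), the triggers would look
something like this:

    repo-watcher jobs:  Build Triggers -> Poll SCM
                        schedule: */15 * * * *
    nightly app build:  Build Triggers -> Build periodically
                        schedule: 0 2 * * *

Each watcher job can then use the "Build other projects" post-build
action to kick off the application build when its repo changes.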

If you build binary component objects separately and then pull them
together in the final applications, you can have Jenkins archive the
build artifacts from one job and use the Copy Artifact plugin to
install them where the downstream application job needs them.
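
A rough sketch of that wiring, with made-up job and file names:

    job libfoo-build:
      post-build action: Archive the artifacts
        files to archive: dist/libfoo.lib, include/**
    job app-mixer-build:
      build step: Copy artifacts from another project
        project name:      libfoo-build
        which build:       Latest successful build
        artifacts to copy: dist/libfoo.lib, include/**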

Normally you will need to manage versioning of each component too, so
it is hard to generalize about the best way to do it.  With
Subversion, you can point your svn externals at specific tags to
control the component revisions that are included, but you'll want
similar control for the other sources so a library can be changed for
one application's needs while others that use it still have access to
an older rev.
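
For example (the repository URL here is made up), pinning an external
to a tag looks like:

    # pin the libfoo external to the 1.2 tag for this application
    svn propset svn:externals \
        "libfoo http://svn.example.com/components/libfoo/tags/1.2" .
    svn commit -m "Pin libfoo at 1.2"

The application keeps building against 1.2 until someone deliberately
moves the external to a newer tag.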

> Also, there is no way I can reorganize things:
> reorganizing the structure would mean fixing the build
> scripts/projects for at least 100 applications on 5 different
> platforms (if you count various versions of Visual Studio as
> platforms)!

If they are already broken (in the sense of only accidentally working
because the needed components happen to be in the right place
already), you should probably fix them so they work predictably when
automated, at least to the point where either Jenkins or the build
scripts retrieve everything that is needed, one way or another.  One
approach would be to add a 'top-level' application build script that
assembles everything you need and runs the commands for the build, so
you don't have to change the existing scripts.  Or mix and match this
with shell/batch/groovy scripts embedded in the Jenkins job.
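
Something along these lines, with made-up repository URLs and build
commands, just to show the shape of it:

    #!/bin/sh
    # hypothetical top-level script: fetch the pieces, then run the
    # existing per-project builds untouched
    set -e
    svn checkout http://svn.example.com/libcommon/trunk libcommon
    git clone git://git.example.com/libaudio.git libaudio
    svn checkout http://svn.example.com/apps/mixer/trunk mixer
    (cd mixer && make)   # or msbuild mixer.sln on the Windows slaves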

If the builds already work such that someone with a suitable toolset
installed can check out the source and run a command to build, then a
Jenkins job can do that for you.  Jenkins can supply more logic than
that, but normally you want the build scripts (at the top level and
for each component that is built separately) to do the work, and to be
kept in source version control so that the same thing happens in a
developer's local test run and in the CI build.
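
That way the 'Execute shell' (or 'Execute Windows batch command') step
in the Jenkins job can stay trivial - e.g. just running the committed
script (build.sh is a placeholder name):

    ./build.sh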

> What I described in my email is exactly what we do:
> 1. make a super folder
> 2. check out the 5 repos in there
> 3. all the apps in those 5 repos now build

Is all of this somehow connected inseparably?   Normally, builds would
be oriented towards versions of specific components and then
applications containing them, and the versions of different parts
would be allowed to advance at different rates.

> We don't create a new super folder for each application.

Super folders and whole repositories don't relate very well to
specific build jobs.

-- 
   Les Mikesell
     lesmikes...@gmail.com
