The test jobs we're running here are automated acceptance tests. Builds that pass can be promoted to the manual test environment.

For the part of the pipeline that I have outlined, we want to provide fast feedback to the developers without manual intervention and without introducing delays.

The Groovy script idea is interesting, but we'd rather tackle this with standard features if possible.

thanks
Nigel

On 24/09/12 17:17, krishna chaitanya kurnala wrote:
I'd recommend adapting/using the Build Promotion plugin in your pipeline; you can then have a manual intervention to "promote" builds, instead of complicating the pipeline.

Some ideas I can think of for the issues mentioned below:

1) Use a higher quiet period in Jenkins, or a longer polling schedule (polling interval > total average duration of a normal pipeline run), instead of triggering a build for each commit.
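
For example, both settings live in each job's config.xml (values here are illustrative, not from the thread; a sketch assuming a pipeline that normally takes ~8 minutes):

```xml
<!-- Job config.xml fragment (illustrative values) -->
<project>
  <!-- Wait 10 minutes after a commit before starting, so a running
       pipeline instance (avg ~8 min) can finish first. -->
  <quietPeriod>600</quietPeriod>
  <triggers>
    <hudson.triggers.SCMTrigger>
      <!-- Poll every 15 minutes instead of building on every commit. -->
      <spec>*/15 * * * *</spec>
    </hudson.triggers.SCMTrigger>
  </triggers>
</project>
```

The same values can be set in the UI under "Advanced Project Options" (quiet period) and "Build Triggers" (poll SCM schedule).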

2) Use a post-build Groovy script (Groovy Postbuild plugin) to fail the current build if upstream/downstream jobs are in progress.
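A minimal sketch of such a script, assuming the Groovy Postbuild plugin (the exact model API depends on your Jenkins version; treat the calls below as an outline to verify, not a tested implementation):

```groovy
// Groovy Postbuild sketch (untested): fail this build if any upstream
// or downstream project of the current job is still building or queued.
def job = manager.build.project
def related = job.upstreamProjects + job.downstreamProjects
if (related.any { it.isBuilding() || it.isInQueue() }) {
    manager.buildFailure()
    manager.addShortText("Blocked: related job in progress")
}
```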

Good Luck,

Krishna Chaitanya


On Sun, Sep 23, 2012 at 9:46 PM, Nigel Charman <nigel.charman...@gmail.com> wrote:

    We are having problems with multiple pipeline instances running at
    the same time, resulting in jobs running out of order.

    Our development pipeline looks like this:

    backend-build -> backend-deploy
                  -> common-build -> appA-build -> appA-deploy -> appA-test
                                  -> appB-build -> appB-deploy -> appB-test

    where downstream jobs are triggered by the Parameterized Trigger
    plugin.

    We only have one environment to deploy and test in, so have locks
    on the *-deploy and *-test jobs.

    All of the *-build and *-test jobs can be triggered by SVN
    commits, using a SVN post-commit hook.
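
    (For reference, such a hook typically notifies the Jenkins Subversion
    plugin, which then triggers the polling jobs. A sketch, with a
    hypothetical Jenkins host; verify the URL against your plugin version:

    #!/bin/sh
    # SVN post-commit hook sketch: tell Jenkins which paths changed
    # so jobs polling a matching repository can trigger.
    REPOS="$1"
    REV="$2"
    JENKINS_URL="http://jenkins.example.com:8080"
    UUID=`svnlook uuid "$REPOS"`
    svnlook changed --revision "$REV" "$REPOS" | \
      curl -s -o /dev/null --data-binary @- \
           --header "Content-Type: text/plain" \
           "$JENKINS_URL/subversion/$UUID/notifyCommit?rev=$REV"
    )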

    We applied the following settings to all jobs:

     * "Block build when upstream project is building" so that, if
    triggered, the job would not start a new pipeline until builds
    initiated by upstream jobs had finished.
     * "Block build when downstream project is building" so that, if
    triggered, the job would not start a new pipeline until builds of
    downstream jobs had finished.

    Two specific issues we have found are:

    1.  "Block build when downstream project is building" allows a queued
    build to start ahead of downstream jobs already in the queue.

    A commit that triggered appA-build, followed by a commit that
    triggered backend-build, resulted in this build order:

      /appA-build/
      /appA-deploy/
      backend-build
      common-build
      backend-deploy
      appB-build
      appB-deploy
      /appA-test/
      appA-build
      appA-deploy
      appA-test
      appB-test

    where the builds in italics are triggered by the /appA-build/ SVN
    trigger. This caused errors to occur.

    In this case it seemed that "Block builds when downstream project
    is building" was not respected.


    2.  "Block build when upstream project is building" allows a queued
    build to be triggered ahead of upstream jobs already in the queue.

    In this case, backend-build was manually triggered, followed by a
    commit that triggered appA-build:

      backend-build
      backend-deploy
      /appA-build/
      common-build
      ...

    where the builds in italics are triggered by the /appA-build/ SVN
    trigger.

    In this case it seemed that "Block builds when upstream project is
    building" was not respected.

    Our assumption had been that these blocks would also respect jobs
    in the queue.

    Both of these sequences resulted in spurious failing builds,
    causing our developers to lose faith in the CI server.

    Ideally, we'd want each instance of the pipeline to run to
    completion before pipelines for other SCM changes are started, for
    instance by prioritising builds triggered by other builds ahead of
    builds started by other triggers.

    What options do we have for creating a robust pipeline that has
    triggers on multiple jobs?

    regards
    Nigel
