Re: How can we customise jenkins email content?

2020-04-15 Thread Matt Sicker
Oh, you're using CloudBees Core. The plugin might not be installed in
your team master. I was referring to builds.a.o

On Wed, 15 Apr 2020 at 01:44, Mick Semb Wever  wrote:
>
> > Have you tried adding a config file through
> > https://builds.apache.org/job/JOB_NAME/configfiles/ ?
>
>
> Hi Matt, that must be from a plugin, as I can't find any such URL under
> the jobs on ci-cassandra.apache.org.
>
>
> Otherwise there is value in having it scripted, though I did not wish
> to delve into XSL ever again.
>
> With the help I got here I've managed to mangle together the following
> text report format:
> https://lists.apache.org/thread.html/r80d13f7af706bf8dfbf2387fab46004c1fbd3917b7bc339c49e69aa8%40%3Cbuilds.cassandra.apache.org%3E



-- 
Matt Sicker 


Re: How can we customise jenkins email content?

2020-04-15 Thread Matt Sicker
Also, by using that, your PMC has admin access to that Jenkins
instance, so you can use the global email-templates directory
described in your original email. Otherwise, you can also use this
plugin I was referring to earlier:
https://plugins.jenkins.io/config-file-provider/
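For illustration, this is roughly what the global-template route looks like
with the email-ext plugin (the template file name here is just an example):

    # On the Jenkins master: email-ext picks up templates from
    # $JENKINS_HOME/email-templates (create it if it doesn't exist)
    mkdir -p "$JENKINS_HOME/email-templates"
    cp my-report.template "$JENKINS_HOME/email-templates/"

    # Then reference it from the job's email-ext content field with:
    #   ${SCRIPT, template="my-report.template"}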

On Wed, 15 Apr 2020 at 09:43, Matt Sicker  wrote:
>
> Oh, you're using CloudBees Core. The plugin might not be installed in
> your team master. I was referring to builds.a.o
>
> On Wed, 15 Apr 2020 at 01:44, Mick Semb Wever  wrote:
> >
> > > Have you tried adding a config file through
> > > https://builds.apache.org/job/JOB_NAME/configfiles/ ?
> >
> >
> > Hi Matt, that must be from a plugin, as I can't find any such URL under
> > the jobs on ci-cassandra.apache.org.
> >
> >
> > Otherwise there is value in having it scripted, though I did not wish
> > to delve into XSL ever again.
> >
> > With the help I got here I've managed to mangle together the following
> > text report format:
> > https://lists.apache.org/thread.html/r80d13f7af706bf8dfbf2387fab46004c1fbd3917b7bc339c49e69aa8%40%3Cbuilds.cassandra.apache.org%3E
>
>
>
> --
> Matt Sicker 



-- 
Matt Sicker 


Re: Jenkins

2020-04-15 Thread Alex Harui
Hmm.  I didn't think there were "sides".

The fact that Apache Royale has 100 steps has nothing to do with using a CI 
server to create release artifacts.  It is just because Royale not only 
distributes Maven artifacts from 3 repos, but also creates and distributes two 
flavors of legacy artifacts via dist.a.o and NPM.  Our RMs have 100 steps even 
if they use a local machine.  The Maven artifacts are created in the first 
20-ish commands.

I certainly hope I didn't claim that Royale's process is perfect and a 
reference model.  I've been trying to say that it now seems practical to use a 
CI server in the release process because there are reproducible binary plugins 
for Maven.  Royale is in fact doing that, but because reproducible binaries are 
relatively new, I would not say that what we have is a reference model.  We 
just have something that works.

To provide a bit more detail on what we've got working: where an RM on a local 
machine typically runs release:branch, release:prepare, and release:perform, 
on a CI server there needs to be more work between each step.  So you can't 
have a single CI job that runs all three steps; you have to have a CI job for 
each step, do some work between steps, and manually launch the next step.

That's because the CI server has to run release:branch with pushChanges=false, 
because there shouldn't be any committer credentials on the CI server.  The RM 
has to log in and push the changes by supplying username and password.  Then 
the CI server can run release:prepare, also with pushChanges=false.  And again, 
the RM has to log in and push the changes and the tag.
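As a rough sketch of that back-and-forth (branch and remote names are just
examples):

    # CI job 1: cut the release branch, but don't push
    mvn release:branch -DbranchName=release/1.0 -DpushChanges=false

    # RM, locally, with their own credentials:
    git push origin release/1.0

    # CI job 2: prepare the release (tag + version bump), again without pushing
    mvn release:prepare -DpushChanges=false

    # RM again: push the commits and the tag
    git push origin release/1.0 --tags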

Then when the CI server runs release:perform, the flag skipGPG=true has to be 
set.  And we also add an altDeploymentDirectory.  That's because 
release:perform not only generates the binaries, but also signs them and 
uploads them to the staging server.  The CI server has to skip signing since 
the keys can't be stored on the server.  So what our CI job does is redirect 
the binary artifacts to an altDeploymentDirectory and use Jenkins to archive 
the altDeploymentDirectory as build artifacts.  Then the RM downloads the 
artifacts to a local machine, gets the source package and builds it, then 
compares the jars that were built with the ones in the altDeploymentDirectory.  
If they match, the RM can sign the artifacts and then use Maven's Wagon to 
upload them to the staging server.
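The RM-side verification is then something like this (file names
hypothetical):

    # Compare the CI-built jar against the locally built one, bit for bit
    sha256sum ci-artifacts/example-1.0.jar local-build/example-1.0.jar

    # If the checksums match, sign on the local machine where the keys live
    gpg --armor --detach-sign ci-artifacts/example-1.0.jar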

Ideas for improvement are welcome.  This is just the first thing that came to 
mind that worked, seemed secure, and conformed to Apache's rules/policies.  
As you can see, the RM still has to build the source code locally (all voters 
need to be able to build the source anyway), and the RM can theoretically save 
some time by skipping some/all tests, because they were run on the CI server 
and the binaries are bit-for-bit identical.

HTH,
-Alex

On 4/14/20, 11:59 PM, "Christofer Dutz"  wrote:

Hi Gabriel,

I just have to jump in here ... as I saw you were only getting feedback 
from one side.

I did see that CloudStack generally seems to be built with Maven, so I can 
give some input.

There is absolutely nothing wrong with building dev snapshots on a CI 
server ... that's one of the things they were built for.

However, when talking about releases, the situation is different, because you 
need a real person's credentials for:

- Pushing to Git repos
- Signing artifacts
- Deploying artifact to any form of remote repos

So these are the parts you can't automate, and an RM will have to jump in 
manually.

So what's typically a 3-command release process on an ordinary Maven 
project:

- branch
- prepare
- release

has to be split up into a sequence of steps, where some can be done 
"automatically" (by clicking a button) and some have to be done manually.

I would also strongly suggest not using the Apache Royale release process 
as a reference. It was intended to simplify things and currently is a process 
of about 100 steps that have to be executed in sequence ... some by clicking a 
button ... some by logging in on the CI server (which would probably be a 
problem on shared ASF build nodes).

I doubt this is something you would like to do. I strongly suggest not to.

So please keep in mind that you can't have any non-technical user 
credentials on a shared CI server node ... you could probably do that if you 
have a dedicated Release Manager machine (sort of: you start it up, provide 
the needed credentials, and after the release the VM is discarded), but it's 
totally out of the question for a shared machine. And as Apache doesn't have 
technical users outside of infra, well, I guess the only options you have are:

- dedicated RM machine
- very lengthy release process with lots of manual steps

Chris

On 14.04.20 at 22:44, "Gabriel Bräsc

Re: Jenkins

2020-04-15 Thread Vladimir Sitnikov
Alex>Ideas for improvement are welcome

Please don't get me wrong, however, I have the following feedback for
Apache JMeter (see [1]):

Milamber>Before, with the ant release way, ~21 command lines to execute to
Milamber>prepare the Vote email
Milamber>After, with the gradle way, 1 command line(*): ./gradlew
Milamber>prepareVote -Prc=4 -Pasf -PuseGpgCmd

I would recommend giving Gradle a try.
It makes it easy to create and maintain release-like automation sequences.

You might notice that JMeter's was an Ant -> Gradle migration; however, I have
a similar improvement for Apache Calcite with a Maven -> Gradle migration.

[1]:
https://lists.apache.org/thread.html/c08780952b7fbd71eccff717cb494562723e11dc99ea1b4bb39af461%40%3Cdev.jmeter.apache.org%3E

Vladimir


Website publishing Jenkins job using Docker issue

2020-04-15 Thread Udi Meiri
Hi builds,

I'm trying to debug a website publishing issue for the Beam project.
We use a ruby:2.5-based Docker image to generate the website. Here's a
successful website test job running on one of Beam's own Jenkins machines
(with extra debugging information):
https://builds.apache.org/job/beam_PreCommit_Website_Commit/3575/

The /repo container directory is using a "bind mount" mapped to the git
repo
(/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Website_Commit/src).
On this successful run the ownership is the uid:gid of the host's jenkins
user:

drwxrwxr-x 18 1017 1018 4096 Apr 14 23:54 /repo


I believe this also used to work on "websites" (1-2 months ago), but no
longer does: https://builds.apache.org/job/beam_PostCommit_Website_Test_PR/17/

Specifically, I believe the issue is with the ownership of /repo, which is:

drwxr-xr-x 18 root root 4096 Apr 14 23:54 /repo


I would appreciate any help with this.
Is the Docker configuration on "websites" more strict? Perhaps a recent
change?
Is it using AppArmor or SELinux, and could that be the issue?
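
For reference, this is how the ownership can be checked from inside a
container, plus one way to see whether the daemon remaps users, which is one
possible cause of root:root ownership (a diagnostic sketch only, not
necessarily the actual cause here):

    # With a plain bind mount, /repo should show the host uid:gid inside
    # the container (e.g. 1017:1018, as in the successful run above)
    docker run --rm -v "$PWD:/repo" ruby:2.5 ls -ld /repo

    # Check the daemon's security options for userns remapping
    docker info --format '{{json .SecurityOptions}}'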


Re: Website publishing Jenkins job using Docker issue

2020-04-15 Thread Chris Lambertus
Please see my post to this list from last week regarding this issue and the fix:

https://lists.apache.org/thread.html/r00c669dd82bbde47958e81ecb330116de131d774e2d4df26a06fe92f%40

> On Apr 15, 2020, at 11:34 AM, Udi Meiri  wrote:
> 
> Hi builds,
> 
> I'm trying to debug a website publishing issue for the Beam project.
> We use a ruby:2.5-based Docker image to generate the website. Here's a
> successful website test job running on one of Beam's own Jenkins machines
> (with extra debugging information):
> https://builds.apache.org/job/beam_PreCommit_Website_Commit/3575/
> 
> The /repo container directory is using a "bind mount" mapped to the git
> repo
> (/home/jenkins/jenkins-slave/workspace/beam_PreCommit_Website_Commit/src).
> On this successful run the ownership is the uid:gid of the host's jenkins
> user:
> 
> drwxrwxr-x 18 1017 1018 4096 Apr 14 23:54 /repo
> 
> 
> I believe this also used to work on "websites" (1-2 months ago), but no
> longer does: https://builds.apache.org/job/beam_PostCommit_Website_Test_PR/17/
> 
> Specifically, I believe the issue is with the ownership of /repo, which is:
> 
> drwxr-xr-x 18 root root 4096 Apr 14 23:54 /repo
> 
> 
> I would appreciate any help with this.
> Is the Docker configuration on "websites" more strict? Perhaps a recent
> change?
> Is it using AppArmor or SELinux, and could that be the issue?