I've found that if I use JGit instead of a system-installed Git, the webhook
works as expected.
We did see a different issue, where the Git Publisher failed when trying to
push tags during the post-build step:
The recommended git tool is: NONE
using credential xxx
Pushing tag v25.34 to repo origin
RefSpec is
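For reference, switching a job onto JGit can be sketched as a Pipeline checkout step. This is a hedged sketch, not a confirmed fix: the `jgit` tool name assumes a JGit installation has been added under Manage Jenkins » Global Tool Configuration, and the repository URL and branch are placeholders.

```groovy
// Hedged sketch: make this job use JGit rather than command-line git.
// Assumes a Git installation named "jgit" (type JGit) exists in
// Global Tool Configuration; URL and branch below are placeholders.
checkout([
    $class: 'GitSCM',
    gitTool: 'jgit',
    branches: [[name: '*/main']],
    userRemoteConfigs: [[url: 'https://github.com/example/repo.git']]
])
```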
:32:22 AM UTC-6 Ross Bender wrote:
> I've noticed the same happening every time. When I push to Github:
>
>- Job 1 successfully receives webhook and build is triggered.
>- Job 2 errors receiving webhook and build is not triggered. Github
>hook log shows &
roubleshoot the issue?
On Wednesday, November 25, 2020 at 7:05:58 PM UTC-6 Ross Bender wrote:
> Thanks for the reply Mark.
>
> We do have git installed/configured on the controller and the hook does
> succeed for some builds, but not all. The hook succeeds for other builds
> that als
r the PortableGit installer. Be sure that
> the location where command line git is being found has a copy of ssh.exe
> available in the bin directory adjacent to the location where git is located
>
>
> On Wed, Nov 25, 2020 at 1:15 PM Ross Bender wrote:
>
>> I have a build
I am having the same issue. I have an account and can log into
https://accounts.jenkins.io/, but when I try logging into
https://issues.jenkins.io/ I receive the error "Sorry, your username and
password are incorrect - please try again."
My username is l3ender. Can someone advise or assist? Thanks!
I have a build job configured for "GitHub hook trigger for GITScm polling".
The build succeeds when triggered manually, but always fails when it is
triggered via a GitHub webhook!
Additionally, the other strange thing is I have a different build job
pointing to the same repository which also is c
able for argument types:
(org.jfrog.gradle.plugin.artifactory.ArtifactoryPluginBase$_addModuleInfoTask_closure1)
values:
[org.jfrog.gradle.plugin.artifactory.ArtifactoryPluginBase$_addModuleInfoTask_closure1@34f76b3]
Please let me know if you have any ideas on how to fix this issue in later
versions of the Artifactory plugin.
Thanks,
nkins_home/jobs/docker-example/workspace# cat .3413c672/script.sh
#!/bin/sh -xe
true
Any hints? I'm reproducing with the following workflow script:
node {
withDockerContainer('ubuntu:latest') {
sh 'true'
}
}
Thanks,
Ross
[1] - https://issues.jenkins-ci.org/browse/JENKINS-28821
tempt
because that was the first time the Jenkins slave on that host had been
restarted since the installation of the outdated jar.
On Wednesday, June 10, 2015 at 3:16:14 PM UTC-7, Ross Oliver wrote:
>
> Greetings,
>
> I am running a Jenkins master and several slaves all on Mac OS 10.
Build step 'Execute shell' marked build as failure
Finished: FAILURE
This job also succeeds on other slaves. I've tried disconnecting and
reconnecting the slave several times, and verified that the slave.jar file
was removed and replaced each time. I cleared the .jenkins jar cache on both
I’m trying to set up a job with an incremental SCM trigger and a clean
workspace nightly trigger.
From what I can see, the right way to do this is to set up both triggers
(SCM and Build Periodically), and then set “Delete workspace before build
starts” so that it checks the parameter BUILD_C
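In a Pipeline job, the same idea can be sketched as below. This is a hedged sketch: it uses `currentBuild.getBuildCauses` with the timer-trigger cause class, which is available in reasonably recent Pipeline versions.

```groovy
// Hedged sketch: wipe the workspace only when the nightly timer fired,
// so SCM-triggered builds stay incremental.
node {
    def nightly = currentBuild.getBuildCauses(
        'hudson.triggers.TimerTrigger$TimerTriggerCause')
    if (nightly) {
        deleteDir()   // clean the workspace before checkout
    }
    checkout scm
    // ...incremental build steps...
}
```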
You have two jobs, Job1 calling Job2. The parameter $JOB_NAME is local to
each job, so in Job2 the value of $JOB_NAME is Job2.
Add an additional param in Job1 ($JOB_1_NAME=$JOB_NAME) and use it in Job2.
FYI, an empty value means the parameter is undefined. Check the spelling, too.
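As a sketch in Pipeline terms (hedged; the parameter name JOB_1_NAME follows the suggestion above, and Job2 must be parameterized to accept it):

```groovy
// Hedged sketch: Job1 passes its own JOB_NAME to Job2 under a new
// name, since JOB_NAME is always local to the job that reads it.
build job: 'Job2', parameters: [
    string(name: 'JOB_1_NAME', value: env.JOB_NAME)
]
// In Job2 (parameterized with a JOB_1_NAME string parameter):
//   echo "Upstream job was ${params.JOB_1_NAME}"
```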
I have a nifty Jenkins installation that was running on a Dell Laptop, with 3
windows slaves. The hard disk on the laptop died (MBR issue), so the Laptop is
being repaired.
I was able to migrate the Windoze Jenkins_HOME data to my iMac, and moved the
Jenkins master onto the iMac…. however I'm f
s, so using
slaves and SSD drives cuts build times from 1-2 hours to 5-10 minutes.
This would not be possible on a MULTI core CPU with 10 executors.
But setting slaves up may not be necessary in your case.
-Regards
Ross
Howdy sports fans.
I'm working with Distributed builds, and I have a interesting problem.
I discovered that using SSD drives was a big help in speeding up the builds of
large files (my projects involve crunching large datasets for digital IC
testers).
Some of my build projects are 30Gb or larger.
This is a re-post with another tidbit regarding adjusting the skip=xx parameter
for different WINDOZE flavors.
Just an FYI:
For users doing builds in Microsoft Visual Studio 6, you might wonder "Why do
my builds fail?"
When you start MSDEV interactively, you normally get all kinds of include/lib
settings based on the Visual Studio registry entries. Under Jenkins, these
environment variables are not set… sometimes not even by vcvars32.bat.
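A hedged sketch of working around this on a Windows node: call VCVARS32.BAT explicitly in the build step before invoking MSDEV, so the include/lib environment is set even without an interactive session. The install path and project names below are placeholders, not a confirmed setup.

```groovy
// Hedged sketch: load the VS6 environment in the build step itself,
// since a Jenkins agent running as a service does not inherit the
// interactive MSDEV setup. Paths and project names are placeholders.
node('windows') {
    bat '''
        call "C:\\Program Files\\Microsoft Visual Studio\\VC98\\Bin\\VCVARS32.BAT"
        msdev MyProject.dsw /MAKE "MyProject - Win32 Release" /REBUILD
    '''
}
```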
Hi Kamal,
Since your post appeared in Google's archive as a continuation of
my thread, it gives the impression that you looked at my mail and
then replied, substituting your title for mine. If you didn't do
that, maybe it's a bug somewhere; my apologies.
Bill
Kamal Ahmed wrote:
> Bill,
>