Not sure exactly how it worked in the 1.x versions, but for pipeline, at
least, you usually have to either interpolate the variable into the string
of the shell script with double quotes:
sh "echo ${ARTIFACT_VERSION}"
Or you have to put them in the env object so they get injected into the
shell's environment.
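For illustration, a minimal sketch of both approaches (the version value is made up):

```groovy
// hypothetical value; normally this would come from an earlier step
def ARTIFACT_VERSION = '1.2.3'
node {
    // double quotes: Groovy interpolates before the shell ever runs
    sh "echo ${ARTIFACT_VERSION}"
    // or inject it into the environment so the shell sees it directly
    withEnv(["ARTIFACT_VERSION=${ARTIFACT_VERSION}"]) {
        sh 'echo $ARTIFACT_VERSION'
    }
}
```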
Node is a build step. Any build steps can be run in a declarative pipeline,
but you can also specify which node to use with 'agent' on a declarative
pipeline. Everything runs initially on a flyweight executor on the master.
Then it farms out the heavy lifting to the appropriate nodes you specify
I would look into your gc, first. Does jenkins write a garbage collection
log? I don't remember. You may need to just give it more memory.
On Friday, November 3, 2017 at 12:42:01 PM UTC-6, t3knoid wrote:
>
> I've recently upgraded Jenkins and plug-ins. After doing so, now every job
> configurat
Yeah, as you found, the environment is the sh is only present during that
run. It is just like dropping to a shell (or running another shell within a
shell), running some things, and then exiting.
If you want to set them globally, you will need to set them in an
environment block in the declarative pipeline.
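A sketch of what that might look like, with a hypothetical ARTIFACT_VERSION value:

```groovy
pipeline {
    agent any
    environment {
        // visible to every sh step in every stage
        ARTIFACT_VERSION = '1.2.3'
    }
    stages {
        stage('Build') {
            steps {
                sh 'echo $ARTIFACT_VERSION'
            }
        }
    }
}
```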
There are 2 different syntaxes. Declarative is the pipeline{} style.
Scripted doesn't necessarily start with node{}, but uses it heavily.
Scripted syntax is basically a straight Groovy DSL. Use it when you
need to do more complex and interesting things and aren't afraid to get
down into the Groovy.
(sharedWorkspace){
>
> //do stuff
>
> }
>
> }
>
> node(‘B’){}
>
> node(‘A’){
>
> ws(sharedWorkspace){
>
> //do more stuff
>
> }
>
> }
>
>
>
>
>
> *From: *Robert Hales
> *Sent: *30 October 2017 03
If you have multiple nodes with a label of A, you will need to declare a
variable in the first "node" block to hold the name of the specific node
you ended up on, and then use that node in the third one:
def allocatedNode
node('A') { allocatedNode = env.NODE_NAME } // NODE_NAME is an environment variable set by Jenkins
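Putting it together, a rough sketch of the whole pattern:

```groovy
def allocatedNode
node('A') {
    allocatedNode = env.NODE_NAME   // remember which 'A' node we landed on
    // build steps
}
node('B') {
    // steps that run on some 'B' node
}
node(allocatedNode) {
    // back on the exact machine the first block used
}
```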
set to offline in Build
> Executor Status in Jenkins webgui, and the job which was running before
> starting the command continues to run.
>
>
> So yeah, that sucks, because there doesn't seem to be a good way to
> offline a jenkins node gracefully.
>
>
>
> On Friday,
>>>> Make sure to follow
>>>> https://wiki.jenkins.io/display/JENKINS/Running+Jenkins+behind+Nginx if
>>>> Nginx is configured as a reverse proxy.
>>>>
>>>> Notably proxy_http_version 1.1; and proxy_request_buffering off; are
>
>
> root@jenkins:~# find / -name offline-node
>
> root@jenkins:~#
>
> root@jenkins:~# dpkg -l | grep jenkins
> ii jenkins 2.73.2 (...)
>
>
>
> On Friday, October 27, 2017 at 12:21:17 AM UTC+9, Robert Hales wrote:
>>
>
How are you calling the jobs? IE, how does the pipeline job trigger the
other pipeline or freestyle jobs?
On Thursday, October 26, 2017 at 10:11:59 AM UTC-6, Chris Overend wrote:
>
> I have:
>
> booleanParam(name: 'force_build', value: force_build.toBoolean())
>
> This works
> Pipeline with
In the CLI, use the 'offline-node' command. Another command that might be
useful for what you are trying to do is 'wait-node-offline'.
You could also create a groovy script to do it and run that from the REST
API.
On Thursday, October 26, 2017 at 3:35:29 AM UTC-6, Tomasz Chmielewski wrote:
>
>
There is no guarantee that you will get the same workspace on a node. It
will try, but a workspace can only be in use by 1 job at a time. So if you
require a specific workspace, you have a few options:
Define a custom workspace with the ws step. But I think 2 jobs still can't
use it at the same time.
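A sketch of the ws step, with a made-up path:

```groovy
node('A') {
    ws('/data/shared-workspace') {   // path is hypothetical
        // steps that must run in this fixed directory
    }
}
```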
It is called Pipeline
On Wednesday, October 25, 2017 at 12:44:24 AM UTC-6, TheGrovesy wrote:
>
> I have a single Jenkins item which gets checked out and then has 6
> different build steps for different build variants. On the main Jenkins
> page this is shown as a single item status. It would be
de a script {}.
>
> I find this very very confusing as a new user to pipelines, the
> documentation seems all over the place :(
>
> Now I just need to stop it building the same commit that was push
> triggered when it Branch Indexes.
>
> On Wednesday, 25 October 2017
nux host agent, and
> one stage on window without issue
>
> On Tuesday, October 24, 2017 at 7:43:02 AM UTC-7, Robert Hales wrote:
>>
>> If you specify an agent for the entire pipeline, everything in that
>> pipeline will run in that agent. You need to specify 'ag
I use both these methods on jobs. Usually I use the Login/Password on https
urls. But it definitely works. Not sure how I can give you more information
to help in your situation. It just seems to be bad ID or PW/ssh key.
On Tuesday, October 24, 2017 at 8:48:10 AM UTC-6, Samuel Mutel wrote:
>
>
Does this work if you put it inside a stage{steps{script{}}}? Properties
are not supported like that in Declarative (as it says), but you may be able
to just run it in the script block (basically dropping out of declarative).
On Tuesday, October 24, 2017 at 7:32:10 AM UTC-6, Andy Coates wrote:
>
> He
If you specify an agent for the entire pipeline, everything in that
pipeline will run in that agent. You need to specify 'agent none' on the
pipeline, then specify an agent on each individual stage.
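A minimal sketch of that layout (the labels are hypothetical):

```groovy
pipeline {
    agent none                        // don't grab one node for everything
    stages {
        stage('Build') {
            agent { label 'linux' }   // this stage runs on a linux node
            steps {
                sh 'make'
            }
        }
        stage('Package') {
            agent { label 'windows' } // this one on a windows node
            steps {
                bat 'package.cmd'
            }
        }
    }
}
```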
On Tuesday, October 24, 2017 at 2:38:35 AM UTC-6, Dan Tran wrote:
>
> here is my sample pipeline
You can capture the node name the first time you use the node and then
specify it as the node label in the following node steps.
On Monday, October 23, 2017 at 2:44:37 PM UTC-6, Torsten Reinhard wrote:
>
> Hi,
>
> I have a large build & deploy pipeline with some stages, running on
> differe
Enclosing a 'build' step in a node can be roughly equivalent to enclosing
an ssh command in the node. That ssh command is kicking off some external
process. The build step is kicking off an external process. This seems like
completely normal behavior to me. The jobs are not related to each other
In reference to this: >>it seems strange to me that the pipeline "build"
step triggers jobs without node context. This implies web-interface driven
configuration of a job to be made and the cumbersome parameterized "build"
step syntax to be used to run the job at the proper node.
As far as I kn
You are probably talking freestyle vs. pipeline jobs. Pipeline jobs are the
Groovy-scripted ones.
Really, the pipeline jobs SHOULD only be doing some simple logic and
running a build script (such as an ant build.xml). The pipeline code is
a newer and much more flexible way to do it.
There are a lot of different ways to do this, but it depends on how your
pipeline script is written, whether you are using declarative or scripted
pipeline, what do you want to do when it is null vs. not. In a scripted
pipeline this is a basic programming question:
if (params.FOO) {
    echo "FOO = ${params.FOO}"
} else {
    echo 'FOO was not set'
}
Your question is a little confusing, because the line in the text is
exactly the same as the line in your code. But I think you must mean that
it fails if you don't `def i`.
Yes, if you don't declare the variable, then it will be in something
similar to a global scope for this purpose. I think
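A small illustration of the difference (names made up):

```groovy
def i = 0   // 'def' scopes the variable to this script/block
j = 0       // no 'def': the name lands in the script's binding,
            // behaving like a global and surviving across blocks
```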
The permission problem may also be that your Shell setting in Jenkins main
configuration is set to something invalid. Try removing that altogether so
that it defaults to your normal shell.
On Tuesday, October 17, 2017 at 11:03:56 AM UTC-6, Thor Waway wrote:
>
> Hello,
>
> This is a bit of new
I assume that the temp directory and such has something to do with Jenkins's
way of making the script pauseable and restartable (just a guess).
Either way, it is safer to provide the path to your script to execute. Just
reference it with $WORKSPACE/deploy-script.sh.
I'd be interested in heari
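For example, roughly (assuming deploy-script.sh sits at the workspace root):

```groovy
node {
    checkout scm   // deploy-script.sh is assumed to be in the repo root
    // reference the script by its workspace path rather than a relative one
    sh "bash ${env.WORKSPACE}/deploy-script.sh"
}
```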
There is one guy that knows, but he is on a jungle trek in the amazon with
sketchy internet service. Nobody else knows.
On Sunday, October 15, 2017 at 3:08:12 AM UTC-6, Hemant Wanjari wrote:
>
> Hello Guyz,
>
> Anyone knows how to setup Jenkins slave in jenkins 2.73.2 version?
>
>
>
--
You received this message because you are subscribed to the Google Groups
"Jenkins Users" group.
Looks like this was changed in Jenkins 1.421 (2011/07/17) (JENKINS-8446),
and only affects new installs of
Jenkins.
http://jenkins-ci.org/commit/jenkins/4f0ea9da03301e6c671523cee1b4cf9e40a64c38
On Saturday, October 14, 2017 at 11:22:52 PM UTC-6, Robert Hales wrote:
>
> You seem t
You seem to have answered your own question. By default, the workspaces are
in the workspace directory. The configuration for the jobs is in the jobs
directory. You shouldn't expect to find the workspace under the job itself.
Older versions of Jenkins had the workspace under the individual job
directory.
No, you can't set X in the shell and then read it in the script. Sometimes
I will get a variable by cat'ing the file back to the screen in another
shell step and capture the stdout in a variable.
On Saturday, October 14, 2017 at 4:06:51 AM UTC-6, Idan Adar wrote:
>
> script {
> ...
>
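A sketch of that trick (file name made up):

```groovy
node {
    sh 'echo 1.2.3 > version.txt'   // some earlier step writes the file
    // cat the file in a second shell step and capture stdout in a variable
    def version = sh(script: 'cat version.txt', returnStdout: true).trim()
    echo "version is ${version}"
}
```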
Instead of going to lastSuccessfulBuild/artifact/ in the url, go to
lastBuild/artifact
On Friday, October 13, 2017 at 6:56:33 AM UTC-6, Anna Freiholtz wrote:
>
> Hi Reinhold,
>
> Yes I can find them in each specific build. But I have test frameworks
> producing an error log on file. The file is
I posted a reply to this. It was a head twister, but I think I solved the
problem and learned some interesting things.
On Tuesday, October 10, 2017 at 8:58:44 AM UTC-6, gi...@ziprecruiter.com
wrote:
>
> same here :-\
>
> even created a StackOverflow issue for this:
>
> https://stackoverflow.com
https://groups.google.com/forum/#!topic/jenkinsci-users/yYeedbtXT4g
On Monday, October 9, 2017 at 1:38:24 AM UTC-6, Gilad Baruchian wrote:
>
>
> Did you find any solution for that? I have this problem as well
>
It is hard to answer this without more detail on your jobs. i.e. Is this a
freestyle job? How and where is the environment variable set? Sounds to me
like a scope issue. For example, if you are setting the variable in one
shell script, then trying to get to it in another, it isn't going to work
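To illustrate the scope issue (a contrived sketch):

```groovy
node {
    // each sh step runs in its own shell process, so exported
    // variables do not carry over to the next sh step
    sh 'export FOO=bar'
    sh 'echo "FOO is: $FOO"'   // FOO is empty here
}
```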
Can you set your nagios to not alert so quickly? Make it have a couple more
failures or an extra minute between checks so it goes into a soft state for
a minute first.
Why can't you write a brief downtime to the nagios.cmd file in Jenkins
first?
You could disable notifications using a curl command.
I don't know of any declarative support, but you can still wrap it in a
script{} block in declarative.
On Wednesday, October 4, 2017 at 10:27:01 AM UTC-6, dandeliondodgeball
wrote:
>
> Helpful, thanks.
>
> So there is only a scripted pipeline solution, no support for declarative,
> correct?
>
Sounds like you are trying to use Post in the wrong place - most likely in
a scripted pipeline. If you were using it correctly, but it wasn't
available, the error message would say something about an "Undefined
Section". You can't use "post" in a scripted pipeline. Instead, you have to
use a try/catch/finally block instead.
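In a scripted pipeline, the rough equivalent is a try/catch/finally, something like:

```groovy
node {
    try {
        sh 'make test'
    } catch (err) {
        // roughly what post { failure { ... } } does in declarative
        echo "Build failed: ${err}"
        throw err
    } finally {
        // roughly post { always { ... } }
        echo 'Cleaning up'
    }
}
```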
https://www.cloudbees.com/blog/copying-artifacts-between-builds-jenkins-workflow
On Tuesday, October 3, 2017 at 4:15:47 PM UTC-6, dandeliondodgeball wrote:
>
> I discovered the Copy Artifact plugin and was all happy when it worked for
> me. But when I went back to my configuration which makes
The .txt file shows up on the PipelineSyntax/Global Variables Reference
page.
On Sunday, February 5, 2017 at 6:33:03 PM UTC-7, David Karr wrote:
>
> So I now have a handful of pipeline scripts all reusing some global
> methods, all of which I pasted into each script, so I've started to set up
I agree with this answer. With a pipeline, it is so easy to make logic
decisions and much easier to maintain a single Jenkinsfile that has all the
logic. There is surely going to be some duplicated logic, so now only the
special differences have to be handled, and you don't duplicate code across
jobs.
I agree that polling is not the best use of resources, but I don't think
this option should be killed just because it may not be a good idea.
I use the PollSCM option in several multibranch jobs (Jenkins 2.32.2) and
it works just fine. Sometimes red tape and bureaucracy in a large
enterprise c
You should be able to use the params object in the environment{} block and
assign it to an environment variable.
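A sketch of that, with a hypothetical DEPLOY_ENV parameter:

```groovy
pipeline {
    agent any
    parameters {
        string(name: 'DEPLOY_ENV', defaultValue: 'staging')
    }
    environment {
        // expose the parameter as an environment variable
        TARGET_ENV = "${params.DEPLOY_ENV}"
    }
    stages {
        stage('Deploy') {
            steps {
                sh 'echo deploying to $TARGET_ENV'
            }
        }
    }
}
```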
In your Branch Sources section of your Multibranch job, you just have to
add the "Suppress Automatic SCM Triggering". This will prevent the branch
indexing scan from kicking off a new build when it finds changes.
On Tuesday, September 12, 2017 at 8:56:32 AM UTC-6, ishan jain wrote:
>
> Hi all,
The Parameters can be defined in the pipeline, or in the job configuration,
but if you define it in the pipeline, it will update the job configuration
the first time it runs. So when you add the parameters to the pipeline, and
have none configured in your job, your Build Now button will still be the
plain "Build Now" until that first run registers the parameters.
I don't have time to build a test of this at the moment, but I am pretty
sure you could build your credentials ID in code, based off of your
${env.REGION}, and pass that to the withCredentials like this:
withCredentials([
    string(credentialsId: "$REGION_CRED", variable: 'PW')
]) { ... }
Either the syntax checker isn't smart enough, or the variables aren't
available like that until you get down into the stages. I am suspecting the
former, but just a guess.
This syntax should work fine, though:
SCRIPTS_PATH = "${env.WORKSPACE}/tools/Jenkins/PythonScripts"
On Thursday, Sept
You can use the *post{}* block within a stage in the declarative pipeline:
stage('Run Integration Tests') {
    steps {
        timeout(time: 30, unit: 'MINUTES') {
            retry(1) {
                build job: 'my-integration-test'
            }
        }
    }
    post {
        failure { echo 'integration tests failed' }   // illustrative
    }
}
I would bet your java process is running into excess Garbage Collection,
causing the entire thing to hang. It probably isn't because of the pipeline
itself. Check your garbage collection when this happens to see if you are
hung with full collections.
On Tuesday, September 19, 2017 at 6:18:33 P