Re: Creating Jenkins slaves using kubernetes-plugin that restart on node failures

2017-10-23 Thread Art Baldini
Hi Sam, We are still trying to come up with the best work-around. Currently we are kicking off the build/job and returning immediately. Then we have another job that monitors the status of the first job. Definitely not an ideal situation, but it works for now. On Wed, Oct 18, 2017 at 12:32 AM,
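
A rough sketch of that trigger-and-poll pattern in scripted Pipeline (the job name 'downstream-build' and the timeout are placeholders, and the Jenkins API calls in the monitor job typically need script approval when sandboxed):

    // Job 1: kick off the long-running build without tying up this executor.
    build job: 'downstream-build', wait: false

    // Job 2 (the monitor): poll the downstream job until its last build finishes.
    // Simplified: in practice you would track the specific build number you triggered.
    def job = Jenkins.instance.getItemByFullName('downstream-build')
    timeout(time: 60, unit: 'MINUTES') {
        waitUntil {
            def run = job.getLastBuild()
            return run != null && !run.isBuilding()
        }
    }
    echo "Downstream result: ${job.getLastBuild().getResult()}"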

Re: Creating Jenkins slaves using kubernetes-plugin that restart on node failures

2017-10-18 Thread Sam Beckwith III
Howdy, Cooper99! I am encountering this as well, sir. If I may ask, how did you resolve this? You understand this but others may not: this is a tricky situation. Without the functionality to fail the job on disconnection from the node in this context, we end up in an endlessly suspended/wait
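
A minimal sketch of one way to bound that wait (label and duration are placeholders, and abort behaviour when the agent is already gone has varied across plugin versions) is to wrap the node block in a timeout step:

    podTemplate(label: 'k8s-agent') {
        // Abort the build after 30 minutes instead of waiting indefinitely
        // for an agent that will never reconnect.
        timeout(time: 30, unit: 'MINUTES') {
            node('k8s-agent') {
                sh 'make test'
            }
        }
    }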

Re: Creating Jenkins slaves using kubernetes-plugin that restart on node failures

2017-08-29 Thread Cooper99
Hi Carlos, Thanks for the prompt reply. What I have seen is that when the node is deleted, the slave/pod doesn't crash; it is just deleted. Then the Jenkins master just sits there waiting for the slave to return with the following output: Cannot contact default-6b0e4a2d33a: java.io.IOException:

Re: Creating Jenkins slaves using kubernetes-plugin that restart on node failures

2017-08-29 Thread Carlos Sanchez
It doesn't restart the agents because, as soon as the agent crashes, the build will fail. So there is no point in restarting them. On Tue, Aug 29, 2017 at 5:30 PM, Cooper99 wrote: > I am new to Jenkins so this may be a simple question. I am using the > kubernetes-plugin to dynamically create Jenki
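
If the goal is simply to survive a lost pod, one common pattern (a sketch, assuming it is acceptable to re-run the whole body from scratch; label and commands are placeholders) is to wrap the node block in retry, so the next attempt provisions a fresh pod:

    podTemplate(label: 'k8s-agent') {
        // If the agent dies mid-build, the failed attempt is discarded and
        // the body re-runs on a newly provisioned pod (up to 3 attempts).
        retry(3) {
            node('k8s-agent') {
                checkout scm
                sh './build.sh'
            }
        }
    }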

Creating Jenkins slaves using kubernetes-plugin that restart on node failures

2017-08-29 Thread Cooper99
I am new to Jenkins so this may be a simple question. I am using the kubernetes-plugin to dynamically create Jenkins slaves. The one thing I have noticed when using the plugin to create the slaves is that if the node a slave pod is running on gets deleted, the slave pod is not restarted. I
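
For context, a minimal sketch of the kind of scripted pipeline being described (the label and container image are placeholders, following the plugin's usual podTemplate/containerTemplate pattern):

    // Each build of this pipeline gets a throwaway pod as its slave.
    podTemplate(label: 'k8s-agent', containers: [
        containerTemplate(name: 'maven', image: 'maven:3.5-jdk-8', ttyEnabled: true, command: 'cat')
    ]) {
        node('k8s-agent') {
            stage('Build') {
                container('maven') {
                    sh 'mvn -version'
                }
            }
        }
    }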