Issue Type: New Feature
Affects Versions: current
Assignee: Francis Upton
Components: ec2
Created: 15/Jul/14 7:34 AM

Description:
If user misconfiguration or network issues cause an EC2 instance to launch successfully but fail to come up and connect the agent - e.g. because of a problem in an init script - the EC2 plugin will simply try to launch another one. And another, and another, until it hits the per-instance cap or the total instance count limit, whichever comes first.

It seems it would be a good idea to track whether a given node type launches a Jenkins slave within a user-specified timeout (say, 45 minutes by default, given EC2's hourly billing). If it fails, the instance should be terminated and the node destroyed, rather than left running and burning resources as the EC2 plugin currently does. If a retry of the launch also fails, that would be reasonable cause for marking the node type as broken until re-enabled by the admin.

Thoughts?

If this sounds like a reasonable idea, and there aren't any major architectural reasons that make it impractical, I'd be interested in working on it and/or potentially funding work on it if it's not too big a project.
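To make the proposal concrete, here is a minimal sketch of the tracking logic being suggested. All names here (`LaunchWatchdog`, `recordLaunch`, the retry threshold of 2) are hypothetical illustrations, not the EC2 plugin's actual API; a real implementation would hook into the plugin's provisioning and agent-connection callbacks.

```java
import java.time.Duration;
import java.time.Instant;

// Hypothetical sketch of per-node-type launch tracking; not the EC2 plugin's real API.
public class LaunchWatchdog {
    private final Duration timeout;      // e.g. 45 minutes by default
    private int failedLaunches = 0;
    private boolean markedBroken = false;

    public LaunchWatchdog(Duration timeout) {
        this.timeout = timeout;
    }

    /**
     * Records the outcome of one launch attempt. Returns true if the agent
     * connected before the timeout elapsed; otherwise counts a failure and,
     * after the initial attempt plus one retry have both failed, marks the
     * node type as broken so no further instances are provisioned.
     */
    public boolean recordLaunch(Instant launchedAt, Instant agentConnectedAt) {
        boolean onTime = agentConnectedAt != null
                && Duration.between(launchedAt, agentConnectedAt).compareTo(timeout) <= 0;
        if (onTime) {
            failedLaunches = 0;          // a success resets the failure count
        } else {
            failedLaunches++;            // a real implementation would also terminate the instance here
            if (failedLaunches >= 2) {   // initial launch + one retry both failed
                markedBroken = true;     // requires admin re-enable before provisioning resumes
            }
        }
        return onTime;
    }

    public boolean isMarkedBroken() {
        return markedBroken;
    }
}
```

The key design point is that a timed-out launch is treated as a hard failure: the instance is terminated rather than left running, and repeated failures disable the node type instead of looping until an instance cap is hit.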
Environment: EC2-plugin 1.23
Project: Jenkins
Priority: Minor
Reporter: Craig Ringer
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira