> On Jan 28, 2020, at 8:02 PM, Chris Lambertus <c...@apache.org> wrote:
> 
> 
> Allen, can you elaborate on what a “proper” implementation is?  As far as I 
> know, this is baked into jenkins. We could raise process limits for the 
> jenkins user, but these situations only tend to arise when a build has gone 
> off the rails.
> 

        You are correct: the limitations come from the implementation of the 
jenkins slave jar.  Ideally it would run the slave.jar as one user and the 
executors as one or more other users.  Or at least use cgroups on Linux, RBAC 
on Solaris, jails on FreeBSD, and so on, to do a minimal amount of work to 
protect itself.  Instead, it depends upon the good will of spawned processes 
not to shoot it or anything else running on the box.  That works great for the 
absolutely simple case, but completely falls apart for anything beyond running 
a handful of shell commands.
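
        For what it's worth, the cgroup idea is not much code.  Here is a 
minimal sketch of confining a build command under cgroup v2 (this assumes the 
unified hierarchy is mounted at /sys/fs/cgroup, that the pids and memory 
controllers are enabled for the parent group, root privileges, and a made-up 
group name -- it is an illustration, not what Jenkins does):

    #!/usr/bin/env python3
    # Sketch: run a build command inside a cgroup v2 group so a fork bomb
    # or memory hog cannot take down the agent or anything else on the box.
    # Assumes cgroup v2 at /sys/fs/cgroup, root, controllers enabled in the
    # parent's cgroup.subtree_control.  Usage: confine.py <build command...>
    import os
    import subprocess
    import sys

    CG = "/sys/fs/cgroup/jenkins-executor-0"   # hypothetical group name

    os.makedirs(CG, exist_ok=True)
    # Cap the number of tasks and the memory the build tree may use.
    with open(os.path.join(CG, "pids.max"), "w") as f:
        f.write("512")
    with open(os.path.join(CG, "memory.max"), "w") as f:
        f.write(str(4 * 1024**3))              # 4 GiB

    def enter_cgroup():
        # Runs in the child between fork() and exec(); once the child is in
        # the group, everything it spawns inherits the same limits.
        with open(os.path.join(CG, "cgroup.procs"), "w") as f:
            f.write(str(os.getpid()))

    build = subprocess.run(sys.argv[1:], preexec_fn=enter_cgroup)
    sys.exit(build.returncode)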

        That is why I consider it idiotic.  There are ways Jenkins could have 
done some work to prevent this situation from occurring, but alas that is not 
the case.  Yes, it would require more setup on the client side, but for those 
places that need it (i.e., most of them) it would have been worth it.

        Instead, on-prem operators are pretty much forced to build a ton of 
complex machinery to prevent users from wreaking havoc. [1] Or give up and 
either move Jenkins to the cloud or dump Jenkins entirely.


[1] - The best on-prem solution I came up with (before I moved my $DAYJOB 
stuff to cloud) was to run each executor in a VM on the box.  That VM also had 
a regularly scheduled job that would cause it to wipe itself and respawn via a 
trigger mechanism.  Yeah, it completely sucks, but at least it affords a lot 
more safety.
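
A sketch of what such a wipe-and-respawn trigger could look like, using the 
libvirt Python bindings (the domain and snapshot names here are made up, and 
"wipe" is modeled as a revert to a clean snapshot; my actual setup differed in 
the details):

    #!/usr/bin/env python3
    # Sketch: revert an executor VM to a known-clean snapshot and boot it
    # again.  Domain and snapshot names are hypothetical.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("jenkins-executor-0")

    if dom.isActive():
        dom.destroy()                  # hard power-off; the state is disposable

    snap = dom.snapshotLookupByName("clean", 0)
    dom.revertToSnapshot(snap, 0)      # roll the disks back to the baseline

    if not dom.isActive():
        dom.create()                   # boot the freshly wiped VM
    conn.close()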
