> On Jun 6, 2016, at 8:32 AM, Rob Nagler <openmpi-wo...@q33.us> wrote:
>
> Thanks, John. I sometimes wonder if I'm the only one out there with this
> particular problem.
FWIW: I haven't seen it before.

> Ralph, thanks for sticking with me. :) Using a pool of uids doesn't really
> work due to the way cgroups/containers work. It also would require changing
> the permissions of all of the user's files, which would create issues for
> JupyterHub's access to the files, which is used for in situ monitoring.

Not sure I understand the issue, but I have no knowledge of Jupyter or why
you are using it. From what I can see, it appears that your choice of tools
may be complicating your solution - I'd suggest focusing on solving the
problem rather than trying to force-fit your current tools, but that
presumes you don't have some particular attachment to those tools.

> Docker does not yet handle uid mapping at the container level (1.10 added
> mappings for the daemon). We have solved this problem
> <https://github.com/radiasoft/containers/blob/fc63d3c0d2ffe7e8a80ed1e2a8dc44a33c08cb46/bin/build-docker.sh#L110>
> by adding a uid/gid switcher at container startup for our images (sketched
> at the end of this message). The trick is to change the uid/gid of the
> "container user" with usermod and groupmod. This only works, however, with
> images we provide. I'd like a solution that allows us to start
> arbitrary/unsafe images, relying on cgroups to do their job.

That isn't the security hole - the issue is that Docker doesn't prevent the
user from escalating to privileged state, which means the user can become
root. Yes, that is only root within the container - but the network and
other third-party services can still be attacked from there. Cgroups don't
really solve that problem: the kernel still thinks the user is the one you
originally set for the container and constrains resources accordingly, but
it provides no authentication protection.

> Gilles, the containers do lock the user down, but the problem is that the
> file system space has to be dynamically bound to the containers across the
> cluster. JupyterHub solves this problem by understanding the concept of a
> user and providing a hook to change the directory to be mounted.
>
> Daniel, we've had bad experiences with ZoL. Its allocation algorithm
> degrades rapidly when the file system gets over 80% full, and it still is
> not integrated into the major distros, which leads to dkms nightmares on
> system upgrades. I don't really see Flocker helping here, because the
> problem is the scheduler, not the file system. We know which directory we
> have to mount from the cluster file system; we just need the scheduler to
> let us mount it into the container that is running slurmd (see the last
> sketch below).
>
> I'll play with Slurm Elastic Compute this week to see how it works.
>
> Rob
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: https://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/06/29382.php
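
For concreteness, here is a minimal sketch of the kind of uid/gid-switching
entrypoint Rob describes above. It assumes the image bakes in a user named
"container" and receives the desired ids as RUN_UID/RUN_GID environment
variables; those names are illustrative assumptions, not the exact ones used
in the linked build-docker.sh:

    #!/bin/bash
    # Hypothetical entrypoint: remap the image's baked-in user to the
    # uid/gid requested at "docker run" time, then drop privileges and
    # exec the job command.
    set -euo pipefail

    groupmod -g "$RUN_GID" container
    usermod  -u "$RUN_UID" -g "$RUN_GID" container
    chown -R "$RUN_UID:$RUN_GID" /home/container

    # setpriv (util-linux) execs the command under the new ids.
    exec setpriv --reuid "$RUN_UID" --regid "$RUN_GID" --init-groups "$@"

As noted, this only helps for images that ship this entrypoint; an arbitrary
image never runs it.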
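For reference, the daemon-level mapping that Docker 1.10 did add
(--userns-remap) looks roughly like this. It remaps container root to an
unprivileged range of host ids, but the mapping is fixed for the whole
daemon rather than settable per container, which is why it doesn't give a
per-user answer here:

    # Give the conventional "dockremap" user a range of subordinate ids.
    echo "dockremap:100000:65536" | sudo tee -a /etc/subuid
    echo "dockremap:100000:65536" | sudo tee -a /etc/subgid

    # "default" tells the daemon to create and use the dockremap user;
    # root in every container then maps to host uid 100000.
    # (The standalone "dockerd" command arrived in later releases.)
    sudo docker daemon --userns-remap=default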
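Finally, a hypothetical illustration of the mount problem itself: the paths,
ids, image name, and command below are all made up, but this is the shape of
the "docker run" the scheduler would need to issue on whichever node hosts
the user's job:

    # Bind the user's directory from the cluster file system into the
    # container, and hand the matching ids to the entrypoint sketched
    # above so files keep the right ownership.
    docker run --rm \
      -e RUN_UID=1234 -e RUN_GID=1234 \
      -v /cluster/home/alice:/home/container \
      radiasoft/sim-image mpirun -n 8 ./run-sim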