I’m afraid I’m not familiar with JupyterHub or Salt at all. All you really
need is:

* a scheduler that understands the need to start all the procs at the same time 
- i.e., as a block

* wireup support for the MPI procs themselves

If JupyterHub can do the first, then you could just have it launch the set of 
ORTE daemons by creating a hostfile with the IP addresses of the Docker
containers and using the “orte-dvm” command, and then use “mpiexec” to start 
the application against those daemons. The daemons would provide the wireup 
support. This could then be streamlined later by adding a plugin to ORTE to get 
the “names” of the Docker containers without putting them in a hostfile.
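
Concretely, the sequence might look something like the rough sketch below. It is only an illustration of the idea, not a tested recipe: option names vary somewhat between Open MPI releases (on some releases the submit tool is orte-submit rather than mpiexec), and the addresses and slot counts are placeholders, so check the man pages on your install.

    # 1. Build a hostfile from the Docker containers' IP addresses
    #    (placeholder values shown here).
    cat > hostfile <<EOF
    10.0.0.11 slots=4
    10.0.0.12 slots=4
    EOF

    # 2. Start the persistent set of ORTE daemons (the DVM) across those
    #    hosts, writing the DVM's contact URI to a file.
    orte-dvm --hostfile hostfile --report-uri dvm.uri &

    # 3. Launch MPI jobs against the running daemons, which provide the
    #    wireup support. Depending on the release this is mpiexec or
    #    orte-submit pointed at the DVM's URI file.
    orte-submit --hnp file:dvm.uri -np 8 ./my_mpi_app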

HTH
Ralph

> On Jun 2, 2016, at 3:58 PM, Rob Nagler <openmpi-wo...@q33.us> wrote:
> 
> We would like to use MPI on Docker with arbitrarily configured clusters (e.g. 
> created with StarCluster or bare metal). What I'm curious about is if there 
> is a queue manager that understands Docker, file systems, MPI, and OpenAuth. 
> JupyterHub does a lot of this, but it doesn't interface with MPI. Ideally, 
> we'd like users to be able to queue up jobs directly from JupyterHub.
> 
> Currently, we can configure and initiate an MPI-compatible Docker cluster 
> running on a VPC using Salt. What's missing is the ability to manage a queue 
> of these clusters. Here's a list of requirements:
> 
> * JupyterHub users do not have Unix user ids
> * Containers must be started as a non-root guest user (--user)
> * JupyterHub user's data directory is mounted in container
> * Data is shared via NFS or other cluster file system
> * sshd runs in container for MPI as guest user
> * Results have to be reported back to GitHub user
> * MPI network must be visible (--net=host)
> * Queue manager must be compatible with the above
> * JupyterHub user is not allowed to interact with Docker directly
> * Docker images are user selectable (from an approved list)
> * Jupyter and MPI containers started from same image
> 
> Know of a system which supports this?
> 
> Our code and config are open source, and your feedback would be greatly 
> appreciated.
> 
> Salt configuration: https://github.com/radiasoft/salt-conf
> Container builders: https://github.com/radiasoft/containers/tree/master/radiasoft
> Early phase wiki: https://github.com/radiasoft/devops/wiki/DockerMPI
> 
> Thanks,
> Rob
> 
