Mark, thanks for the link.
I tried to read between the lines, and "found" that in the case of TORQUE+munge, munge might be required only on admin nodes and submission hosts (which could be restricted to login nodes on most systems). On the other hand, SLURM does require munge on compute nodes, even if there is no job submission from compute nodes.

Cheers,

Gilles

On 2015/03/26 17:33, Mark Santcroos wrote:
> Hi guys,
>
> Thanks for the follow-up.
>
> It appears that you are ruling out that Munge is required because the system
> runs TORQUE, but as far as I can see Munge is/can be used by both SLURM and
> TORQUE.
> (http://docs.adaptivecomputing.com/torque/4-0-2/Content/topics/1-installConfig/serverConfig.htm#usingMUNGEAuth)
>
> If I misunderstood the drift, please ignore ;-)
>
> Mark
>
>
>> On 26 Mar 2015, at 5:38 , Gilles Gouaillardet
>> <gilles.gouaillar...@iferc.org> wrote:
>>
>> On 2015/03/26 13:00, Ralph Castain wrote:
>>> Well, I did some digging around, and this PR looks like the right solution.
>> ok then :-)
>>
>> following stuff is not directly related to ompi, but you might want to
>> comment on that anyway ...
>>> Second, the running of munge on the IO nodes is not only okay but required
>>> by Luster.
>> this is the first time i hear that.
>> i googled "lustre munge" and could not find any relevant info about that.
>> is this a future feature of Lustre ?
>> as far as i am concerned, only Lustre MDS need a "unix" authentication
>> system (ldap, nis, /etc/passwd, ...) and munge does not provide this service.
>>> Future systems are increasingly going to run the user's job script
>>> (including mpirun) on the IO nodes as this (a) frees up the login node for
>>> interactive editing, and (b) avoids the jitter introduced by running the
>>> job script on the same node as application procs, or wasting a compute node
>>> to just run the job script.
>> that does make sense not to run the script on a compute node.
>> but once again i am surprised ...
>> as far as i am concerned, lustre IO nodes (MDS and/or OSS) do not mount
>> the filesystem
>> (i mean you cannot access the filesystem as if you were on a lustre client).
>> of course, you can write your script so it does not require any access
>> to the lustre filesystem, but that sounds like a lot of pain
>> for a small benefit.
>> /* that is specific to Lustre. GPFS for example can access the
>> filesystem from an IO node */
>>
>> Cheers,
>>
>> Gilles
>> _______________________________________________
>> users mailing list
>> us...@open-mpi.org
>> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>> Link to this post:
>> http://www.open-mpi.org/community/lists/users/2015/03/26539.php
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2015/03/26540.php
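
P.S. For anyone trying to work out which of their nodes actually have a working munge setup: a minimal sanity check, assuming the munge client tools are installed and the local munged daemon is running, is to round-trip a credential on the node in question:

```shell
# Encode a credential with the local munged and immediately decode it.
# Exit status 0 means munge is functional on this node; an error such as
# "Failed to access ..." typically means munged is not running there.
munge -n | unmunge
```

Running this on a login node, a compute node, and (if accessible) an IO node is a quick way to verify whether munge is deployed where your scheduler expects it.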