Dear Mike, it sounds good... the description fits my purposes... I really missed 
this when I was reading the srun man page! I will give it a try.

Thanks to everybody for the help and support!

F

> On Aug 21, 2014, at 7:58 PM, Mike Dubman <mi...@dev.mellanox.co.il> wrote:
> 
> Hi Filippo,
> 
> I think you can use the SLURM_LOCALID variable (at least with Slurm v14.03.4-2):
> 
> $ srun -N2 --ntasks-per-node 3 env | grep SLURM_LOCALID
> SLURM_LOCALID=1
> SLURM_LOCALID=2
> SLURM_LOCALID=0
> SLURM_LOCALID=0
> SLURM_LOCALID=1
> SLURM_LOCALID=2
> $
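> 
> If it helps, your GPU wrapper could fall back to SLURM_LOCALID when the Open 
> MPI variable is unset - a minimal, untested sketch (wrapper.sh and my_app are 
> placeholder names, and it assumes one GPU per local rank):
> 
> #!/bin/bash
> # use Open MPI's local rank if set, otherwise fall back to Slurm's
> lrank=${OMPI_COMM_WORLD_LOCAL_RANK:-$SLURM_LOCALID}
> # give each local rank its own GPU: rank 0 -> GPU 0, rank 1 -> GPU 1
> export CUDA_VISIBLE_DEVICES=${lrank}
> exec "$@"
> 
> launched, for example, as: srun -N2 --ntasks-per-node 2 ./wrapper.sh ./my_app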
> 
> Kind Regards,
> M
> 
> 
> On Thu, Aug 21, 2014 at 9:27 PM, Ralph Castain <r...@open-mpi.org> wrote:
> 
> On Aug 21, 2014, at 10:58 AM, Filippo Spiga <spiga.fili...@gmail.com> wrote:
> 
>> Dear Ralph
>> 
>> On Aug 21, 2014, at 2:30 PM, Ralph Castain <r...@open-mpi.org> wrote:
>>> I'm afraid that none of the mapping or binding options would be available 
>>> under srun as those only work via mpirun. You can pass MCA params in the 
>>> environment of course, or in default MCA param files.
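>>> 
>>> For example (the standard Open MPI convention; the btl value is only an 
>>> illustration, not a recommendation):
>>> 
>>>   # in the environment: prefix the param name with OMPI_MCA_
>>>   export OMPI_MCA_btl=self,sm,openib
>>>   # or equivalently, add a line "btl = self,sm,openib" to
>>>   # $HOME/.openmpi/mca-params.conf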
>> 
>> I understand. Hopefully I will still be able to pass the LAMA MCA options as 
>> environment variables
> 
> I'm afraid not - LAMA doesn't exist in Slurm, only in mpirun itself
> 
>> ...I fear that by default srun completely takes over the process binding.
>> 
>> 
>> I ran into another problem. On my cluster I have two GPUs and two Ivy Bridge 
>> processors. To maximize the PCIe bandwidth I want to allocate GPU 0 to 
>> socket 0 and GPU 1 to socket 1. I use a script like this:
>> 
>> #!/bin/bash
>> # wrapper: pin each local MPI rank to its own GPU, then run the real command
>> lrank=$OMPI_COMM_WORLD_LOCAL_RANK
>> case ${lrank} in
>> 0)
>>  # local rank 0 sees only GPU 0 (attached to socket 0)
>>  export CUDA_VISIBLE_DEVICES=0
>>  "$@"
>> ;;
>> 1)
>>  # local rank 1 sees only GPU 1 (attached to socket 1)
>>  export CUDA_VISIBLE_DEVICES=1
>>  "$@"
>> ;;
>> esac
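>> 
>> (Invoked, for instance, as "mpirun -np 2 ./wrapper.sh ./my_app" - wrapper.sh 
>> and my_app being placeholder names.)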
>> 
>> 
>> But OMPI_COMM_WORLD_LOCAL_RANK is not defined if I use srun with PMI2 as the 
>> launcher. Is there any equivalent option/environment variable that will help 
>> me achieve the same result?
> 
> I'm afraid not - that's something we added. I'm unaware of any similar envar 
> from Slurm.
> 
> 
>> 
>> Thanks in advance!
>> F
>> 

--
Mr. Filippo SPIGA, M.Sc.
http://filippospiga.info ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert


