t:pe=N" as far as I remember.
Regards,
Tetsuya Mishima
2015/10/06 5:40:33、"users"さんは「Re: [OMPI users] Hybrid OpenMPI+OpenMP
tasks using SLURM」で書きました
Hmmm…okay, try -map-by socket:pe=4
We’ll still hit the asymmetric topology issue, but otherwise this should
work
On Oct 5, 2015, at
>>> mpirun --map-by slot:pe=4 -n 4 ./affinity
>>>
>>> --------------------------------------------------------------------------
>>> There are not enough slots available in the system to satisfy the 4 slots
>>> that were requested by the application
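This error normally means that more ranks were requested than the allocation provides slots for; under the salloc request shown at the end of this listing (--ntasks=2 --cpus-per-task=4), Open MPI presumably sees only 2 slots. A run that fits such an allocation would be, for example:
mpirun --map-by slot:pe=4 -n 2 ./affinity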
p-by core" does not work when pe=N > 1 is specified.
So, you should use "map-by slot:pe=N" as far as I remember.
Regards,
Tetsuya Mishima
2015/10/06 5:40:33、"users"さんは「Re: [OMPI users] Hybrid OpenMPI+OpenMP
tasks using SLURM」で書きました
Hmmm…okay, try -map-by socket:pe
Does anyone have any ideas? Should I record some logs to see what's going on?
>
> Thanks a lot!
>
> Marcin
>
> On 10/06/2015 01:04 AM, tmish...@jcity.maeda.co.jp wrote:
>> Hi Ralph, it's been a long time.
>>
>> The option "map-by core" does not work when pe=N > 1 is specified.
Ralph, maybe I was not precise - most likely --cpu_bind does not work on
my system because it is disabled in SLURM, and this is not caused by any
problem in OpenMPI. I am not certain and I will have to investigate this
further, so please do not waste your time on this.
What do you mean by 'loss of dynamics support'?
I’ll have to fix it later this week - out due to eye surgery today. Looks like
something didn’t get across to 1.10 as it should have. There are other
tradeoffs that occur when you go to direct launch (e.g., loss of dynamics
support) - may or may not be of concern to your usage.
Thanks, Gilles. This is a good suggestion and I will pursue this
direction. The problem is that currently SLURM does not support
--cpu_bind on my system for whatever reasons. I may work towards turning
this option on if that will be necessary, but it would also be good to
be able to do it with mpirun as well.
Marcin,
Did you investigate direct launch (e.g. srun) instead of mpirun? For example, you can do
srun --ntasks=2 --cpus-per-task=4 -l grep Cpus_allowed_list /proc/self/status
Note, you might have to use the srun --cpu_bind option, and make sure
your slurm config does support that.
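An illustrative variant of that test with explicit binding (a sketch, not Gilles's exact command; it assumes the slurm.conf TaskPlugin includes task/affinity so that --cpu_bind is honored) could be:
srun --ntasks=2 --cpus-per-task=4 --cpu_bind=cores -l grep Cpus_allowed_list /proc/self/status
Each task should then report a Cpus_allowed_list restricted to the 4 cores it was bound to.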
I'm doing quite well, thank you. I'm involved in a big project and so very
busy now.
But I still try to keep watching these mailing lists.
Regards,
Tetsuya Mishima
Hi Ralph, it's been a long time.
The option "map-by core" does not work when pe=N > 1 is specified.
So, you should use "map-by slot:pe=N" as far as I remember.
Regards,
Tetsuya Mishima
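For illustration (hypothetical commands, assuming an allocation with at least 8 cores and the 1.10 release discussed in this thread):
mpirun --map-by slot:pe=4 -n 2 ./affinity
binds each of the two ranks to 4 cores, whereas the seemingly equivalent
mpirun --map-by core:pe=4 -n 2 ./affinity
is rejected with the cpus-per-proc error quoted later in this listing.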
Hmmm…okay, try -map-by socket:pe=4
We’ll still hit the asymmetric topology issue, but otherwise this should work
Ralph,
Thank you for a fast response! Sounds very good; unfortunately I get an
error:
$ mpirun --map-by core:pe=4 ./affinity
--------------------------------------------------------------------------
A request for multiple cpus-per-proc was given, but a directive
was also give to map to an object
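A quick way to see the resulting binding without relying on the custom ./affinity program (a suggestion, not something used in the thread) is mpirun's --report-bindings option, e.g.:
mpirun --map-by slot:pe=4 -n 2 --report-bindings ./affinity
which prints the core set assigned to each rank at launch.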
You would presently do:
mpirun --map-by core:pe=4
to get what you are seeking. If we don’t already set that qualifier when we see
“cpus_per_task”, then we probably should do so as there isn’t any reason to
make you set it twice (well, other than trying to track which envar slurm is
using now).
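For example (an illustrative command, assuming the salloc request quoted below), the per-task CPU count can be taken from the environment rather than typed a second time:
mpirun --map-by slot:pe=$SLURM_CPUS_PER_TASK -n 2 ./affinity
SLURM exports SLURM_CPUS_PER_TASK when --cpus-per-task is given, which appears to be the variable Ralph refers to; slot is used here rather than core because of Tetsuya's observation earlier in this listing.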
Yet another question about cpu binding under the SLURM environment.
Short version: will OpenMPI support SLURM_CPUS_PER_TASK for the purpose
of cpu binding?
Full version: When you allocate a job like, e.g., this
salloc --ntasks=2 --cpus-per-task=4
SLURM will allocate 8 cores in total, 4 for each task.
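To make the goal concrete (a sketch of the intended workflow, not part of the original question; ./affinity is assumed to be a small test program that prints each rank's CPU mask):
salloc --ntasks=2 --cpus-per-task=4
mpirun -n 2 ./affinity
The desired outcome is that each of the 2 ranks ends up bound to 4 of the 8 allocated cores, which every rank can verify for itself with
grep Cpus_allowed_list /proc/self/status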