You can improve performance by using --bind-to socket or --bind-to numa, as
this will keep each process inside the same memory region. You can also help
separate the jobs by using the --cpu-set option to tell each job which cpus it
should use - we'll stay within that envelope.
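For example, the two jobs could be launched something like this (a sketch
only - the executable ./my_app, the rank counts, and the cpu lists are
placeholders; adjust them to your machine's actual core numbering):

    # job 1: bind within the NUMA region, restricted to cores 0-7
    mpirun --bind-to numa --cpu-set 0,1,2,3,4,5,6,7 -np 8 ./my_app

    # job 2: same, but restricted to cores 8-15 so the two jobs don't overlap
    mpirun --bind-to numa --cpu-set 8,9,10,11,12,13,14,15 -np 8 ./my_app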
On Tue, Aug 12, 2014 at 8:33, [...] wrote:
On 12.08.2014 at 16:57, Antonio Rago wrote:
> Brilliant, this works!
> However I have to say that it seems the code performs slightly worse.
> Is there a way to instruct mpirun on which cores to use, and maybe create this
> map automatically with Grid Engine?
In the open [...]
Brilliant, this works!
However I have to say that it seems the code performs slightly worse.
Is there a way to instruct mpirun on which cores to use, and maybe create this
map automatically with Grid Engine?
Thanks in advance
Antonio
On 12 Aug 2014, at 14:10, Jeff Squyres wrote:
The quick and dirty answer is that in the v1.8 series, Open MPI started binding
MPI processes to cores by default.
When you run 2 independent jobs on the same machine in the way in which you
described, the two jobs won't have knowledge of each other, and therefore they
will both start binding at the same cores, oversubscribing them.
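You can see where each rank gets bound with --report-bindings, and switch the
default off per job with --bind-to none (a minimal sketch; ./my_app and the
rank count are placeholders):

    # print each rank's binding to stderr at launch
    mpirun --report-bindings -np 4 ./my_app

    # disable binding entirely so two jobs no longer pin to the same cores
    mpirun --bind-to none -np 4 ./my_app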
Dear mailing list
I'm running into trouble in the configuration of the small cluster I'm managing.
I've installed openmpi-1.8.1 with gcc 4.7 on CentOS 6.5 with infiniband
support.
Compilation and installation were all ok and I can compile and actually run
parallel jobs, both directly or by submitting them through Grid Engine.
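For reference, a Grid Engine submission here looks roughly like this (a sketch
only - the parallel environment name "orte", the slot count, and the job and
executable names are assumptions, not the actual site configuration):

    #!/bin/bash
    #$ -N mpi_test          # job name (hypothetical)
    #$ -pe orte 16          # request 16 slots from a PE named "orte" (assumed)
    #$ -cwd
    # Grid Engine sets $NSLOTS; Open MPI's SGE support picks up the allocation
    mpirun -np $NSLOTS ./my_app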