Hello,
Regards,
Mahmood
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users
Dave,
unless you are doing direct launch (for example, using 'srun' instead of
'mpirun' under SLURM),
this is the way Open MPI works: mpirun will use whatever the
resource manager provides
in order to spawn the remote orted (tm with PBS, qrsh with SGE, srun
with SLURM, ...).
then m
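Under the setup described above (a SLURM cluster, and an Open MPI built with SLURM support), the two launch modes might look like the sketch below; the node/process counts and the a.out binary are placeholders:

```shell
# Launch through mpirun: mpirun inherits the SLURM allocation
# and uses srun internally to start the remote orted daemons
salloc -N 2 -n 4 mpirun ./a.out

# Direct launch: srun starts the MPI processes itself, no orted
# involved (requires PMI/PMIx support compiled into Open MPI)
srun -N 2 -n 4 ./a.out
```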
Regards,
Mahmood
Hi,
I have two versions of a small program. In the first one, the process with rank 0
calls the function "master()" and all other ranks call the function "slave()";
in the second one, I have two programs: one for the master task and another
one for the slave task. The run-time for the second v
Hi,
I've been able to install openmpi-v2.0.x-201707270322-239c439 and
openmpi-v2.x-201707271804-3b1e9fe on my "SUSE Linux Enterprise Server
12.2 (x86_64)" with Sun C 5.14 (Oracle Developer Studio 12.5).
Unfortunately "make" breaks for both versions with the same error,
if I use the latest Sun C 5
Hi,
I am stuck on a problem that I don't remember having with previous versions.
When I run a test program with -host, it works: the processes are spawned
on the hosts I specified. However, when I specify -hostfile, it doesn't
work!
mahmood@cluster:mpitest$ /share/apps/computer/openmpi-2.0.1/bin/
Hi Mahmood,
With the -hostfile case, Open MPI is trying to helpfully run things faster by
keeping both processes on one host. Ways to avoid this…
On the mpirun command line add:
-pernode (runs 1 process per node), or
-npernode 1 , but these two have been deprecated in favor of the wonderful
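The two deprecated flags and their current replacement (which shows up later in this thread as --map-by ppr:1:node) can be sketched as follows; the hosts file and a.out are just the names used elsewhere in the thread:

```shell
# Deprecated spellings: one process per node
mpirun -hostfile hosts -pernode a.out
mpirun -hostfile hosts -npernode 1 a.out

# Current replacement: "ppr" = processes per resource,
# here 1 process per node
mpirun -hostfile hosts --map-by ppr:1:node a.out
```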
OK. The next question is how to use it with torque (PBS)? Currently we write
this directive
Nodes=1:ppn=2
which means 4 threads. Then we omit -np and -hostfile in the mpirun command.
On 31 Jul 2017 20:24, "Elken, Tom" wrote:
?? Doesn't that tell pbs to allocate 1 node with 2 slots on it? I don't see
where you get 4
Sent from my iPad
> On Jul 31, 2017, at 10:00 AM, Mahmood Naderan wrote:
Excuse me, my fault. I meant
nodes=2:ppn=2
is 4 threads.
Regards,
Mahmood
On Mon, Jul 31, 2017 at 8:49 PM, r...@open-mpi.org wrote:
“4 threads”: in MPI, we refer to these as 4 ranks or 4 processes.
So what is your question? Are you getting errors with PBS?
-Tom
From: users [mailto:users-boun...@lists.open-mpi.org] On Behalf Of Mahmood
Naderan
Sent: Monday, July 31, 2017 9:27 AM
To: Open MPI Users
Subject: Re: [OMPI users
Hi
With nodes=2:ppn=2, Torque will provide two cores on each of two
nodes for your job.
Open MPI will honor this, and work only on those nodes and cores.
Torque will put the list of node names (repeated twice each, since
you asked for two ppn/cores) in a "node file" that can be accessed
in your job
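The node-file layout described above can be illustrated with a small sketch. The node names and the file name are hypothetical, but the counting is exactly how the slot list is derived: one line per slot, each node name repeated ppn times:

```shell
# Hypothetical $PBS_NODEFILE contents for nodes=2:ppn=2
# (node names are made up for illustration)
printf 'compute-0-0\ncompute-0-0\ncompute-0-1\ncompute-0-1\n' > nodefile

# Total slots (MPI ranks) = number of lines: 4
wc -l < nodefile

# Slots per node: each name appears ppn=2 times
sort nodefile | uniq -c
```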
Well, it is confusing! As you can see, I added four nodes to the host file
(the same nodes are used by PBS). The --map-by ppr:1:node option works well.
However, the PBS directive doesn't work.
mahmood@cluster:mpitest$ /share/apps/computer/openmpi-2.0.1/bin/mpirun
-hostfile hosts --map-by ppr:1:node a.out
Maybe something is wrong with the Torque installation?
Or perhaps with the Open MPI + Torque integration?
1) Make sure your Open MPI was configured and compiled with the
Torque "tm" library of your Torque installation.
In other words:
configure --with-tm=/path/to/your/Torque/tm_library ...
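A sketch of step 1: the configure line plus one way to verify that the tm components were actually built. The install prefix and Torque path below are placeholders for your site's paths:

```shell
# Paths below are placeholders; adjust to your site
./configure --prefix=/opt/openmpi-2.0.1 \
            --with-tm=/opt/torque
make && make install

# Verify: ompi_info should list the Torque/tm components
# (e.g. "MCA plm: tm" and "MCA ras: tm")
/opt/openmpi-2.0.1/bin/ompi_info | grep -i ' tm'
```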
2) C
Siegmar,
a noticeable difference is that hello_1 does *not* sleep, whereas
hello_2_slave *does*.
Simply comment out the sleep(...) line, and performance will be identical.
Cheers,
Gilles
On 7/31/2017 9:16 PM, Siegmar Gross wrote: