Hi,
I am trying to understand a peculiar behavior where the communication
time in Open MPI changes depending on the number of processing elements
(cores) the process is bound to.
Is this expected?
Thank you,
saliya
--
Saliya Ekanayake
Ph.D. Candidate | Research Assistant
School of Informatics a
Can you please provide more details on your config, how the tests are
performed, and the results?
to be fair, you should only compare cases in which mpi tasks are bound
to the same sockets.
for example, if socket0 has core[0-7] and socket1 has core[8-15]
it is fair to compare {task0,task1} bound
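
For instance, a fair A/B comparison under that layout could pin both tasks to socket0 and vary only the width of the binding (a sketch; `./mybench` and the exact core numbering are assumptions, not from the original thread):

```shell
# Fair comparison: both runs keep task0 and task1 on socket0 (cores 0-7),
# varying only how many cores each task is allowed to use.

# Case A: each task bound to a single core on socket0
mpirun -np 2 --cpu-set 0-7 --bind-to core --report-bindings ./mybench

# Case B: each task allowed to float over all of socket0's cores
mpirun -np 2 --cpu-set 0-7 --bind-to socket --report-bindings ./mybench
```

`--report-bindings` prints the actual binding of each rank, which makes it easy to confirm both runs really stayed on the same socket.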
Wow, I haven’t encountered Forth in over 20 years! Though I confess I used to
program in it myself back in my control days.
IIRC, you would need to write a wrapper to let Forth access C-based functions,
yes? You could configure and build OMPI as a 32-bit library, and libmpi.so is
C, so that isn
From: "RYAN RAY"Sent: Wed, 22 Jun 2016
14:32:33To: "users"Subject: OpenSHMEM
Runtime ErrorI have installed openmpi-1.10.1 in a system and while executing
one of the example codes of OpenSHMEM I am getting an error. The snapshot of
the error is attach
Ryan --
Did you try the suggestions listed in the help message?
> On Jun 23, 2016, at 1:24 AM, RYAN RAY wrote:
>
>
>
> From: "RYAN RAY"
> Sent: Wed, 22 Jun 2016 14:32:33
> To: "users"
> Subject: OpenSHMEM Runtime Error
>
> I have installed openmpi-1.10.1 in a system and while executing one
Greetings Richard.
Yes, that certainly is unusual. :-)
Here's my advice:
- Configure Open MPI with the --disable-dlopen flag. This will slurp in all of
Open MPI's plugins into the main library, and make things considerably simpler
for you.
- Build Open MPI in a 32 bit mode -- e.g., supply C
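
Putting those two pieces of advice together, the configure step might look like the following (a sketch; the prefix path is a placeholder, and `-m32` assumes a toolchain with 32-bit support installed):

```shell
# Single-library (no dlopen'd plugins), 32-bit Open MPI build.
./configure --prefix=$HOME/ompi32 \
    --disable-dlopen \
    CFLAGS=-m32 CXXFLAGS=-m32 FCFLAGS=-m32
make -j4 && make install
```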
Thank you, Gilles, for the quick response. The code comes from a clustering
application, but let me try to explain simply what the pattern is. It's a
bit longer than I expected.
The program follows a BSP pattern with *compute()* followed by a
collective *allreduce()*, and it does many iterations.
Java uses *many* threads; simply run
ls /proc/<pid>/task
and you will be amazed at how many threads are used.
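
To turn that into a one-liner, count the entries under the process's task directory (here applied to the current shell; substitute the JVM's PID to inspect a Java process):

```shell
# On Linux, each thread of a process appears as a directory under
# /proc/<pid>/task.  Count the threads of the current shell; replace $$
# with a JVM's PID to see how many threads Java really spawns.
ls /proc/$$/task | wc -l
```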
Here is my guess,
from the point of view of a given MPI process:
in case 1, the main thread and all the other threads do time sharing, so
basically, when another thread is working, the mai
Thank you, this is really helpful. Yes, the other bookkeeping threads of
Java were what I was worried about too.
I think I can extract a part to make a C program to check.
I've got a quick question. Besides these time-sharing constraints, does the
number of cores have any significance to MPI's communication d
On Jun 23, 2016, at 8:20 AM, Saliya Ekanayake wrote:
>
> I've got a quick question. Besides these time-sharing constraints, does the
> number of cores have any significance to MPI's communication decisions?
Open MPI doesn't use the number of cores available to it in any calculations /
algorithm se
Ryan,
Four suggestions are provided in the help output. Please try these.
Josh
On Thu, Jun 23, 2016 at 1:25 AM, Jeff Squyres (jsquyres) wrote:
> Ryan --
>
> Did you try the suggestions listed in the help message?
>
>
> > On Jun 23, 2016, at 1:24 AM, RYAN RAY wrote:
> >
> >
> >
> > From: "RYAN
Hello Everyone!
I recently downloaded OpenFOAM and while attempting to use its parallel
features (which use mpi) I receive the following error:
su2@su2-HP:~/OpenFOAM/su2-3.0.1/run/tutorials/incompressible/simpleFoam/motorBike_baseCase$
mpirun -np 4 simpleFoam -parallel
[su2-HP:21015] [[INVALI
Looks like you are getting a mix of OMPI installations between the nodes - try
ensuring that the PATH and LD_LIBRARY_PATH are correct on all the nodes
> On Jun 23, 2016, at 11:48 AM, Blair Climenhaga
> wrote:
>
> Hello Everyone!
>
> I recently downloaded OpenFOAM and while attempting to use i
Hi Ralph,
Thank you for your reply. How would I check that the PATH and LD_LIBRARY_PATH
are correct on all nodes? I have a feeling that this is a likely problem though
as the computer I am using has had many iterations of MPI installed on it and
likely in different locations.
All the best,
Bl
One easy solution: configure OMPI with --enable-orterun-prefix-by-default and it
will ensure that all the launched daemons and procs have the right setting on
the backend nodes. Or you can ssh to each node and print the relevant envars
and see what they say.
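
The per-node check could be scripted roughly like this (a sketch; the hostnames are placeholders for your actual nodes, and passwordless ssh is assumed):

```shell
# Print the MPI-relevant environment on each backend node so mismatched
# installations stand out at a glance.
for host in node01 node02 node03; do
    echo "== $host =="
    ssh "$host" 'echo PATH=$PATH; echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH; which mpirun'
done
```

If `which mpirun` resolves to different prefixes on different nodes, that is the mix of installations Ralph described.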
> On Jun 23, 2016, at 12:19 PM, Blai
I realize that OpenMPI wasn't made to run in a Firewall environment. I'd
like to try to get it to run in said environment though. So what *exact*
ports do I need to open to be able to run in a firewall environment? And
how can I set MPI to run on said ports?
Any help would be really appreciated.
Th
Both the runtime and TCP BTL components accept port range definitions. All you
have to do is tell us what those are, and then set your firewall to leave those
ports open.
So the cmd line would look like: mpirun -mca oob_tcp_dynamic_ipv4_ports
12345-12350 -mca btl_tcp_port_min_v4 34561 -mca btl_
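
A complete invocation along those lines might look like the following (a sketch; the port numbers, range width, and `./myapp` are example values, not recommendations):

```shell
# Restrict both the runtime (oob) and the TCP BTL to known port ranges,
# then open exactly those ranges in the firewall on every node.
mpirun -np 4 \
    -mca oob_tcp_dynamic_ipv4_ports 12345-12350 \
    -mca btl_tcp_port_min_v4 34561 \
    -mca btl_tcp_port_range_v4 40 \
    ./myapp
```

With these settings the firewall needs to permit inbound TCP on 12345-12350 (runtime wire-up) and 34561-34600 (MPI point-to-point traffic) between all nodes in the job.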