On 31 August 2010 18:39, Patrik Jonsson wrote:
> Hi all,
>
> I have a C MPI code that I need to link into my C++ code. As usual,
> from my C++ code, I do
>
> extern "C" {
> #include "c-code.h"
> }
>
#include
extern "C" {
#include "c-code.h"
}
Would that be enough?
> where c-code.h includes,
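Spelled out as a compilable sketch (assuming the truncated include above is mpi.h, and that c-code.h itself includes mpi.h; Open MPI's mpi.h has to be seen outside the extern "C" block because, compiled as C++, it can pull in the C++ bindings):

// main.cpp -- sketch; c-code.h is the user's C header
#include <mpi.h>       // first, and outside extern "C"
extern "C" {
#include "c-code.h"    // its own #include <mpi.h> is a no-op via the include guard
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    /* ... call the C functions declared in c-code.h ... */
    MPI_Finalize();
    return 0;
}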
It's not a bug - that is normal behavior. The processes are polling hard to
establish the connections as quickly as possible.
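If the spinning itself is a concern, one knob worth trying (whether it helps in this setup is an assumption, but the MCA parameter does exist) is letting idle processes yield the processor:

mpirun -mca mpi_yield_when_idle 1 -np 2 ./server

The trade-off is slower progress, since the hard polling is exactly what makes the connection complete quickly.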
On Sep 1, 2010, at 7:24 PM, lyb wrote:
> Hi, All,
>
> I tested two sample applications on Windows 2003 Server, one using
> MPI_Comm_accept and the other using MPI_Comm_connect,
Hi, All,
I tested two sample applications on Windows 2003 Server, one using
MPI_Comm_accept and the other using MPI_Comm_connect.
When they run into MPI_Comm_accept or MPI_Comm_connect, the application
uses 100% of one CPU core. Is this a bug, or is something wrong?
I tested with three versions, including Version 1.4 (st…
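For reference, a minimal sketch of the pattern being tested (hypothetical file names; the port string is handed from server to client out of band):

// server.cpp
#include <mpi.h>
#include <cstdio>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    char port[MPI_MAX_PORT_NAME];
    MPI_Comm client;
    MPI_Open_port(MPI_INFO_NULL, port);
    std::printf("port: %s\n", port);  // give this string to the client
    MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);  // polls here
    MPI_Comm_disconnect(&client);
    MPI_Close_port(port);
    MPI_Finalize();
}

// client.cpp -- argv[1] is the port string printed by the server
#include <mpi.h>
int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    MPI_Comm server;
    MPI_Comm_connect(argv[1], MPI_INFO_NULL, 0, MPI_COMM_SELF, &server);  // also polls
    MPI_Comm_disconnect(&server);
    MPI_Finalize();
}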
Hi,
I am getting interested in this thread.
I'm looking for a solution where I can redirect a task/message (MPI_Send)
addressed to a particular process (say rank 1), and currently waiting in the
queue at rank 1, to another process (say rank 2) if the queue at rank 1 is
too long. How can I do it?
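MPI itself won't re-route a message once it has been addressed, so the redirection has to be coded at rank 1. A sketch of that idea, with invented names (TAG_TASK, QUEUE_LIMIT, and the negative shutdown sentinel are all assumptions for illustration):

#include <mpi.h>
#include <cstddef>
#include <deque>

const int TAG_TASK = 1;              // assumed tag
const std::size_t QUEUE_LIMIT = 16;  // assumed threshold

// Receive loop at rank 1: accept a task, but bounce it to rank 2
// whenever the local queue is already too long.
void serve(std::deque<int> &queue) {
    for (;;) {
        int task;
        MPI_Status st;
        MPI_Recv(&task, 1, MPI_INT, MPI_ANY_SOURCE, TAG_TASK, MPI_COMM_WORLD, &st);
        if (task < 0) break;  // assumed shutdown sentinel
        if (queue.size() >= QUEUE_LIMIT)
            MPI_Send(&task, 1, MPI_INT, 2, TAG_TASK, MPI_COMM_WORLD);  // forward to rank 2
        else
            queue.push_back(task);  // keep it locally
    }
}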
First of all, I…
padb (it's a Perl script) needs to exist as a binary on all nodes, as it
calls orterun on itself; try installing it to a shared directory or copying
padb to /tmp on every node.
To access the message queues, padb needs a compiled helper program, which is
installed in $PREFIX/lib, so I would recommend…
We have ddt, but we do not have licenses to attach to the number of cores
these jobs run on.
I tried padb, but it fails.
Example:
ssh to root node for running MPI job:
/tmp/padb -Q -a
[nyx0862.engin.umich.edu:25054] [[22211,0],0]-[[25542,0],0] oob-tcp:
Communication retries exceeded. Can n…
On 1 Sep 2010, at 21:13, Brock Palen wrote:
> I have a code for a user (namd, if anyone cares) that in one specific case
> will lock up; a quick ltrace shows the processes doing Iprobes over and
> over, so this makes me think that a process someplace is blocking on
> communication.
>
> What is…
On Wed, Aug 25, 2010 at 12:14 PM, Jeff Squyres wrote:
> It would simplify testing if you could get all the eth0's to be of one type
> and on the same subnet, and the same for eth1.
>
> Once you do that, try using just one of the networks by telling OMPI to use
> only one of the devices, something…
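Presumably the cut-off suggestion is the usual interface selection; a sketch, assuming the standard TCP BTL parameter and a hypothetical 16-rank job:

mpirun --mca btl_tcp_if_include eth0 -np 16 ./a.out

(btl_tcp_if_exclude does the reverse, naming devices to skip.)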
I have a code for a user (namd, if anyone cares) that in one specific case
will lock up; a quick ltrace shows the processes doing Iprobes over and over,
so this makes me think that a process someplace is blocking on communication.
What is the best way to look at message queues? To see what proc…
MPI send and recv are blocking, while you can exit a bcast even if the other
processes haven't received the broadcast yet. A general rule of thumb:
collective MPI calls are optimized and almost always perform better than if
you were to manage the communication yourself.
On 9/1/10, ananda.mu...@wipro.com wrote:
> Hi
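To make that rule of thumb concrete, a sketch of the hand-rolled replacement (tag 0 and MPI_INT are assumptions): the root does size-1 point-to-point sends, whereas MPI_Bcast typically runs a tree or pipeline internally:

#include <mpi.h>

// Naive "broadcast": root loops over size-1 blocking sends.
void naive_bcast(int *buf, int count, int root, MPI_Comm comm) {
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);
    if (rank == root) {
        for (int r = 0; r < size; ++r)
            if (r != root)
                MPI_Send(buf, count, MPI_INT, r, 0, comm);
    } else {
        MPI_Recv(buf, count, MPI_INT, root, 0, comm, MPI_STATUS_IGNORE);
    }
}

// The library version is one call, and usually O(log P) instead of O(P):
//   MPI_Bcast(buf, count, MPI_INT, root, comm);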
Hi Mohamed, Reuti, list
That issue with the pthread flag and PGI has been there for a while.
Actually, if I remember right, it is a glitch in libtool
(probably version dependent), not in OpenMPI.
The simplest workaround, pointed out by Orion Poplawski
here some time ago, is to configure OpenMPI…
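For what it's worth, another workaround sometimes suggested (an assumption here, not necessarily the fix the poster means) is PGI's -noswitcherror, which makes pgf90 warn on unknown switches such as -pthread instead of erroring out:

./configure CC=gcc FC=pgf90 F77=pgf90 FCFLAGS=-noswitcherror FFLAGS=-noswitcherror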
Hi
If I replace MPI_Bcast() with paired MPI_Send() and MPI_Recv() calls,
what kind of impact does it have on the performance of the program? Are
there any benchmarks of MPI_Bcast() vs. paired MPI_Send() and
MPI_Recv()?
Thanks
Ananda
Hi all,
In looking through the documentation and searching, I didn't come across this
anywhere. If it is common knowledge, then skip this email :-)
If you set OPAL_PREFIX to something other than your build's install prefix,
then it also needs to be passed through mpirun's -x flag:
mpirun -x OPAL_PREFIX
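A fuller sketch, with a hypothetical relocated install under /opt/ompi:

export OPAL_PREFIX=/opt/ompi
export PATH=$OPAL_PREFIX/bin:$PATH
mpirun -x OPAL_PREFIX -np 4 ./a.out

The -x re-exports the variable to the remote processes, which would otherwise fall back to the prefix baked in at build time.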
Hi,
On 01.09.2010 at 00:40, mohamed makhyoun wrote:
> Dear open-mpi users:
>
> I have got the following error while compiling openmpi using pgf90 ver 9
> and CC=gcc.
>
> How can I run make and avoid the -pthread flag?
>
> pgf90-Error-Unknown switch: -pthread
> make[4]: *** [libmpi_f90…
Hi Gilbert,
Checksums are turned off by default. If you need checksums activated,
add "-mca pml csum" to the mpirun command line.
Checksums are enabled only for inter-node communication. Intra-node
communication typically goes over shared memory, and hence checksums are
disabled in that case.
If y…
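For example, with a hypothetical 8-rank job:

mpirun -mca pml csum -np 8 ./a.out

This selects the checksumming PML in place of the default (ob1).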