Looking deeper, I believe we may have a race condition in the code. Sadly, that
error message is actually irrelevant, but it still causes the code to abort.
It can be triggered by race conditions in the app as well, but ultimately it is
something we need to clean up.
On Jun 27, 2011, at 9:29 AM, Rodrigo Oliveira wrote:
On Jun 28, 2011, at 3:52 PM, ya...@adina.com wrote:
> Thanks, Ralph!
>
> a) Yes, I know I could use only IB by "--mca btl openib", but just
> want to make sure I am using IB interfaces. I am seeking an option
> to mpirun to print out the actual interconnect protocol, like --prot to
> mpirun in MPICH2.
Thanks, Ralph!
a) Yes, I know I could restrict the run to IB with "--mca btl openib", but I
just want to make sure I am actually using the IB interfaces. I am looking for
an mpirun option that prints out the interconnect protocol in use, like --prot
for mpirun in MPICH2 (see the sketch below).
b) Yes, my default shell is bash, but I run a c-shell s
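A minimal sketch of one way to check this with Open MPI: restrict the run to
the openib and self BTLs and raise the BTL verbosity so the framework reports
which components it actually opens and uses (the verbosity level and the
application name below are only illustrative):

    mpirun --mca btl openib,self --mca btl_base_verbose 30 -np 4 ./my_app

With tcp left out of the btl list, the job errors out rather than silently
falling back to TCP if the openib BTL cannot be used.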
How are you passing the port info between the server and client? You're hitting
a race condition between the two sides.
On Jun 27, 2011, at 9:29 AM, Rodrigo Oliveira wrote:
> Hi there.
> I am developing a server/client application using Open MPI 1.5.3. At one point
> in the server code I open a p
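For reference, a minimal sketch of the MPI-2 port-based connect/accept pattern
this kind of race usually involves. It is not Rodrigo's actual code; the
"server" argument and the port.txt file used to pass the port name out of band
are purely illustrative, and it omits error handling and any runtime setup
Open MPI may need to connect separately launched jobs:

    /* Server opens a port and accepts; the client must learn the port name
     * through some out-of-band channel (a file here, only for illustration).
     * If the client picks up the port name before the server has advertised
     * it and reached MPI_Comm_accept(), the two sides can get out of step. */
    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        char port[MPI_MAX_PORT_NAME];
        MPI_Comm inter;

        MPI_Init(&argc, &argv);

        if (argc > 1 && strcmp(argv[1], "server") == 0) {
            MPI_Open_port(MPI_INFO_NULL, port);   /* system-chosen port name */
            FILE *f = fopen("port.txt", "w");     /* advertise it out of band */
            fprintf(f, "%s\n", port);
            fclose(f);
            MPI_Comm_accept(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
            MPI_Comm_disconnect(&inter);
            MPI_Close_port(port);
        } else {
            FILE *f = NULL;
            while (f == NULL)                     /* crude wait for the advertisement; */
                f = fopen("port.txt", "r");       /* this hand-off is where the race appears */
            fgets(port, MPI_MAX_PORT_NAME, f);
            fclose(f);
            port[strcspn(port, "\n")] = '\0';     /* strip the trailing newline */
            MPI_Comm_connect(port, MPI_INFO_NULL, 0, MPI_COMM_SELF, &inter);
            MPI_Comm_disconnect(&inter);
        }

        MPI_Finalize();
        return 0;
    }

In practice, how the port info is passed determines where things can get out
of step: the client must not pick up a stale or half-written port name, and
the ordering of the connect and accept calls between the two sides is exactly
where this kind of race tends to show up.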
On Jun 28, 2011, at 9:05 AM, ya...@adina.com wrote:
> Hello All,
>
> I installed Open MPI 1.4.3 on our new HPC blades, with Infiniband
> interconnection.
>
> My system environment is as follows:
>
> 1) uname -a output:
> Linux gulftown 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT
> 2010 x86_64 x86_64 x86_64 GNU/Linux
Hello all.
I have a heterogeneous network of InfiniBand-equipped hosts which are all
connected to the same backbone switch, an older SDR 10 Gb/s unit.
One set of nodes uses the Mellanox "ib_mthca" driver, while the other uses the
"mlx4" driver.
This is on Linux 2.6.32, with Open MPI 1.5.3.
Hello All,
I installed Open MPI 1.4.3 on our new HPC blades, with Infiniband
interconnection.
My system environment is as follows:
1) uname -a output:
Linux gulftown 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT
2010 x86_64 x86_64 x86_64 GNU/Linux
2) /home is mounted over all nodes, and mpirun is