Joe,
I will send my 1.2.8 compile log later today.
Tony
Anthony C. Iannetti, P.E.
NASA Glenn Research Center
Aeropropulsion Division, Combustion Branch
21000 Brookpark Road, MS 5-10
Cleveland, OH 44135
phone: (216)433-5586
email: anthony.c.ianne...@nasa.gov
Thank you for your help.
I tried the command
mpirun -np 4 -host node1,node2 -mca btl tcp,self random
but still got the same result.
I'm pretty sure that the communication between the nodes is TCP, but I'm not
certain. I've emailed IT support to ask them, but have yet to hear back from them.
Other than t
I have checked with IT. It is TCP. I have been told that there's a firewall on
the nodes. Should I open some ports on the firewall, and if so, which ones?
Robertson
>>> Robertson Burgess 5/02/2009 5:09 pm >>>
Thank you for your help.
I tried the command
mpirun -np 4 -host node1,node2 -mca btl tc
Dear OpenMPI developer,
I have found a very strange behaviour of MPI_Test. I'm using OpenMPI
1.2 over an Infiniband interconnection network.
I've tried to implement a network check with a series of MPI_Irecv and
MPI_Send between processors, testing with MPI_Wait for the end of the Irecv.
For strange reasons, I've not
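(For reference, here is a minimal sketch of the kind of exchange-and-poll check described above. This is an illustration only, not Gabriele's actual code: the ring neighbour pattern, the variable names, and the MPI_Test polling loop are assumptions.)

/* Sketch of the described check: post a non-blocking receive from one
 * neighbour, send to the other, then poll MPI_Test until the receive
 * completes.  Ring pattern and variable names are assumptions. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, tag = 0, bufferLen = 1;
    int buffer_send, buffer_recv = -1, done = 0;
    MPI_Request request;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int send_to   = (rank + 1) % size;
    int recv_from = (rank - 1 + size) % size;
    buffer_send = rank;

    /* Post the receive first, then the blocking send (the "safe" order). */
    MPI_Irecv(&buffer_recv, bufferLen, MPI_INT, recv_from, tag,
              MPI_COMM_WORLD, &request);
    MPI_Send(&buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);

    /* Poll with MPI_Test until the Irecv completes (MPI_Wait also works). */
    while (!done)
        MPI_Test(&request, &done, MPI_STATUS_IGNORE);

    printf("rank %d received %d\n", rank, buffer_recv);
    MPI_Finalize();
    return 0;
}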
Hi Gabriele
Shouldn't you reverse the order of your send and recv from
MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag,
MPI_COMM_WORLD, &request);
MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, tag, MPI_COMM_WORLD);
to
MPI_Send(buffer_send, bufferLen, MPI_INT, send_to, ta
Hi Jody,
thanks for your quick reply. But what's the difference?
2009/2/5 jody :
> Hi Gabriele
>
> Shouldn't you reverse the order of your send and recv from
>MPI_Irecv(buffer_recv, bufferLen, MPI_INT, recv_to, tag,
> MPI_COMM_WORLD, &request);
>MPI_Send(buffer_send, bufferLen, MPI_INT, s
I have to admit that this wasn't a theoretically well-founded suggestion.
Perhaps it really doesn't (or shouldn't) matter...
I'll try both versions with OpenMPI 1.3 and tell you the results.
Jody
On Thu, Feb 5, 2009 at 11:48 AM, Gabriele Fatigati wrote:
> Hi Jody,
> thanks for your quick reply. But
Hi Gabriele
In OpenMPI 1.3 it doesn't matter:
[jody@aim-plankton ~]$ mpirun -np 4 mpi_test5
aim-plankton.uzh.ch: rank 0 : MPI_Test # 0 ok. [3...3]
aim-plankton.uzh.ch: rank 1 : MPI_Test # 0 ok. [0...0]
aim-plankton.uzh.ch: rank 2 : MPI_Test # 0 ok. [1...1]
aim-plankton.uzh.ch: rank 3 : MPI_Tes
One difference is that putting a blocking send before the irecv is a
classic "unsafe" MPI program. It depends on eager send buffering to
complete the MPI_Send so the MPI_Irecv can be posted. The example with
MPI_Send first would be allowed to hang.
The original program is correct and safe MPI.
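(To make the difference concrete, here is a small illustration; it is not code from the thread, and the two-rank setup and buffer size are assumptions chosen only to show the point.)

/* Why the Send-first ordering is "unsafe": if every rank is blocked in
 * MPI_Send and the message is larger than the eager limit, no matching
 * receive has been posted yet and the program may hang.  Two ranks and
 * the buffer size are assumptions made for this illustration. */
#include <mpi.h>

#define N (1 << 20)   /* large enough to exceed a typical eager limit */
static int sendbuf[N], recvbuf[N];

int main(int argc, char **argv)
{
    int rank;
    MPI_Request request;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    int peer = 1 - rank;   /* assumes exactly two ranks */

    /* Unsafe ordering (may deadlock once N exceeds the eager limit):
     *   MPI_Send (sendbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD);
     *   MPI_Irecv(recvbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD, &request);
     *   MPI_Wait (&request, MPI_STATUS_IGNORE);
     */

    /* Safe ordering: the receive is posted before the blocking send. */
    MPI_Irecv(recvbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD, &request);
    MPI_Send (sendbuf, N, MPI_INT, peer, 0, MPI_COMM_WORLD);
    MPI_Wait (&request, MPI_STATUS_IGNORE);

    MPI_Finalize();
    return 0;
}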
Hi Ralph
Thanks - I downloaded and installed openmpi-1.4a1r20435 and
now everything works as it should:
--output-filename : all processes write their outputs to the correct files
--xterm : all specified processes opened their xterms
I started my application with --xterm as I wrote in
I'm trying to run a job based on openmpi. For some reason, the program and the
global communicator are not in sync, and it reports that there is only one
processor, whereas there should be 2 or more. Any advice on where to look?
Here is my PBS script. Thanx!
PBS SCRIPT:
#!/bin/sh
### Set the