I notice the following:
- you're creating an *enormous* array on the stack. You might be
better off allocating it on the heap (both points are sketched in
the code after this list).
- the value of "exchanged" will quickly grow beyond 2^31 - 1 (i.e.,
INT_MAX), which is the largest count the MPI API can handle. Bad
Things can/will happen beyond that value.
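
A minimal sketch of both fixes, assuming a C program exchanging an
array of doubles with MPI_Send/MPI_Recv; the names N, buf, and the
chunk size are illustrative, and a 64-bit system with enough RAM is
assumed:

#include <stdlib.h>
#include <mpi.h>

/* Illustrative element count larger than INT_MAX. */
#define N (1LL << 32)

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Heap allocation: an array this size would blow the default
       stack limit if declared as a local variable. */
    double *buf = malloc((size_t)N * sizeof(double));
    if (buf == NULL)
        MPI_Abort(MPI_COMM_WORLD, 1);

    /* MPI counts are plain ints, so move the data in chunks whose
       element count stays below INT_MAX. */
    const long long chunk = 1LL << 28;   /* 2^28 doubles = 2 GiB */
    for (long long off = 0; off < N; off += chunk) {
        int n = (int)(N - off < chunk ? N - off : chunk);
        if (rank == 0)
            MPI_Send(buf + off, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(buf + off, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}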
[...]thread reporting
matrix size 33554432 kB, time is in [us]
(and then it just hangs)
Vittorio
On Fri, Feb 27, 2009 at 6:00 PM, wrote:
>
> Date: Fri, 27 Feb 2009 08:22:17 -0700
> From: Ralph Castain
> Subject: Re: [OMPI users] TCP instead of openIB doesn't work
> To: Open MPI Users
I'm not entirely sure what is causing the problem here, but one thing
does stand out. You have specified two -host options for the same
application - this is not our normal syntax. The usual way of
specifying this would be:
mpirun --mca btl tcp,self -np 2 -host randori,tatami hostname
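
For completeness, the hosts can also be listed in a hostfile; a
sketch with an illustrative filename:

randori ~ # cat myhosts
randori
tatami
randori ~ # mpirun --mca btl tcp,self -np 2 --hostfile myhosts hostname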
Hello, I'm posting here another problem with my installation.
I wanted to benchmark the differences between the tcp and openib
transports. If I run a simple non-MPI application I get
randori ~ # mpirun --mca btl tcp,self -np 2 -host randori -host tatami hostname
randori
tatami
but as soon as I switch to