Hi Ralph,
Thanks a lot for the fast response.
Could you give me more instructions on which command I should use
"--display-allocation" and "--display-map" with? mpirun? ./configure? ...
Also, we have tested that in our PBS script, if we put node=1, the helloworld
works. But when I put node=2 or more
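As far as I know, --display-allocation and --display-map are mpirun options
rather than configure options; a minimal sketch, assuming a 4-process run of
the same helloworld binary under Torque:

mpirun --display-allocation --display-map -np 4 ./helloworld

--display-allocation should print the node allocation Open MPI picked up from
Torque, and --display-map the resulting process-to-node map, before the
processes are launched.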
I'm afraid I have no idea - I've never seen a Torque version that old, so it
is quite possible that we don't work with it. It also looks like it may have
been modified (given the p2-aspen3 on the end), so I have no idea how the
system would behave.
First thing you could do is verify that
Hi All,
I am building open-mpi-1.3.2 on CentOS 3.4, with torque-1.1.0p2-aspen3 and
Myrinet. It compiled just fine with this configuration:
./configure --prefix=/home/software/ompi/1.3.2-pgi --with-gm=/usr/local/
--with-gm-libdir=/usr/local/lib64/ --enable-static --disable-shared
--with-tm=/us
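One quick way to double-check that the Torque (TM) support actually made it
into this build is to grep the ompi_info output for the tm components; a
minimal sketch, assuming the 1.3.2 install's bin directory is on the PATH:

ompi_info | grep tm

If the build picked up TM, the tm ras and plm components should show up in
that listing.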
Hi,
I'm having a problem with MPI_Gather in Open MPI 1.3.3. The code that fails
here works fine with MPICH 1.2.5, MPICH2 1.1, and HP-MPI 2.2.5 (I'm not
blaming anyone, I just want to understand!). My code runs locally on a
dual-processor, 32-bit Debian machine, with 2 processes, and fails during an
MPI_Gather call
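The failing code isn't shown here, so as a point of reference only, here is a
minimal self-contained MPI_Gather example along the same lines, assuming each
rank contributes N ints to rank 0:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

#define N 4  /* ints contributed by each rank (arbitrary for illustration) */

int main(int argc, char **argv)
{
    int rank, size, i;
    int sendbuf[N];
    int *recvbuf = NULL;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank fills its own contribution */
    for (i = 0; i < N; i++)
        sendbuf[i] = rank * N + i;

    /* only the root needs a receive buffer */
    if (rank == 0)
        recvbuf = malloc(size * N * sizeof(int));

    MPI_Gather(sendbuf, N, MPI_INT, recvbuf, N, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (i = 0; i < size * N; i++)
            printf("%d ", recvbuf[i]);
        printf("\n");
        free(recvbuf);
    }

    MPI_Finalize();
    return 0;
}

If something this small also fails with 2 processes on the same machine, that
would point at the installation rather than the application code.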
Hi,
It's the C bindings, and to clarify the picture a bit more: what it does is
partition the original matrix into a set of submatrices to be processed by
the other processes. And it seems that the only option left is to bundle
them off into a temp buffer before sending, as you have suggested. It
would
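A rough sketch of that bundling step in C, assuming a row-major n x n matrix
stored as one flat array and a sub_n x sub_n block starting at (row0, col0);
all of the names here are made up for illustration:

#include <stdlib.h>
#include <string.h>
#include <mpi.h>

/* Copy a sub_n x sub_n block of an n x n row-major matrix into a
   contiguous temporary buffer, then send it as one message. */
void send_block(const double *matrix, int n,
                int row0, int col0, int sub_n,
                int dest, int tag, MPI_Comm comm)
{
    double *tmp = malloc(sub_n * sub_n * sizeof(double));
    int r;

    for (r = 0; r < sub_n; r++)
        memcpy(tmp + r * sub_n,
               matrix + (row0 + r) * n + col0,
               sub_n * sizeof(double));

    MPI_Send(tmp, sub_n * sub_n, MPI_DOUBLE, dest, tag, comm);
    free(tmp);
}

The receiver can then treat the block as an ordinary contiguous sub_n * sub_n
array.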
Which language bindings?
For Fortran, consider pack or reshape. (I *think* whether array
sections are bundled off into temporary, contiguous storage is
implementation-dependent.)
Isn't it easier to broadcast the size first?
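Something like this, for example (a sketch only, assuming every block is the
same sub_n x sub_n size, that only rank 0 knows sub_n up front, and that
'blocks' holds the packed block for each destination on rank 0; the names are
made up for illustration):

#include <stdlib.h>
#include <mpi.h>

/* Rank 0 announces the block size with MPI_Bcast, then sends one packed
   sub_n x sub_n block to every other rank.  On non-root ranks, sub_n may
   be passed as 0 and blocks as NULL; the Bcast fills sub_n in. */
void distribute_blocks(double **blocks, int sub_n, MPI_Comm comm)
{
    int rank, size, p;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    MPI_Bcast(&sub_n, 1, MPI_INT, 0, comm);   /* everyone learns the size */

    if (rank == 0) {
        for (p = 1; p < size; p++)
            MPI_Send(blocks[p], sub_n * sub_n, MPI_DOUBLE, p, 0, comm);
    } else {
        double *mine = malloc(sub_n * sub_n * sizeof(double));
        MPI_Recv(mine, sub_n * sub_n, MPI_DOUBLE, 0, 0, comm,
                 MPI_STATUS_IGNORE);
        /* ... use 'mine' ... */
        free(mine);
    }
}

If the blocks have different sizes per destination, the same idea works with a
small size message (an MPI_Send of one int) ahead of each block instead of the
broadcast.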
On Tue, 2009-07-21 at 11:53 +0530, Prasadcse Perera wrote:
Hi all,
I'm writing an application which requires sending variable-size submatrices
to a set of processes from a lead process that holds the original matrix.
The matrices are square, and the receiving process doesn't know the size of
the matrix it is about to receive. In MPI_Bcast, I have