Hello,
I am running Ray (distributed genomics software) with Open MPI on 2048
processes and everything runs fine. Ray has an any-to-any communication
pattern. To avoid using too much memory, I implemented a virtual message
router. Without the virtual message router, I get messages like these
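
(A minimal sketch of the idea, not Ray's actual router: assume the N ranks
are split into groups of about sqrt(N), and a message from rank s to rank d
is relayed through one intermediate rank, so each rank only exchanges
messages with its own group and with the ranks sharing its position in the
other groups -- roughly 2*sqrt(N) peers instead of N-1. The grouping and the
intermediate() helper below are made up for illustration.)

  #include <mpi.h>
  #include <stdio.h>
  #include <string.h>

  /* Two-hop routing: rank r belongs to group r/g with member index r%g.
   * A message from s to d is relayed through (d/g)*g + (s%g).          */
  static int intermediate(int source, int destination, int group_size)
  {
      return (destination / group_size) * group_size + (source % group_size);
  }

  int main(int argc, char **argv)
  {
      int rank, size;
      MPI_Init(&argc, &argv);
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      MPI_Comm_size(MPI_COMM_WORLD, &size);

      if (size > 1) {
          int group_size = 1;                      /* about sqrt(size) */
          while (group_size * group_size < size)
              group_size++;

          int source = 0, destination = size - 1;
          int via = intermediate(source, destination, group_size);
          char payload[64];

          if (rank == source) {
              strcpy(payload, "hello through the router");
              /* first hop: the intermediate, or the destination directly
               * when the route degenerates to a single hop              */
              int first_hop = (via == source) ? destination : via;
              MPI_Send(payload, sizeof payload, MPI_CHAR, first_hop, 0,
                       MPI_COMM_WORLD);
          } else if (rank == via && via != destination) {
              /* relay: receive from the source, forward inside the group */
              MPI_Recv(payload, sizeof payload, MPI_CHAR, source, 0,
                       MPI_COMM_WORLD, MPI_STATUS_IGNORE);
              MPI_Send(payload, sizeof payload, MPI_CHAR, destination, 0,
                       MPI_COMM_WORLD);
          } else if (rank == destination) {
              MPI_Status st;
              MPI_Recv(payload, sizeof payload, MPI_CHAR, MPI_ANY_SOURCE, 0,
                       MPI_COMM_WORLD, &st);
              printf("rank %d got \"%s\" (last hop: rank %d)\n",
                     rank, payload, st.MPI_SOURCE);
          }
      }

      MPI_Finalize();
      return 0;
  }
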
Ouch! Thanks - I'll fix that and check for any other missing entries (Jeff
is on a plane back from Europe today). Don't know when Jeff will want to
roll a replacement 1.6.3 release, but he can address that when he returns
to the airwaves.
On Thu, Sep 27, 2012 at 7:45 AM, Ake Sandgren wrote:
> On
On Wed, Sep 26, 2012 at 10:58 PM, Siegmar Gross <siegmar.gr...@informatik.hs-fulda.de> wrote:
> Hi,
>
> the command works without linpc4 or with -mca btl ^sctp.
>
Excellent!
>
> mpiexec -np 4 -host rs0,sunpc4,linpc4 environ_mpi | & more
>
> [sunpc4.informatik.hs-fulda.de][[6074,1],2][../
On Thu, 2012-09-27 at 16:31 +0200, Ake Sandgren wrote:
> Hi!
>
> Building 1.6.1 and 1.6.2 I seem to be missing the actual Fortran
> bindings for MPI_Op_commutative and a bunch of other functions.
>
> My configure is
> ./configure --enable-orterun-prefix-by-default --enable-cxx-exceptions
>
> When looking in libmpi_f77.so there is no mpi_op_commutative_ defined.
Hi!
Building 1.6.1 and 1.6.2 I seem to be missing the actual Fortran
bindings for MPI_Op_commutative and a bunch of other functions.
My configure is
./configure --enable-orterun-prefix-by-default --enable-cxx-exceptions
When looking in libmpi_f77.so there is no mpi_op_commutative_ defined.
mpi_i
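
(For context: MPI_Op_commutative simply reports whether a user-defined
reduction operation was created as commutative; what is missing here is its
Fortran wrapper, mpi_op_commutative_, in libmpi_f77.so. A minimal C sketch
of the routine, for illustration only -- the Fortran form would be
CALL MPI_OP_COMMUTATIVE(op, commute, ierror):)

  #include <mpi.h>
  #include <stdio.h>

  /* User-defined reduction: element-wise maximum of ints. */
  static void int_max(void *in, void *inout, int *len, MPI_Datatype *dt)
  {
      int *a = (int *)in, *b = (int *)inout;
      for (int i = 0; i < *len; i++)
          if (a[i] > b[i]) b[i] = a[i];
      (void)dt;
  }

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      MPI_Op op;
      MPI_Op_create(int_max, 1 /* commutative */, &op);

      int commute = 0;
      MPI_Op_commutative(op, &commute);   /* the routine in question */
      printf("op is %scommutative\n", commute ? "" : "not ");

      MPI_Op_free(&op);
      MPI_Finalize();
      return 0;
  }
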
Hi,
the command works without linpc4 or with -mca btl ^sctp.
mpiexec -np 4 -host rs0,sunpc4,linpc4 environ_mpi | & more
[sunpc4.informatik.hs-fulda.de][[6074,1],2][../
tyr hello_1 162 mpiexec -np 4 -host rs0,sunpc4 environ_mpi | & more
Now 3 slave tasks are sending their environment.
tyr he