On Nov 21, 2005, at 4:44 PM, Enrique Curchitser wrote:
OK, I'll take you up on the offer. I have 4 Power Mac G5's on a
private network connected through a GigE switch. Even for large problems
the communications are sluggish. This same code has been shown to scale
to upwards of 128 processors on
This is not a bug; I just wonder if this can be improved. I have been
running an Open MPI-linked program with the command
/bin/mpirun --prefix \
--host A -np N a.out
My understanding is that --prefix allows an extra search path in addition to
PATH and LD_LIBRARY_PATH, correct?
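For reference, a minimal sketch of that kind of invocation, assuming Open MPI is
installed under /opt/openmpi-1.0 on every node (the install path, host names, and
process count here are hypothetical):

  /opt/openmpi-1.0/bin/mpirun --prefix /opt/openmpi-1.0 \
      --host node1,node2,node3,node4 -np 4 ./a.out

With --prefix, mpirun uses that directory to set PATH and LD_LIBRARY_PATH when
launching on the remote nodes, so the remote shells do not need those variables
set themselves.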
Yes. I actually announced last night on the devel list that 1.0.1 will
be forthcoming shortly. During the release process, I goofed and
accidentally left out a shared memory bug fix (it's on the trunk; it
didn't make it to the v1.0 branch before release). The bug only shows
up on specific platforms.
Thanks for fixing this. Will there be a patched release of openmpi
that will contain this fix anytime soon? (In the meantime, I could do an
anonymous read on the svn repository.)
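A sketch of such an anonymous checkout, assuming the public repository URL and
branch layout (both are assumptions, not confirmed in this thread):

  svn checkout http://svn.open-mpi.org/svn/ompi/branches/v1.0 ompi-v1.0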
A.Chan
On Tue, 22 Nov 2005, Jeff Squyres wrote:
> Whoops! We forgot to instantiate these -- thanks for catching that.
Whoops! We forgot to instantiate these -- thanks for catching that.
I have just committed fixes to both the trunk and the v1.0 branch.
This also prompted the addition of the following text in the README
file:
- Open MPI will build bindings suitable for all common forms of Fortran 77
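For context, these constants typically appear in C code that bridges Fortran and
C status handling. A minimal sketch, assuming the standard MPI_Status_f2c
conversion (the helper name is hypothetical):

#include "mpi.h"

/* Convert a Fortran status into a C status, honoring the "ignore" sentinel. */
void convert_status(MPI_Fint *f_status, MPI_Status *c_status)
{
    if (f_status == MPI_F_STATUS_IGNORE) {
        /* The caller did not ask for a status; nothing to convert. */
        return;
    }
    MPI_Status_f2c(f_status, c_status);
}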
Hi
Linking the following program with mpicc from openmpi-1.0 (built with gcc-4.0)
on an IA32 Linux box:
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    MPI_Fint *f_status;

    MPI_Init(&argc, &argv);
    /* Not instantiated in openmpi-1.0's library -- the bug discussed above. */
    f_status = MPI_F_STATUS_IGNORE;
    printf("f_status = %p\n", (void *) f_status);
    MPI_Finalize();
    return 0;
}
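To reproduce, the program above would be built with something like the following
(the file name is hypothetical); with openmpi-1.0 the link step presumably fails
because the constant is not instantiated in the library:

  mpicc -o f_status_test f_status_test.c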