Thanks to both you and David Gunter. I disabled pty support and
it now works.
There is still the issue that mpirun defaults to "-byslot", which
causes all kinds of trouble. Only by using "-bynode" do things work
properly.
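For anyone else hitting this, the two mappings can be selected explicitly on the command line. This is only a sketch (the executable name and process count are placeholders, and the option spelling is the Open MPI 1.2-era one):

```shell
# Default in Open MPI 1.2: fill all slots on a node before moving on
mpirun -np 8 -byslot ./a.out

# Workaround described above: place one process per node round-robin
mpirun -np 8 -bynode ./a.out
```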
Daniel
On Thu, Apr 26, 2007 at 02:28:33PM -0600, gshipman wrote:
From Jiming's error messages, it seems that he is using 1.1 libraries
and header files, while supposedly compiling for ompi 1.2,
therefore causing undefined stuff. Am I wrong in this assessment?
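One quick way to check for that kind of mix-up (a sketch; these commands assume a standard Open MPI install on the PATH, not Jiming's actual layout):

```shell
# Report the version of the Open MPI installation actually in use
ompi_info | grep "Open MPI:"

# Show which -I paths (and thus which mpi.h) the wrapper compiler uses
mpicc -showme:compile
```

If the mpi.h directory reported here belongs to a 1.1 install while the link step picks up 1.2 libraries (or vice versa), undefined-symbol errors like these are the usual result.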
Daniel
On Fri, Apr 27, 2007 at 08:03:34AM -0400, Jeff Squyres wrote:
> This is quite odd; we have tested OMPI 1.1.x with the intel compilers
> quite a bit. In particular, it seems to be complaining about
> MPI_Fint and MPI_Comm, but these two types should have been
> typedef'ed earlier in mpi.h.
> Can you send along the information listed on the "Getting Help" page?
Hello everyone,
I've found a bug while trying to build openmpi 1.2.1 with progress
threads and gm btl support: gcc had no problem with the missing header,
but pgcc 7.0 complained. See the attached patch.
Regards, Götz Waschk
--
AL I:40: Do what thou wilt shall be the whole of the Law.
--- openmpi-1.
Hello everyone,
I'm testing my new cluster installation with the hpcc benchmark and
openmpi 1.2.1 on RHEL5 32 bit. I'm having some trouble using a
threaded BLAS implementation: I tried ATLAS 3.7.30 compiled with
pthread support, and it crashes as reported here:
http://sourceforge.net/tracker/in