It appears you have found a bug in Libtool, the package we use for
hiding all the portability issues in building libraries. Thankfully,
Open MPI has been befriended by a Libtool developer and he provided a
workaround for your problem (Thanks Ralf!). If you configure with
the option LD="ld
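[The quoted workaround is cut off after LD="ld; the exact linker flags Ralf suggested are not preserved in this thread. The general shape of such a workaround is to hand configure an explicit linker command so Libtool uses it instead of its own guess. A hypothetical sketch only, with the missing flags left as a placeholder:

```shell
# Hypothetical illustration only: the actual LD value from the original
# workaround is not preserved in this thread. Passing LD= on the
# configure line overrides the linker Libtool would otherwise pick.
./configure LD="ld <flags-from-workaround>" --prefix=$HOME/openmpi
```
]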
Dear Brian and George,
We could not check your fix because the nightly tarball was not updated
accordingly ;-).
I have discovered today the 1.0.1 version on the open-mpi web page and
Francoise Roch tested it. The make goes a little further but still
fails. Please find the logs attached.
Thanks for the update. I've fixed the next bug in subversion on the
trunk and it should be in tonight's nightly tarball. I should also
make it into v1.0.1, once that is made available.
Brian
On Dec 7, 2005, at 11:41 AM, Pierre Valiron wrote:
Thanks Brian and George,
Francoise Roch tried the following version as you suggested
http://www.open-mpi.org/~brbarret/download/openmpi-1.1a1r8384.tar.gz
Things go a little further but the make still fails.
Please find the logs attached.
Pierre.
Brian Barrett wrote:
On Dec 5, 2005, at 4:05 PM, Pierre Valiron wrote:
I tried experimenting with Open MPI on our Solaris 10 v40z cluster, hoping
it might surpass LAM/MPI 7.1.2b28...
I used the following script to compile in 64 bit mode:
The configure runs fine and the make aborts very rapidly.
I attach the log for
George Bosilca wrote:
Pierre,
The problem seems to come from the fact that we do not detect how to
generate the assembly code for our atomic operations. As a result we fall
back on the gcc mode for 32-bit architectures.
Here is the corresponding output from the configure script:
checking if cc supports GCC inline as
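[The "checking if cc supports GCC inline as..." step is a compile-and-run probe: configure writes a small C file that uses GCC-style inline assembly for an atomic operation and sees whether the compiler accepts it. A rough sketch of such a probe, illustrative only and not Open MPI's actual test, assuming an x86-64/Opteron target and cc (or $CC) on the PATH:

```shell
# Illustrative sketch of an inline-asm probe, not Open MPI's real test.
# Write a tiny C program doing a 64-bit atomic add via GCC-style inline
# assembly (x86-64 "lock xadd"), then try to compile and run it.
cat > conftest.c <<'EOF'
#include <stdio.h>
#include <stdint.h>

static int64_t atomic_add_64(volatile int64_t *addr, int64_t delta)
{
    int64_t old = delta;
    __asm__ __volatile__("lock; xaddq %0, %1"
                         : "+r"(old), "+m"(*addr)
                         :
                         : "memory");
    return old + delta;              /* new value of *addr */
}

int main(void)
{
    volatile int64_t counter = 40;
    printf("%lld\n", (long long)atomic_add_64(&counter, 2));  /* prints 42 */
    return 0;
}
EOF
if ${CC:-cc} conftest.c -o conftest && test "$(./conftest)" = "42"; then
    echo "cc supports GCC inline assembly"
else
    echo "falling back to gcc mode for 32-bit architectures"
fi
rm -f conftest conftest.c
```

When a probe like this fails to compile, configure concludes the compiler cannot emit the assembly and selects the fallback path George describes.]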
Hi,
I tried experimenting with Open MPI on our Solaris 10 v40z cluster, hoping it
might surpass LAM/MPI 7.1.2b28...
I used the following script to compile in 64 bit mode:
#! /bin/tcsh -v
setenv CC "cc"
setenv CXX "CC"
setenv FC "f95"
setenv CFLAGS "-O -xtarget=optero
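[The script above is cut off mid-flag, so the exact 64-bit options are not preserved here. Independent of the precise flags, a quick way to confirm a toolchain setup really targets 64 bits is to compile a trivial probe and print the pointer width. A sketch, assuming the compiler accepts the GCC-compatible -m64 flag (recent Sun Studio releases do; older ones used -xarch=v9 instead):

```shell
# Illustrative 64-bit sanity check: compile a one-liner that prints the
# pointer width in bits; a 64-bit build prints 64.
cat > conftest.c <<'EOF'
#include <stdio.h>
int main(void) { printf("%d\n", (int)(8 * sizeof(void *))); return 0; }
EOF
${CC:-cc} -m64 conftest.c -o conftest && ./conftest   # expect 64
rm -f conftest conftest.c
```
]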