Here is some more info. The build works if I do either of:
(1) Build with PGI v7.1-3 instead of PGI v7.0-3
(2) Drop the "-g" option in CXXFLAGS, i.e.,
change:
CXXFLAGS="-Msignextend -g -O2"
to just:
CXXFLAGS="-Msignextend -O2"
I'd still like to know if there is a better fix (I need
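For reference, workaround (2) amounts to configuring with "-g" removed from CXXFLAGS; a minimal sketch of such a configure invocation (the install prefix and compiler names are typical PGI choices, not taken from the original post):

```shell
# Hypothetical Open MPI 1.2.4 configure line using the PGI compilers,
# with "-g" dropped from CXXFLAGS per the workaround above.
cd openmpi-1.2.4
./configure CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90 \
    CXXFLAGS="-Msignextend -O2" \
    --prefix=/opt/openmpi-1.2.4-pgi
make all install
```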
Hello,
I'm trying to build Open MPI v1.2.4 with the PGI v7.0-6 compilers on an
Opteron cluster. It fails during configure while checking the size of a
boolean datatype. I've included details below.
Anyone know how to resolve this problem?
Thanks,
-Adam Moody
MPI Support
Lawrence Livermore
Hi Kay,
Sorry for the delay in replying; it looks like this one slipped through.
The dynamic process management should work fine on GM.
Hope this helps,
Tim
kay kay wrote:
I am looking for dynamic process management support (e.g. MPI_Comm_spawn)
on the Myrinet platform. From the Myricom website, i
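To test dynamic process management on a GM build, a minimal self-spawning MPI_Comm_spawn program can help (this is a generic sketch, not from the thread; it spawns two copies of itself):

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Comm parent, intercomm;
    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);
    if (parent == MPI_COMM_NULL) {
        /* No parent: we are the original process, so spawn 2 children
         * running this same executable. */
        int errcodes[2];
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 2, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &intercomm, errcodes);
        printf("parent: spawned 2 children\n");
    } else {
        /* We were launched via MPI_Comm_spawn. */
        printf("child: spawned by parent\n");
    }
    MPI_Finalize();
    return 0;
}
```

Compile with mpicc and launch a single copy with mpirun; if spawning works on your GM setup, the two child processes should start and print.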
Usually, this means a mismatch between the Open MPI installation on
the head node and the one on the compute nodes. A quick ompi_info (on
the front end as well as the back end) will show you the version
number of the installed release.
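A quick way to do that comparison (the compute-node hostname here is a placeholder):

```shell
# Compare the installed Open MPI version on the head node and on a
# compute node; "node01" is a placeholder hostname.
ompi_info | grep "Open MPI:"
ssh node01 'ompi_info | grep "Open MPI:"'
```

If the two version lines differ, the installations are mismatched.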
Thanks,
george.
On Jan 31, 2008, at 11:09 AM, Br
Hello,
I am trying to set up clustalw-mpi on a local cluster, but I am having
several problems that a search through the FAQ and mailing list was not
able to solve.
I have installed Open MPI on the front end of the cluster; my question is
whether I also need to install it on all the nodes I will use.
Curr
On Mon, Jan 28, 2008 at 03:26:14PM -0800, R C wrote:
> Hi,
> I compiled a molecular dynamics program DLPOLY3.09 on an AMD64 cluster
> running
> openmpi 1.2.4 with Portland Group compilers. The program seems to run alright,
> however, each processor outputs:
>
> ADIOI_GEN_DELETE (line 22): **io N
On Tue, Jan 22, 2008 at 11:25:25AM -0500, Brock Palen wrote:
> Has anyone had trouble using flash with openmpi? We get segfaults
> when flash tries to write checkpoints.
Segfaults are good if you also get core files. Do the backtraces
from those core files look at all interesting?
==rob
--
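Getting a backtrace from a core file as suggested above can be sketched like this (the binary name "flash4" and process count are examples, not from the thread):

```shell
# Allow core dumps, re-run the job, then inspect a core file with gdb.
ulimit -c unlimited
mpirun -np 4 ./flash4          # re-run until the segfault produces a core
gdb ./flash4 core              # then type "bt" at the gdb prompt
```

The "bt" command prints the call stack at the time of the crash, which usually shows whether the failure is inside the application or the MPI-IO layer.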
On Fri, Jan 18, 2008 at 07:44:12PM -0500, Jeff Squyres wrote:
> FWIW, you might want to ask the ROMIO maintainers if this is a known
> problem. I unfortunately have no idea. :-\
Sorry, we're not much more help either... I know hdf5+pvfs+openMPI works.
What if you run the test programs in the
Thank you for the response and help.
I have now compiled Open MPI and WRF again, and magically there were no
errors. I think that previously I forgot to load the ifort+icc environment
variables, so this could be the source of the problems.
I hope that now everything will go without any problems.
Thank you again.
On Wed, 2008-01-30 at 10:01 -0600, Backlund, Daniel wrote:
> Jeff, thank you for your suggestion, I am sure that the correct mpif.h is
> being included. One
> thing that I did not do in my original message was submit the job to SGE. I
> did that and the
> program still failed with the same seg