Interestingly, I'm also seeing problems with PGF90 not giving an
appropriate INTEGER*8 result from configure.
gerry
Jeff Squyres wrote:
The config.out and config.log you sent do not seem to match.
The configure output stops at the "checking size of Fortran 77
INTEGER*8..." test, while the config.log stops at the "checking if
Fortran 77 compiler works..." test. [...]
There is no 1.3.1 RPM yet (only a 1.3 RPM) -- what file specifically
are you trying to build?
Could you try building one of the 1.3.1 nightly snapshot tarballs? I
*think* the problem you're seeing is a problem due to FORTIFY_SOURCE
in the VT code in 1.3 and should be fixed by now.
ht [...]
Is it possible to attach to any of the MPI processes and see where it
is hung?
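(One way to do that, assuming gdb is installed on the nodes: find the
PID of a stuck rank with ps, attach, and dump the stacks. The names in
angle brackets are placeholders:

    ps aux | grep <your-mpi-binary>    # find the stuck rank's PID
    gdb -p <pid>                       # attach to the running process
    (gdb) thread apply all bt          # backtrace of every thread

If every rank's backtrace sits in an MPI wait or receive, that usually
points at a messaging mismatch rather than a crash.)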
On Feb 19, 2009, at 5:09 PM, Jeff Pummill wrote:
I built a fresh version of lammps v29Jan09 against Open MPI 1.3
which in turn was built with Gnu compilers v4.2.4 on an Ubuntu 8.04
x86_64 box. This Open MPI build was able to generate usable binaries
such as XHPL and NPB, but the lammps binary it generated was not
usable. [...]
On Feb 19, 2009, at 8:24 PM, -Gim wrote:
Query in MPI: what MPI_Gather does is take the data sent by
the i-th process and place it in the i-th location in the receive
buffer. Say I need to place the sent data in the (i*10)-th location in
the receive buffer. Is this possible at all, or [...]
The config.out and config.log you sent do not seem to match.
The configure output stops at the "checking size of Fortran 77
INTEGER*8..." test, while the config.log stops at the "checking if
Fortran 77 compiler works..." test.
Can you double check that you sent the right files?
Also, please [...]
I have a problem compiling MPI. I have attached the config output and
config.log here.
Cheerio,
Gim
[Attachment: ompi-output.tar.gz -- GNU Zip compressed data]
Query in MPI: what MPI_Gather does is take the data sent by the i-th
process and place it in the i-th location in the receive buffer. Say I
need to place the sent data in the (i*10)-th location in the receive
buffer. Is this possible at all, or do I have to use send and recv?
Cheerio,
Gim
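[A minimal sketch of the usual answer to this question: MPI_Gatherv
lets the root supply an explicit displacement for each rank, so rank
i's data can land at offset i*10. The stride and buffer sizes below
are assumptions taken from the question, and displacements are counted
in units of the receive datatype:

    #include <mpi.h>

    #define STRIDE 10   /* assumed stride from the question */
    #define MAXP   64   /* assumed upper bound on ranks, for brevity */

    int main(int argc, char **argv)
    {
        int rank, size, i;
        int sendval;
        int recvcounts[MAXP], displs[MAXP];
        int recvbuf[MAXP * STRIDE];   /* slots between strides stay untouched */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        sendval = rank;               /* each rank sends one int */
        for (i = 0; i < size; i++) {
            recvcounts[i] = 1;        /* one int expected from rank i */
            displs[i] = i * STRIDE;   /* place it at recvbuf[i*10] */
        }

        MPI_Gatherv(&sendval, 1, MPI_INT,
                    recvbuf, recvcounts, displs, MPI_INT,
                    0, MPI_COMM_WORLD);

        MPI_Finalize();
        return 0;
    }

So no hand-rolled send/recv loop is needed; MPI_Gatherv covers the
strided-placement case directly.]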
Gus,
I'll give that a try real quick (or as quickly as the compiles can run).
I'd not thought of this solution. I've been context-switching too much
lately. I've gotta look at this for a gigabit cluster as well.
Thanks!
Gus Correa wrote:
Hi Gerry
You may need to compile a hybrid OpenMPI
using gcc for C and PGI f90 for Fortran via the OpenMPI configure
script. [...]
Elvedin Trnjanin wrote:
That would be one way it dies, but we kept getting errors during
compilation without the compilation process exiting which is arguably
worse than the behavior you saw.
OpenMPI's mpicc doesn't support the -cc flag so it just passes it to
pgcc, which doesn't support it either. [...]
I built a fresh version of lammps v29Jan09 against Open MPI 1.3 which in
turn was built with Gnu compilers v4.2.4 on an Ubuntu 8.04 x86_64 box.
This Open MPI build was able to generate usable binaries such as XHPL
and NPB, but the lammps binary it generated was not usable.
I tried it with a co [...]
I'm afraid I don't speak French (this is an english list), so I can
only guess what you're asking and what Jody replied, but it *looks*
like you don't have Open MPI installed in /opt/openmpi-1.3, either on
your local node, or perhaps on a remote node (debian1?).
On Feb 19, 2009, at 12:05 PM [...]
Sorry - I hadn't finished my reply:
Have you verified that $PATH and $LD_LIBRARY_PATH contain the correct
values when you open an ssh connection without a login shell?
Try:
ssh debian1 printenv
you should see something like /opt/openmpi-1.3/bin in $PATH
and /opt/openmpi-1.3/lib in $LD_LIBRARY_PATH
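If one of them is wrong or missing in that output, a common fix
(assuming bash on Debian) is to export the paths near the top of
~/.bashrc on every node, so non-interactive ssh shells pick them up too:

    export PATH=/opt/openmpi-1.3/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi-1.3/lib:$LD_LIBRARY_PATH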
Have you verified that $PATH and $LD_LIBRARY_PATH contain the correct
values when you open an ssh connection without a login shell?
Try: [...]
2009/2/19 Abderezak MEKFOULDJI :
> Hello,
> my cluster consists (for now) of 2 amd64 machines running the
> Debian 2.6 system ("etch" version), the Intel Fortran compiler (ifort) [...]
Hi Gerry
You may need to compile a hybrid OpenMPI
using gcc for C and PGI f90 for Fortran via the OpenMPI configure script.
This should give you the required mpicc and mpif90 to do the job.
I guess this is what Elvedin meant in his message.
I have these hybrids for OpenMPI and MPICH2 here
(not Myrinet [...]
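For reference, a hybrid build along those lines would look something
like the following; the install prefix is just an example, and this
assumes pgf90 is in $PATH:

    ./configure CC=gcc CXX=g++ F77=pgf90 FC=pgf90 \
        --prefix=/opt/openmpi-1.3-hybrid
    make all install

The wrappers produced this way (mpicc, mpif90) then drive gcc and
pgf90 respectively.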
Hello,
my cluster consists (for now) of 2 amd64 machines running the Debian
2.6 system ("etch" version), the Intel Fortran compiler (ifort), and
the Open-MPI 1.3 tool.
The connection between the 2 hosts is established and secured via ssh.
Given that I have put the directory "openmpi- [...]
Hi all:
I'm trying to build openmpi RPMs from the included spec file. The
build fails with:
gcc -DHAVE_CONFIG_H -I. -I.. -I../tools/opari/lib
-I../extlib/otf/otflib -I../extlib/otf/otflib -D_GNU_SOURCE
-DBINDIR=\"/opt/openmpi-gcc/1.3/bin\"
-DDATADIR=\"/opt/openmpi-gcc/1.3/share\" -DRFG -DVT_ [...]
Jeff:
You're correct. That was the incorrect config file. I've attached the correct
one as per the recommendations in the help page.
Thanks for your help
--- On Thu, 2/19/09, Jeff Squyres wrote:
From: Jeff Squyres
Subject: Re: [OMPI users] ptrdiff_t undefined error on intel 64bit mac
That would be one way it dies, but we kept getting errors during
compilation without the compilation process exiting which is arguably
worse than the behavior you saw.
OpenMPI's mpicc doesn't support the -cc flag so it just passes it to
pgcc, which doesn't support it either. The easy way to fix [...]
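(For what it's worth, Open MPI's wrapper compilers can be pointed at a
different underlying compiler with the OMPI_CC / OMPI_F77 / OMPI_FC
environment variables, so one workaround is to drop the -cc=gcc flag
from the WRF build settings and instead do something like:

    export OMPI_CC=gcc
    mpicc -DFSEEKO64_OK -w -O3 ...   # rest of the WRF compile line

The trailing flags here are just the ones visible in this thread.)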
On Feb 19, 2009, at 11:12 AM, Gerry Creager wrote:
Elvedin,
Yeah, I thought about that after finding a reference to this in the
archives, so I redirected the path to MPI toward the gnu-compiled
version. It died in THIS manner:
make[3]: Entering directory `/home/gerry/WRFv3/WRFV3/external/RSL_LITE' [...]
Elvedin,
Yeah, I thought about that after finding a reference to this in the
archives, so I redirected the path to MPI toward the gnu-compiled
version. It died in THIS manner:
make[3]: Entering directory `/home/gerry/WRFv3/WRFV3/external/RSL_LITE'
mpicc -cc=gcc -DFSEEKO64_OK -w -O3 -DDM_PARA [...]
WRF almost requires that you use gcc for the C/C++ part and the PGI
Fortran compilers, if you choose that option. I'd suggest compiling
OpenMPI in the same way as that has resolved our various issues. Have
you tried that with the same result?
Gerry Creager wrote:
Howdy,
I'm new to this list.
Howdy,
I'm new to this list. I've done a little review but likely missed
something specific to what I'm asking. I'll keep looking but need to
resolve this soon.
I'm running a Rocks cluster (CentOS 5), with PGI 7.2-3 compilers,
Myricom MX2 hardware and drivers, and OpenMPI 1.3.
I installed [...]
What iWARP hardware are you using?
I only tested with Chelsio T3 iWARP hardware before v1.3 was launched;
I tested with Intel (NetEffect) 020's after v1.3 was launched and
found that their driver in OFED v1.4.0 does not handle RDMA CM REJECT
messages correctly. I have not yet tested with a [...]
Hi all,
I successfully installed OpenMPI-1.3. I am trying to run OpenMPI over iWARP.
But I am getting the error
RDMA_CM_EVENT_CONNECT_ERROR.
I tried to run with more debug messages:
mpirun --mca orte_base_help_aggregate 0 -np 2 -display-map -v -host
100.168.54.49,100.168.54.50
/usr/mpi/gcc/openmpi-1.3/ [...]
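(To get more detail out of the connection setup, it may help to
restrict the run to the openib BTL and raise its verbosity --
btl_base_verbose is a standard Open MPI MCA parameter, and the binary
name below is a placeholder:

    mpirun --mca btl openib,self --mca btl_base_verbose 30 \
        -np 2 -host 100.168.54.49,100.168.54.50 <your-binary>
)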
Your config.log looks incomplete -- it failed saying that your C and
C++ compilers were incompatible with each other.
This does not seem related to what you described -- are you sure
you're sending the right config.log?
Specifically, can you send all the information listed here:
http://www.open-mpi.org/community/help/
On Feb 14, 2009, at 2:42 AM, Francesco Pietra wrote:
I am trying to run a computational code (gamess us) on a parallel
UMA-type machine with all cpus on one node.
This code uses a supporting interface on the TCP/IP stack, and it is
advised that trying MPI support is a matter for great experts, which I [...]
Hmm. I'm unfortunately unable to replicate your results -- I get the
same valgrind output with your test program regardless of whether I
use gfortran or mpif90 (i.e., it shows the lost block). :-\
FWIW, I'm using RHEL4U4 (and U6) with gfortran 4.1.0 and Valgrind 3.4.0.
On Feb 14, 2009, at [...]
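For reference, the kind of check being compared here looks like this
(leak_test.f90 is a placeholder name):

    mpif90 leak_test.f90 -o leak_test   # or compile with gfortran instead
    valgrind --leak-check=full ./leak_test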
Can you also send config.log and all the info described here:
http://www.open-mpi.org/community/help/
On Feb 18, 2009, at 12:43 PM, -Gim wrote:
I have attached the ./configure output. The error is "configure:
error: Could not determine size of INTEGER*8".
Cheerio,
Viv