Hi Fabian,
On a separate topic, but related to your post here, how did you do the
timing? [Especially to so many digits of accuracy. :-) ]
I will have to time my program and I don't think /usr/bin/time would do
it. Are the numbers it reports accurate [for an MPI program]? I think
the "us
Hello,
I would like to use mpi with cmake. I have created a file called
FindMPI.cmake which I enclose here.
When I type:
cmake ../hell
I obtain:
-- Found Boost: /usr/include
-- Found Xerces: /usr/lib/libxerces-c.so
-- Found MPI: /usr/lib/libmpi.so
-- Found LibXml++: /usr/lib/libxml++-2.6.so
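For anyone hitting the same problem later: if a hand-written FindMPI.cmake is used, CMake has to be told where to find it. A minimal sketch, assuming the module sits in a "cmake/" subdirectory of a source tree called "hello" (both names are assumptions, not taken from the original post):
# CMAKE_MODULE_PATH is searched before CMake's bundled modules,
# so the custom FindMPI.cmake takes precedence over any shipped one.
cmake -DCMAKE_MODULE_PATH=$PWD/../hello/cmake ../hello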
Hello,
Finally I have solved the problem.
Thanks a lot.
Sofia
----- Original Message -----
From: "Sofia Aparicio Secanellas"
To: "Open MPI Users"
Sent: Tuesday, November 11, 2008 9:51 AM
Subject: [OMPI users] Problems using mpi with cmake
Hello,
I would like to use mpi with cmake. I ha
Hi Ray,
> On a separate topic, but related to your post here, how did you do
> the timing? [Especially to so many digits of accuracy. :-) ]
two things to consider:
i) What do I actually (want to) measure?
ii) How accurately can I do that?
i)
Option iA) execution time of the whole program
One c
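For option iA the simplest thing is to wrap the whole launch; a sketch, with the process count and executable name as placeholders:
# Times the complete mpirun invocation (launch + compute + shutdown)
# from the node where mpirun runs; -p prints POSIX real/user/sys.
# The "real" line is usually the one that matters for an MPI job,
# since user/sys do not include CPU time spent on remote nodes.
/usr/bin/time -p mpirun -np 4 ./a.out
For anything finer-grained (a single phase inside the program), MPI_Wtime() around the region of interest is the usual answer, and MPI_Wtick() reports the timer resolution, which also addresses the "how many digits are real" question.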
On Nov 10, 2008, at 8:21 PM, Oleg V. Zhylin wrote:
Are you saying that you have libmpi_f90.so available and
when you try to run, you get missing symbol errors? Or are
you failing to compile/link at all?
Linking stage fails. When I use mpif90 to produce actual executable
ld reports error th
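One way to see exactly what is being handed to the linker, and whether the Fortran 90 library the wrapper expects is actually present, is something like the following sketch (the library path is the one quoted later in this thread; adjust it to the real install):
# Print the full command line mpif90 would execute, without running it
mpif90 --showme
# Check that the f90 bindings library exists where the wrapper looks for it
ls -l /usr/lib64/openmpi/1.2.4-gcc/libmpi_f90*
# Note: with --static on the link line, a static libmpi_f90.a is needed,
# not just the .so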
Hmm. I'm unable to replicate this error. :-(
Is there any chance that you have some stale OMPI libraries (or OMPI
libraries from any other OMPI version) that are accidentally being
found by ompi_info?
On Nov 10, 2008, at 10:18 PM, Robert Kubrick wrote:
I rebuilt without the memory manag
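A quick sanity check for stale or mixed installations, as a sketch:
# Which ompi_info is being run, and which shared libraries it pulls in
which ompi_info
ldd $(which ompi_info)
# Compare the version it reports against the build you expect
ompi_info | grep "Open MPI:"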
Yes, you're right, '/usr/libs' was just a typo. Below is the detailed
reproduction on Fedora Core 9 x86_64.
yum install openmpi*
This installs 3 RPMs from the yum repository
openmpi x86_64 1.2.4-2.fc9 fedora 127 k
openmpi-devel x86_64 1.2.4-
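To see exactly what those packages installed, and in particular whether libmpi_f90 is among the files, something like this helps (package names taken from the yum output above):
# List the openmpi packages yum pulled in and the MPI libraries they own
rpm -qa 'openmpi*'
rpm -ql openmpi openmpi-devel | grep -i libmpi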
On Nov 11, 2008, at 11:40 AM, Oleg V. Zhylin wrote:
mpif90 -g -pg -CB -traceback --static -fno-range-check -L/usr/lib64/openmpi/1.2.4-gcc -c forests.f90
gfortran: unrecognized option '-CB'
gfortran: unrecognized option '-traceback'
mpif90 -g -pg -CB -traceback --static -fno-range-check -L/usr/
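-CB and -traceback are Intel ifort options, which is why gfortran rejects them; the closest gfortran equivalents are -fbounds-check and -fbacktrace. A sketch of the same compile line adjusted for gfortran (library path as in the original command; with -c it is not used at this stage anyway):
mpif90 -g -pg -fbounds-check -fbacktrace --static -fno-range-check \
    -L/usr/lib64/openmpi/1.2.4-gcc -c forests.f90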
We have recently installed the Intel 10.1 compiler suite on our cluster.
I built OpenMPI (1.2.7 and 1.2.8) with
./configure CC=icc CXX=icpc F77=ifort FC=ifort
It configures, builds and installs.
However, the MPI compiler drivers (mpicc, mpif90, etc.) fail immediately
with an error of the sort
m
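With an Open MPI built by icc/ifort, the wrapper binaries themselves depend on the Intel runtime (libimf.so and friends), so an immediate failure is very often a shared-library lookup problem rather than anything MPI-specific. A quick check, as a sketch:
# Do the wrappers resolve all of their shared libraries?
ldd $(which mpicc) | grep 'not found'
ldd $(which mpif90) | grep 'not found'
# Is the Intel runtime directory on the library path?
echo $LD_LIBRARY_PATH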
Ray Muno wrote:
I updated the LD_LIBRARY_PATH to point to the directories that contain
the installed copies of libimf.so. (this is not something I have not had
to do for other compiler/OpenMpi combinations)
How about...
(this is not something I have had to do for other compiler/OpenMpi
co
Hi Ray and list
I have Intel ifort 10.1.017 on a Rocks 4.3 cluster.
The OpenMPI compiler wrappers (i.e. "opal_wrapper") work fine,
and find the shared libraries (Intel or other) without a problem.
My guess is that this is not an OpenMPI problem, but an Intel compiler
environment glitch.
I wonde
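The usual Intel-side fix is to source the environment scripts the compiler ships with, on every node where the wrappers run; the install prefix and version below are assumptions based on the 10.1 installs mentioned in this thread:
# Set PATH and LD_LIBRARY_PATH for the Intel Fortran and C runtimes
# (prefix and version are assumed, adjust to the actual install)
source /opt/intel/fce/10.1.017/bin/ifortvars.sh
source /opt/intel/cce/10.1.017/bin/iccvars.sh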
Gus Correa wrote:
Hi Ray and list
I have Intel ifort 10.1.017 on a Rocks 4.3 cluster.
The OpenMPI compiler wrappers (i.e. "opal_wrapper") work fine,
and find the shared libraries (Intel or other) without a problem.
My guess is that this is not an OpenMPI problem, but an Intel compiler
environm
- "Ray Muno" wrote:
> Gus Correa wrote:
> > Hi Ray and list
> >
> > I have Intel ifort 10.1.017 on a Rocks 4.3 cluster.
> > The OpenMPI compiler wrappers (i.e. "opal_wrapper") work fine,
> > and find the shared libraries (Intel or other) without a problem.
> >
> > My guess is that this is
Steve Jones wrote:
Are you adding -i_dynamic to base flags, or something different?
Steve
I brought this up to see if something should be changed with the install.
For now, I am leaving that to users.
--
Ray Muno
See http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0.
On Nov 11, 2008, at 2:40 PM, Ray Muno wrote:
Steve Jones wrote:
Are you adding -i_dynamic to base flags, or something different?
Steve
I brought this up to see if something should be changed with the
insta
Jeff Squyres wrote:
See
http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0.
OK, that tells me lots of things ;-)
Should I be running configure with --with-wrapper-cflags,
--with-wrapper-fflags etc,
set to include
-i_dynamic
--
Ray Muno
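For reference, baking the flag in at build time would look something like the sketch below (the full list of --with-wrapper-* options is shown by ./configure --help; -i_dynamic itself is an Intel-compiler flag):
./configure CC=icc CXX=icpc F77=ifort FC=ifort \
    --with-wrapper-cflags=-i_dynamic \
    --with-wrapper-cxxflags=-i_dynamic \
    --with-wrapper-fflags=-i_dynamic \
    --with-wrapper-fcflags=-i_dynamic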
On Nov 11, 2008, at 2:54 PM, Ray Muno wrote:
See http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0.
OK, that tells me lots of things ;-)
Should I be running configure with --with-wrapper-cflags,
--with-wrapper-fflags etc,
set to include
-i_dynamic
If you want to
Jeff Squyres wrote:
On Nov 11, 2008, at 2:54 PM, Ray Muno wrote:
See
http://www.open-mpi.org/faq/?category=mpi-apps#override-wrappers-after-v1.0.
OK, that tells me lots of things ;-)
Should I be running configure with --with-wrapper-cflags,
--with-wrapper-fflags etc,
set to include
-i_dy
Hi Ray and list
One solution that I also use is to compile with the "-static-intel" flag
(or pass it to the MPI wrappers).
This will link *only* the Intel libraries statically,
the other shared libraries (mostly GNU) continue to be dynamically linked.
PGI compilers have a similar flag, "-Bstati
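In practice that means either adding the flag per build or baking it into the wrapper flags at configure time; the per-build form is just the following sketch (program and source names are placeholders):
# Links the Intel runtime libraries statically; GNU/system libraries stay shared
mpif90 -static-intel -o myprog myprog.f90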