Hello,
I used cc to compile. I tried to use mpicc/mpif90 to compile PETSC, but
it changed nothing.
I still have the same error.
I'm giving you the whole compile process:
4440p-jobic% gmake solv_ksp
mpicc -o solv_ksp.o -c -fPIC -m64 -I/opt/lib/petsc
-I/opt/lib/petsc/bmake/amd-64-openmpi_no_d
Hi All,
I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
supports both ethernet and infiniband. Before doing that I tested an
application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both
have been compiled with GNU compilers.
After this benchmark, I came to know
I would be interested in what others have to say about this as well.
We have been doing a bit of performance testing since we are deploying a
new cluster and it is our first InfiniBand based set up.
In our experience, so far, OpenMPI is coming out faster than MVAPICH.
Comparisons were made with d
Yann,
Your whole compile process in your email below shows you using mpicc to
link your executable. Can you please try the following for the link
step instead?
mpif90 -fPIC -m64 -o solv_ksp solv_ksp.o
-R/opt/lib/petsc/lib/amd-64-openmpi_no_debug
-L/opt/lib/petsc/lib/amd-64-openmpi_no_de
You're doing this on just one node? That would be using the OpenMPI SM
transport. Last I knew it wasn't that optimized, though it should still
be much faster than TCP.
I am surprised at your result, though I do not have MPICH2 on the
cluster right now and I don't have time to compare.
How did you r
On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
I wanted to switch from mpich2/mvapich2 to OpenMPI, as
OpenMPI supports both ethernet and infiniband. Before doing that I
tested an application 'GROMACS' to compare the performance of MPICH2
& OpenMPI. Both have been compiled with GNU compilers.
Hi,
my experience is that OpenMPI has slightly lower latency and lower
bandwidth than Intel MPI (which is based on mpich2) over InfiniBand.
I don't remember the numbers using shared memory.
As you are seeing a huge difference, I would suspect that either
something with your compilation is stra
On Wed, Oct 8, 2008 at 7:09 PM, Brock Palen wrote:
> You're doing this on just one node? That would be using the OpenMPI SM
> transport. Last I knew it wasn't that optimized, though it should still be much
> faster than TCP.
>
It's on 2 nodes. I'm using TCP only. There is no infiniband hardware.
>
>
Hello,
I just tried to link with mpif90, and that's working! I don't get the
warning.
(The small change from your command: PIC, not fPIC.)
I'm trying to compile PETSC with the new linker.
How come we don't get the warning?
Thanks,
Yann
Terry Dontje wrote:
Yann,
Your whole compile process in your email below shows you using mpicc to link your executable.
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote:
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
>> application 'GROMACS' to compare the performance of MPICH2 & OpenMPI.
FYI, the OpenMPI install details are attached here.
On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B wrote:
>
>
> On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote:
>
>> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>>
>>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>>> supports both ethernet and infiniband.
On Oct 8, 2008, at 10:26 AM, Sangamesh B wrote:
- What version of Open MPI are you using? Please send the
information listed here: http://www.open-mpi.org/community/help/
1.2.7
- Did you specify to use mpi_leave_pinned?
No
Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don
On Wed, 2008-10-08 at 09:46 -0400, Jeff Squyres wrote:
> - Have you tried compiling Open MPI with something other than GCC?
> Just this week, we've gotten some reports from an OMPI member that
> they are sometimes seeing *huge* performance differences with OMPI
> compiled with GCC vs. any ot
Jeff,
You probably already know this, but the obvious candidate here is the
memcpy() function: icc sticks in its own, which in some cases is much
better than the libc one. It's unusual for compilers to have *huge*
differences from code optimisations alone.
I know this is off topic, but I was i
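A quick way to get a feel for this is to time large copies yourself. The rough
C++ sketch below (buffer size and repeat count are arbitrary, and it is only a
sanity check, not a proper benchmark) can be built once with each
compiler/libc combination and the numbers compared:

// Crude memcpy throughput check; sizes and repeat counts are arbitrary.
#include <chrono>
#include <cstdio>
#include <cstring>
#include <vector>

int main() {
    const std::size_t bytes = 64u << 20;          // 64 MiB per copy
    const int reps = 20;
    std::vector<char> src(bytes, 1), dst(bytes, 0);

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < reps; ++i)
        std::memcpy(dst.data(), src.data(), bytes);
    auto t1 = std::chrono::steady_clock::now();

    double s = std::chrono::duration<double>(t1 - t0).count();
    std::printf("copied %d x %zu bytes in %.3f s (%.2f GB/s), dst[0]=%d\n",
                reps, bytes, s, reps * (double)bytes / s / 1e9, dst[0]);
    return 0;
}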
Yann,
Well, when you use f90 to link, it passes the linker the -t option, which is
described in the man page as follows:
Turns off the warning for multiply-defined symbols that
have different sizes or different alignments.
That's why :-)
To your original question should y
On Oct 8, 2008, at 10:58 AM, Ashley Pittman wrote:
You probably already know this, but the obvious candidate here is the
memcpy() function: icc sticks in its own, which in some cases is much
better than the libc one. It's unusual for compilers to have *huge*
differences from code optimisations alone.
Hi,
I am having a problem with the number of arguments of the "Open" macro when I
try to build a C++ code with openmpi-1.2.7 on my Mac OS 10.5.5
machine. The error message is given below. When I look at the file.h
and file_inln.h header files in the cxx folder, I am seeing that the
"Open" fu
On Wed, Oct 8, 2008 at 21:19, Sudhakar Mahalingam wrote:
> I am having a problem with the number of arguments of the "Open" macro when I try
> to build a C++ code with openmpi-1.2.7 on my Mac OS 10.5.5 machine. The
> error message is given below. When I look at the file.h and file_inln.h
> header fil
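The usual cause of that error is a function-like macro named Open that is
visible when <mpi.h> is parsed, since the MPI C++ bindings declare
MPI::File::Open(...) in file.h/file_inln.h. A minimal sketch of the
workaround, with "project_io.h" and my_open() as hypothetical stand-ins for
whatever header defines the conflicting macro:

// Sketch only; assumes an MPI build with the C++ bindings enabled.
#include <mpi.h>            // include MPI first, so MPI::File::Open is
                            // declared before any macro named Open exists
// #include "project_io.h"  // hypothetical header with: #define Open(p) my_open(p)

#ifdef Open
#  undef Open               // make sure no Open macro survives to mangle the
#endif                      // MPI::File::Open call below

int main(int argc, char** argv) {
    MPI::Init(argc, argv);
    MPI::File f = MPI::File::Open(MPI::COMM_WORLD, "out.dat",
                                  MPI::MODE_CREATE | MPI::MODE_WRONLY,
                                  MPI::INFO_NULL);
    f.Close();
    MPI::Finalize();
    return 0;
}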
On Mon, Oct/06/2008 12:24:48PM, Ray Muno wrote:
> Ethan Mallove wrote:
>
> >> Now I get farther along but the build fails at (small excerpt)
> >>
> >> mutex.c:(.text+0x30): multiple definition of `opal_atomic_cmpset_32'
> >> asm/.libs/libasm.a(asm.o):asm.c:(.text+0x30): first defined here
> >> thr
Sangamesh B wrote:
I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
supports both ethernet and infiniband. Before doing that I tested an
application 'GROMACS' to compare the performance of MPICH2 & OpenMPI.
Both have been compiled with GNU compilers.
After this benchmark, I cam
Hi guys,
[From Eugene Loh:]
>> OpenMPI - 25 m 39 s.
>> MPICH2 - 15 m 53 s.
>>
> With regards to your issue, do you have any indication when you get that
> 25m39s timing if there is a grotesque amount of time being spent in MPI
> calls? Or, is the slowdown due to non-MPI portions?
Just to ad
One thing to look for is the process distribution. Depending on the
application's communication pattern, the process distribution can have a
tremendous impact on the execution time. Imagine that the application
splits the processes into two equal groups based on rank and only
communicates within each
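One quick way to check what distribution you actually got is to have every
rank print the host it is running on, for example:

// Print which host each rank landed on, to compare the default process
// placement of the two MPI stacks.
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, len = 0;
    char host[MPI_MAX_PROCESSOR_NAME];
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Get_processor_name(host, &len);
    std::printf("rank %d is on %s\n", rank, host);
    MPI_Finalize();
    return 0;
}

Comparing that output between the OpenMPI and MPICH2 runs shows whether the
two stacks lay the ranks out across the nodes differently.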
Make sure you don't use a "debug" build of Open MPI. If you use trunk,
the build system detects it and turns on debug by default. It really
kills performance. --disable-debug will remove all those nasty printfs
from the critical path.
You can also run a simple ping-pong test (Netpipe is a g
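If NetPIPE is not at hand, a bare-bones ping-pong between two ranks already
tells you whether the point-to-point path is behaving; the message size and
iteration count below are arbitrary choices:

// Ping-pong between ranks 0 and 1; reports average round-trip time and a
// rough per-direction bandwidth figure.
#include <mpi.h>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {                        // needs at least two ranks
        if (rank == 0) std::printf("run with at least 2 ranks\n");
        MPI_Finalize();
        return 0;
    }

    const int iters = 1000;
    const int bytes = 1 << 20;             // 1 MiB messages, arbitrary
    std::vector<char> buf(bytes, 0);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < iters; ++i) {
        if (rank == 0) {
            MPI_Send(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf.data(), bytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf.data(), bytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double rtt = (t1 - t0) / iters;    // average round-trip time
        std::printf("avg round trip %.6f s, ~%.1f MB/s each way\n",
                    rtt, 2.0 * bytes / rtt / 1e6);
    }
    MPI_Finalize();
    return 0;
}

Run it with one rank on each node over TCP and compare the numbers between
the two MPI builds.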
Jed,
You are correct. I found an "Open" macro defined in another of our header
files, which was included before the MPI header files. (This order was
actually working fine with mpich-1.2.7, but both openmpi-1.2.7 and
MPICH-2 complained and threw errors at me.) Now when I change the
order of
Eugene Loh wrote:
Sangamesh B wrote:
The job is run on 2 nodes - 8 cores.
OpenMPI - 25 m 39 s.
MPICH2 - 15 m 53 s.
I don't understand MPICH very well, but it seemed as though some of
the flags used in building MPICH are supposed to be added automatically
to the mpicc/etc. compiler wrappers
On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
Make sure you don't use a "debug" build of Open MPI. If you use
trunk, the build system detects it and turns on debug by default. It
really kills performance. --disable-debug will remove all those
nasty printfs from the critical path.
I have configured with the additional flags (--enable-ft-thread
--enable-mpi-threads), but there is no change in behaviour; it still
gives a seg fault.
Open MPI version: 1.3a1r19685
BLCR version: 0.7.3
The core file is attached.
hello.c is the sample MPI program whose core was dumped.