Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-25 Thread Sangamesh B
On Sat, Oct 25, 2008 at 12:33 PM, Sangamesh B wrote:
> On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh wrote:
>> Sangamesh B wrote:
>>> I reinstalled all software with -O3 optimization. Following are the
>>> performance numbers for a 4 process job on a single node:
>>>
>>> MPICH2: 26 m 54 s

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-25 Thread Sangamesh B
On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh wrote:
> Sangamesh B wrote:
>> I reinstalled all software with -O3 optimization. Following are the
>> performance numbers for a 4 process job on a single node:
>>
>> MPICH2: 26 m 54 s
>> OpenMPI: 24 m 39 s
>
> I'm not sure I'm following. OMPI

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-24 Thread Eugene Loh
Sangamesh B wrote:
> I reinstalled all software with -O3 optimization. Following are the
> performance numbers for a 4 process job on a single node:
>
> MPICH2: 26 m 54 s
> OpenMPI: 24 m 39 s

I'm not sure I'm following. OMPI is faster here, but is that a result of MPICH2 slowing down? The or

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-15 Thread Rajeev Thakur
For MPICH2 1.0.7, configure with --with-device=ch3:nemesis. That will use shared memory within a node, unlike ch3:sock, which uses TCP. Nemesis is the default in 1.1a1.

Rajeev

> Date: Wed, 15 Oct 2008 18:21:17 +0530
> From: "Sangamesh B"
> Subject: Re: [OMPI users] Performance
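As a sketch of that rebuild (the tarball name, install prefix, and PATH handling are illustrative assumptions, not taken from the thread):

```shell
# Build MPICH2 with the Nemesis channel so that ranks on the same node
# communicate over shared memory instead of TCP sockets (ch3:sock).
tar xzf mpich2-1.0.7.tar.gz && cd mpich2-1.0.7
./configure --with-device=ch3:nemesis --prefix=/opt/mpich2-nemesis
make && make install

# Rebuild the application with this MPI's compiler wrappers before re-timing:
export PATH=/opt/mpich2-nemesis/bin:$PATH
```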

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-15 Thread Sangamesh B
On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins wrote:
> Hi guys,
>
> On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote:
>> Actually I had much different results,
>>
>> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7
>> pgi/7.2
>> mpich2 gcc
>
> For some reason

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-10 Thread Brian Dobbins
Hi guys,

On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote:
> Actually I had much different results,
>
> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7
> pgi/7.2
> mpich2 gcc

For some reason, the difference in minutes didn't come through, it seems, but I would gue

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-10 Thread Brock Palen
Whoops, didn't include the mpich2 numbers: 20M mpich2, same node.

Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985

On Oct 10, 2008, at 12:57 PM, Brock Palen wrote:
> Actually I had much different results,
>
> gromacs-3.3.1 one node dual core dual so

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-10 Thread Brock Palen
Actually I had much different results:

gromacs-3.3.1, one node, dual core dual socket opt2218, openmpi-1.2.7 pgi/7.2, mpich2 gcc

19M OpenMPI
M Mpich2

So for me OpenMPI+pgi was faster; I don't know how you got such a low mpich2 number. On the other hand, if you do this preprocess befo

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-10 Thread Sangamesh B
On Thu, Oct 9, 2008 at 7:30 PM, Brock Palen wrote:
> Which benchmark did you use?

Out of the 4 benchmarks, I used the d.dppc benchmark.

> On Oct 9, 2008, at

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Anthony Chan
"Brian Dobbins" wrote:
> OpenMPI : 120m 6s
> MPICH2 : 67m 44s
>
> That seems to indicate that something else is going on -- with -np 1,
> there should be no MPI communication, right? I wonder if the memory
> allocator performance is coming into play here.

If the app sends message to its

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Terry Frankcombe
> I'm rusty on my GCC, too, though - does it default to an O2
> level, or does it default to no optimizations?

Default gcc is indeed no optimisation. gcc seems to like making users type really long complicated command lines even more than OpenMPI does. (Yes yes, I know! Don't tell me!)

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Eugene Loh
Brian Dobbins wrote:
> On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote:
>> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>>> OpenMPI : 120m 6s
>>> MPICH2 : 67m 44s
>>
>> That seems to indicate that something else is going on -- with -np 1, there

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Brian Dobbins
On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote:
> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>> OpenMPI : 120m 6s
>> MPICH2 : 67m 44s
>
> That seems to indicate that something else is going on -- with -np 1, there
> should be no MPI communication, right? I wonder if the memory all

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Jeff Squyres
On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
> I've tested GROMACS for a single process (mpirun -np 1):
> Here are the results:
>
> OpenMPI : 120m 6s
> MPICH2 : 67m 44s

That seems to indicate that something else is going on -- with -np 1, there should be no MPI communication, right? I wonder if
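One way to localize such a serial-path gap is to time identical single-rank runs under both stacks; a sketch, with install paths and binary names assumed for illustration:

```shell
# With -np 1 there is no inter-process MPI traffic, so any large timing
# gap must come from elsewhere: compiler flags, a debug build, or the
# memory allocator. Paths and the mdrun binary names are assumptions.
time /opt/openmpi/bin/mpirun -np 1 ./mdrun_ompi   -s topol.tpr
time /opt/mpich2/bin/mpirun  -np 1 ./mdrun_mpich2 -s topol.tpr
```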

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Brock Palen
Which benchmark did you use?

On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
> On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote:
>> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>>> Make su

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote:
> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>
>> Make sure you don't use a "debug" build of Open MPI. If you use trunk, the
>> build system detects it and turns on debug by default. It really kills
>> performance. --disable-debug wi

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote:
> Hi guys,
>
> [From Eugene Loh:]
>>> OpenMPI - 25 m 39 s.
>>> MPICH2 - 15 m 53 s.
>>
>> With regards to your issue, do you have any indication when you get that
>> 25m39s timing if there is a grotesque amount of time being spent in MPI

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
> Make sure you don't use a "debug" build of Open MPI. If you use trunk, the
> build system detects it and turns on debug by default. It really kills
> performance. --disable-debug will remove all those nasty printfs from the
> critical path.

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Eugene Loh
Eugene Loh wrote:
> Sangamesh B wrote:
>> The job is run on 2 nodes - 8 cores.
>>
>> OpenMPI - 25 m 39 s.
>> MPICH2 - 15 m 53 s.

I don't understand MPICH very well, but it seemed as though some of the flags used in building MPICH are supposed to be added in automatically to the mpicc/etc compiler wra

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Aurélien Bouteiller
Make sure you don't use a "debug" build of Open MPI. If you use trunk, the build system detects it and turns on debug by default. It really kills performance. --disable-debug will remove all those nasty printfs from the critical path. You can also run a simple ping-pong test (Netpipe is a g
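Both suggestions can be sketched as follows (install prefix, NetPIPE version, and the `make mpi` build step are assumptions):

```shell
# 1. Build Open MPI without debug instrumentation; trunk builds enable
#    it by default, and it noticeably slows the critical path.
./configure --disable-debug --prefix=/opt/openmpi-opt
make all install

# 2. Sanity-check point-to-point latency/bandwidth with a ping-pong test
#    such as NetPIPE's MPI driver before re-running the application.
cd NetPIPE-3.7.1 && make mpi        # produces the NPmpi binary
mpirun -np 2 ./NPmpi
```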

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread George Bosilca
One thing to look for is the process distribution. Based on the application's communication pattern, the process distribution can have a tremendous impact on the execution time. Imagine that the application splits the processes in two equal groups based on the rank and only communicates in each
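With Open MPI 1.2.x, that distribution can be controlled from the mpirun command line; a sketch (the hostfile and application name are invented, and the rank placements in the comments assume two 4-slot nodes):

```shell
# byslot (the default) fills each node's slots before moving on, so
# consecutive ranks share a node; bynode round-robins ranks across
# nodes. For nearest-neighbor communication patterns the choice can
# change how much traffic crosses the network.
mpirun -np 8 --hostfile hosts --byslot ./app   # ranks 0-3 on node1, 4-7 on node2
mpirun -np 8 --hostfile hosts --bynode ./app   # ranks alternate between nodes
```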

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brian Dobbins
Hi guys,

[From Eugene Loh:]
>>> OpenMPI - 25 m 39 s.
>>> MPICH2 - 15 m 53 s.
>
> With regards to your issue, do you have any indication when you get that
> 25m39s timing if there is a grotesque amount of time being spent in MPI
> calls? Or, is the slowdown due to non-MPI portions?

Just to ad

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Eugene Loh
Sangamesh B wrote:
> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports
> both ethernet and infiniband. Before doing that I tested an application
> 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have been
> compiled with GNU compilers. After this benchmark, I cam

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 10:58 AM, Ashley Pittman wrote:
> You probably already know this but the obvious candidate here is the
> memcpy() function; icc sticks in its own, which in some cases is much
> better than the libc one. It's unusual for compilers to have *huge*
> differences from code optimisations a

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brock Palen
Jeff,

> You probably already know this but the obvious candidate here is the
> memcpy() function; icc sticks in its own, which in some cases is much
> better than the libc one. It's unusual for compilers to have *huge*
> differences from code optimisations alone.

I know this is off topic, but I was i

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ashley Pittman
On Wed, 2008-10-08 at 09:46 -0400, Jeff Squyres wrote:
> - Have you tried compiling Open MPI with something other than GCC?
> Just this week, we've gotten some reports from an OMPI member that
> they are sometimes seeing *huge* performance differences with OMPI
> compiled with GCC vs. any ot

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 10:26 AM, Sangamesh B wrote:
>> - What version of Open MPI are you using? Please send the information
>> listed here: http://www.open-mpi.org/community/help/
>
> 1.2.7
>
>> - Did you specify to use mpi_leave_pinned?
>
> No

Use "--mca mpi_leave_pinned 1" on your mpirun command line (I don
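Jeff's suggestion, as a command-line sketch (the process count, hostfile, and application name are illustrative):

```shell
# leave_pinned caches memory registrations across sends, which mainly
# helps large-message bandwidth on RDMA networks such as InfiniBand;
# it has no effect on plain TCP runs.
mpirun --mca mpi_leave_pinned 1 -np 8 --hostfile hosts ./app
```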

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
FYI, attached here are the OpenMPI install details.

On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B wrote:
> On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote:
>> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>>> supports bot

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote:
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
>> application 'GROMACS' to compare the performanc

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:09 PM, Brock Palen wrote:
> You're doing this on just one node? That would be using the OpenMPI SM
> transport. Last I knew it wasn't that optimized, though it should still
> be much faster than TCP.

It's on 2 nodes. I'm using TCP only. There is no infiniband hardware.

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Samuel Sarholz
Hi, my experience is that OpenMPI has slightly less latency and less bandwidth than Intel MPI (which is based on mpich2) using InfiniBand. I don't remember the numbers using shared memory. As you are seeing a huge difference, I would suspect that either something with your compilation is stra

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Jeff Squyres
On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports
> both ethernet and infiniband. Before doing that I tested an application
> 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have
> been compiled with GNU co

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Brock Palen
You're doing this on just one node? That would be using the OpenMPI SM transport. Last I knew it wasn't that optimized, though it should still be much faster than TCP. I am surprised at your result, though; I do not have MPICH2 on the cluster right now, so I don't have time to compare. How did you r
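To confirm which transport a run is actually using, the BTLs can be pinned explicitly on the mpirun command line; a sketch with assumed process counts and application name:

```shell
# Restrict Open MPI to specific byte-transfer layers: sm for on-node
# shared memory, tcp for sockets; the self (loopback) BTL is always
# required alongside whichever transport is chosen.
mpirun --mca btl sm,self  -np 4 ./app                    # one node, shared memory
mpirun --mca btl tcp,self -np 8 --hostfile hosts ./app   # TCP between nodes
```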

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Ray Muno
I would be interested in what others have to say about this as well. We have been doing a bit of performance testing since we are deploying a new cluster, and it is our first InfiniBand-based setup. In our experience so far, OpenMPI is coming out faster than MVAPICH. Comparisons were made with d

[OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
Hi All,

I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both ethernet and infiniband. Before doing that I tested an application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have been compiled with GNU compilers. After this benchmark, I came to know