Re: [OMPI users] ompi-restart issue : ompi-restart doesn't work across nodes - possible installation problem or environment setting problem??

2008-10-09 Thread arun dhakne
These are the bt's of 2 cores .. gdb hello core.14653 #0 0x00300bc0cbc0 in ?? () #1 0x2b09d0fb in ?? () #2 0x7fff6a782920 in ?? () #3 0x2ae3d348 in ?? () #4 0x7fff6a7827b0 in ?? () #5 0x003806e6bcb4 in ?? () #6 0x0000000000000000 in ?? () gdb hello core.146
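
A backtrace full of "??" frames usually means the binary lacks debug symbols. A minimal sketch of how to get a symbolic trace, assuming the program comes from a hello.c source as the thread suggests (the core file name will differ on each run):

    # rebuild with debug symbols so gdb can resolve the frames
    mpicc -g hello.c -o hello
    # reproduce the crash, then load the resulting core file
    gdb ./hello core.14653
    (gdb) bt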

[OMPI users] --enable-static --enable-shared using intel compilers

2008-10-09 Thread Rene Salmon
Hi, I am trying to compile openmpi-1.2.7 and get both the static and shared mpi libs using the intel compilers. Here is my configure line: ./configure CFLAGS="-static-intel" CXXFLAGS="-static-intel" FFLAGS="-static-intel" FCFLAGS="-static-intel" CC=icc CXX=icpc F77=ifort FC=ifort --enable-shared
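
A hedged completion of the configure line above, with the static build requested alongside the shared one (the preview cuts off, so the flags beyond this point are a sketch; the prefix is illustrative):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
                CFLAGS="-static-intel" CXXFLAGS="-static-intel" \
                FFLAGS="-static-intel" FCFLAGS="-static-intel" \
                --enable-shared --enable-static \
                --prefix=/opt/openmpi-1.2.7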

Re: [OMPI users] build failed using intel compilers on mac os x

2008-10-09 Thread Jeff Squyres
The CXX compiler should be icpc, not icc. On Oct 7, 2008, at 11:08 AM, Massimo Cafaro wrote: Dear all, I tried to build the latest v1.2.7 open-mpi version on Mac OS X 10.5.5 using the intel c, c++ and fortran compilers v10.1.017 (the latest ones released by intel). Before starting the b
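
A sketch of the compiler assignments Jeff is describing for the Intel toolchain (remaining configure arguments omitted):

    # icc is the C compiler; the C++ compiler must be icpc
    ./configure CC=icc CXX=icpc F77=ifort FC=ifort ...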

[OMPI users] SGE tight integration and "tm" protocol for start

2008-10-09 Thread Sean Davis
I am relatively new to OpenMPI and Sun Grid Engine parallel integration. I have a small cluster that is running SGE6.2 on linux machines all using Intel Xeon processors. I have installed OpenMPI 1.2.7 from source using the --with-sge switch. Now, I am trying to troubleshoot some problems I am ha
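
For reference, a rough sketch of a tight-integration job script (the parallel environment name "orte" and the slot count are assumptions; with a --with-sge build, mpirun discovers the SGE-allocated hosts on its own):

    #!/bin/sh
    #$ -N ompi_test
    #$ -cwd
    #$ -pe orte 8
    mpirun -np $NSLOTS ./hello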

Re: [OMPI users] Problem launching onto Bourne shell

2008-10-09 Thread Jeff Squyres
FWIW, the fix has been pushed into the trunk, 1.2.8, and 1.3 SVN branches. So I'll probably take down the hg tree (we use those as temporary branches). On Oct 9, 2008, at 2:32 PM, Hahn Kim wrote: Hi, Thanks for providing a fix, sorry for the delay in response. Once I found out about -x

Re: [OMPI users] Problem launching onto Bourne shell

2008-10-09 Thread Hahn Kim
Hi, Thanks for providing a fix, sorry for the delay in response. Once I found out about -x, I've been busy working on the rest of our code, so I haven't had the time to try out the fix. I'll take a look at it as soon as I can and will let you know how it works out. Hahn On Oct 7, 2008, at
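
For context, the -x workaround mentioned here exports environment variables to the launched processes; a quick sketch (variable and program names are illustrative):

    # forward an existing variable to all launched ranks
    mpirun -x LD_LIBRARY_PATH -np 4 ./a.out
    # or set a value inline
    mpirun -x MY_VAR=value -np 4 ./a.out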

Re: [OMPI users] ompi-restart issue : ompi-restart doesn't work across nodes - possible installation problem or environment setting problem??

2008-10-09 Thread Josh Hursey
I cannot interpret the raw core files since they are specific to your system and setup. Can you run it through gdb and get a backtrace? "gdb hello core.1234" then use the 'bt' command from inside gdb. That will help me start to focus in on the problem. Cheers, Josh On Oct 8, 2008, at 10:22 PM,

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Anthony Chan
- "Brian Dobbins" wrote: > OpenMPI : 120m 6s > MPICH2 : 67m 44s > > That seems to indicate that something else is going on -- with -np 1, > there should be no MPI communication, right? I wonder if the memory > allocator performance is coming into play here. If the app sends message to its

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Terry Frankcombe
>I'm rusty on my GCC, too, though - does it default to an O2 > level, or does it default to no optimizations? Default gcc is indeed no optimisation. gcc seems to like making users type really long complicated command lines even more than OpenMPI does. (Yes yes, I know! Don't tell me!)
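
A quick illustration of the point about gcc defaults (the flags are standard gcc options; the wrapper behavior is Open MPI's):

    gcc hello.c -o hello          # no flags: compiles at -O0
    gcc -O2 hello.c -o hello      # optimisation must be requested explicitly
    mpicc -O2 hello.c -o hello    # Open MPI's wrapper forwards the flag
    mpicc --showme                # prints the underlying compiler command line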

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Eugene Loh
Brian Dobbins wrote: On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote: On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote: OpenMPI : 120m 6s MPICH2 : 67m 44s That seems to indicate that something else is going on -- with -np 1, there

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Brian Dobbins
On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote: > On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote: > >> OpenMPI : 120m 6s >> MPICH2 : 67m 44s >> > > That seems to indicate that something else is going on -- with -np 1, there > should be no MPI communication, right? I wonder if the memory all

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Jeff Squyres
On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote: I've tested GROMACS for a single process (mpirun -np 1): Here are the results: OpenMPI : 120m 6s MPICH2 : 67m 44s That seems to indicate that something else is going on -- with -np 1, there should be no MPI communication, right? I wonder if
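
A minimal sketch of the -np 1 comparison under discussion (the mdrun binary names, launchers, and input file are assumptions for illustration; GROMACS reads a .tpr run input via -s):

    # one rank each, same input, one build per line
    time mpirun -np 1 ./mdrun_ompi -s topol.tpr     # Open MPI build
    time mpiexec -n 1 ./mdrun_mpich2 -s topol.tpr   # MPICH2 build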

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Brock Palen
Which benchmark did you use? Brock Palen www.umich.edu/~brockp Center for Advanced Computing bro...@umich.edu (734)936-1985 On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote: On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote: On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote: Make su

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote: > On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote: > > Make sure you don't use a "debug" build of Open MPI. If you use trunk, the >> build system detects it and turns on debug by default. It really kills >> performance. --disable-debug wi
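
A sketch of the non-debug build being recommended (the prefix is illustrative):

    ./configure --disable-debug --prefix=/opt/openmpi-1.2.7
    make all install
    # verify the installed build has debugging turned off
    ompi_info | grep -i debug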

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote: > > Hi guys, > > [From Eugene Loh:] > >> OpenMPI - 25 m 39 s. >>> MPICH2 - 15 m 53 s. >>> >> With regards to your issue, do you have any indication when you get that >> 25m39s timing if there is a grotesque amount of time being spent in MPI >