These are the backtraces of the two core files:
gdb hello core.14653
#0 0x00300bc0cbc0 in ?? ()
#1 0x2b09d0fb in ?? ()
#2 0x7fff6a782920 in ?? ()
#3 0x2ae3d348 in ?? ()
#4 0x7fff6a7827b0 in ?? ()
#5 0x003806e6bcb4 in ?? ()
#6 0x in ?? ()
gdb hello core.146
Hi,
I am trying to compile openmpi-1.2.7 and get both the static and shared
mpi libs using the intel compilers. Here is my configure line:
./configure CFLAGS="-static-intel" CXXFLAGS="-static-intel"
FFLAGS="-static-intel" FCFLAGS="-static-intel" CC=icc CXX=icpc F77=ifort
FC=ifort --enable-shared
The CXX compiler should be icpc, not icc.
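As a side note, the configure line above only requests shared libraries (which I believe is the default anyway). A minimal sketch of an invocation that builds both the static and shared MPI libraries, assuming the stock --enable-static / --enable-shared switches (-static-intel only controls how the Intel runtime libraries get linked in):

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort \
        CFLAGS="-static-intel" CXXFLAGS="-static-intel" \
        FFLAGS="-static-intel" FCFLAGS="-static-intel" \
        --enable-static --enable-shared   # build both libmpi.a and libmpi.so
    make all install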
On Oct 7, 2008, at 11:08 AM, Massimo Cafaro wrote:
Dear all,
I tried to build the latest v1.2.7 Open MPI version on Mac OS X
10.5.5 using the Intel C, C++ and Fortran compilers v10.1.017 (the
latest ones released by Intel). Before starting the b
I am relatively new to OpenMPI and Sun Grid Engine parallel
integration. I have a small cluster that is running SGE 6.2 on Linux
machines, all using Intel Xeon processors. I have installed OpenMPI
1.2.7 from source using the --with-sge switch. Now, I am trying to
troubleshoot some problems I am ha
FWIW, the fix has been pushed into the trunk, 1.2.8, and 1.3 SVN
branches. So I'll probably take down the hg tree (we use those as
temporary branches).
On Oct 9, 2008, at 2:32 PM, Hahn Kim wrote:
Hi,
Thanks for providing a fix, sorry for the delay in response. Once I
found out about -x
Hi,
Thanks for providing a fix, sorry for the delay in response. Once I
found out about -x, I've been busy working on the rest of our code, so
I haven't had the time to try out the fix. I'll take a look at it
as soon as I can and will let you know how it works out.
Hahn
On Oct 7, 2008, at
I cannot interpret the raw core files since they are specific to your
system and setup. Can you run it through gdb and get a backtrace? "gdb
hello core.1234" then use the 'bt' command from inside gdb.
That will help me start to focus in on the problem.
Cheers,
Josh
On Oct 8, 2008, at 10:22 PM,
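For anyone following along, a minimal sketch of the sequence Josh describes (the executable and core file names are just the ones from the first post; substitute your own):

    gdb hello core.14653   # load the executable together with the core it dumped
    (gdb) bt               # print the stack backtrace of the crashed process
    (gdb) quit

If the frames still come back as "?? ()" like the traces at the top of the thread, rebuilding the application with debugging symbols (e.g. mpicc -g ...) before reproducing the crash should make them readable.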
- "Brian Dobbins" wrote:
> OpenMPI : 120m 6s
> MPICH2 : 67m 44s
>
> That seems to indicate that something else is going on -- with -np 1,
> there should be no MPI communication, right? I wonder if the memory
> allocator performance is coming into play here.
If the app sends messages to its
>I'm rusty on my GCC, too, though - does it default to an O2
> level, or does it default to no optimizations?
By default, gcc indeed applies no optimisation (-O0). gcc seems to like making users
type really long complicated command lines even more than OpenMPI does.
(Yes yes, I know! Don't tell me!)
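As a concrete sketch of that point, assuming Open MPI's mpicc wrapper around gcc (hello.c is only a placeholder source file):

    mpicc -showme              # show the underlying gcc command line the wrapper runs
    mpicc -O2 -o hello hello.c # without an explicit -O flag, gcc stays at -O0 (no optimisation)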
Brian Dobbins wrote:
On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote:
On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
OpenMPI : 120m 6s
MPICH2 : 67m 44s
That seems to indicate that something else is going on -- with -np 1,
there
On Thu, Oct 9, 2008 at 10:13 AM, Jeff Squyres wrote:
> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>
>> OpenMPI : 120m 6s
>> MPICH2 : 67m 44s
>>
>
> That seems to indicate that something else is going on -- with -np 1, there
> should be no MPI communication, right? I wonder if the memory all
On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
I've tested GROMACS for a single process (mpirun -np 1):
Here are the results:
OpenMPI : 120m 6s
MPICH2 : 67m 44s
That seems to indicate that something else is going on -- with -np 1,
there should be no MPI communication, right? I wonder if
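One rough way to test the allocator theory, sketched here under the assumption that this Open MPI series accepts --with-memory-manager=none (./configure --help will confirm the exact spelling), is to rebuild without the internal ptmalloc2 allocator and re-time the single-process run:

    ./configure --with-memory-manager=none ...   # other configure options elided
    make all install
    time mpirun -np 1 ./your_benchmark           # placeholder binary name; compare against the earlier timings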
Which benchmark did you use?
Brock Palen
www.umich.edu/~brockp
Center for Advanced Computing
bro...@umich.edu
(734)936-1985
On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres
wrote:
On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
Make su
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote:
> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>
>> Make sure you don't use a "debug" build of Open MPI. If you use trunk, the
>> build system detects it and turns on debug by default. It really kills
>> performance. --disable-debug wi
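A minimal sketch of both halves of that advice, assuming the ompi_info tool that ships with Open MPI is in the PATH:

    ompi_info | grep -i debug          # an existing install reports whether debug support was built in
    ./configure --disable-debug ...    # when building from trunk, turn the default debug build off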
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote:
>
> Hi guys,
>
> [From Eugene Loh:]
>
>>> OpenMPI - 25 m 39 s.
>>> MPICH2 - 15 m 53 s.
>>>
>> With regards to your issue, do you have any indication when you get that
>> 25m39s timing if there is a grotesque amount of time being spent in MPI
>