Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2010-01-05 Thread Gus Correa
Hi Ilya 1) The only thing that stands out as very different from what I do here is your configuration flag "--enable-mpi-threads". Maybe an OpenMPI pro/developer could shed some light on whether that flag could be a potential source of the errors you see. Considering that when you s
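
To rule that flag out, one option is to rebuild OpenMPI without it and relink HPL against the new install. A minimal sketch, reusing the compiler settings quoted later in this thread; the install prefix is an assumption, not from the original posts:
---
# Hypothetical rebuild without --enable-mpi-threads (prefix is an example path)
CC=icc CFLAGS="-O3" CXX=icpc CXXFLAGS="-O3" \
F77=ifort FFLAGS="-O3" FC=ifort FCFLAGS="-O3" \
./configure --prefix=/opt/openmpi/intel-nothreads
make all install
# Rebuild HPL afterwards so xhpl picks up this MPI (MPdir in HPL's Make.<arch>)
---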

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2010-01-05 Thread ilya zelenchuk
Happy New Year, Gus! Yes, I'm using affinity. This is my openmpi-mca-params.conf file:
---
# Use RSH instead of SSH
pls_rsh_agent=rsh
# Turn on processor affinity
mpi_paffinity_alone=1
# Use eth1.
btl_tcp_if_include=eth1
# Exclude lo and eth0.
btl_tcp_if_exclude=lo,eth0
---
I r
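
To confirm those settings are actually picked up on the compute nodes, ompi_info can print the current MCA parameter values. A small sketch, not from the original thread; the grep patterns are just examples:
---
# Show the affinity and TCP interface parameters OpenMPI will use
ompi_info --param mpi all | grep paffinity
ompi_info --param btl tcp | grep tcp_if
---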

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-30 Thread Gus Correa
Hi Ilya Well, many possibilities to explain the error have been discarded already. Another long shot: have you tried setting processor affinity? Not sure how it would work on Xeons. We have AMD processors here, and setting processor affinity does help performance on HPL and other programs. I
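
For a one-off test, affinity can also be requested on the mpirun command line rather than in openmpi-mca-params.conf. A minimal sketch for the 1.3 series; the host file, process count, and xhpl path are assumptions:
---
# Hypothetical run with processor affinity enabled only for this job
mpirun --mca mpi_paffinity_alone 1 -np 56 -hostfile ./hosts ./xhpl
---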

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-30 Thread ilya zelenchuk
Hey! 2009/12/29 Gus Correa :
> Hi Ilya
>
> OK, with 28 nodes and 4GB/node,
> you have much more memory than I thought.
> The maximum N is calculated based on the total memory
> you have (assuming the nodes are homogeneous, have the same RAM),
> not based on the memory per node.
Yep. I know. I've p

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-29 Thread Gus Correa
Hi Ilya OK, with 28 nodes and 4GB/node, you have much more memory than I thought. The maximum N is calculated based on the total memory you have (assuming the nodes are homogeneous, i.e. have the same RAM), not based on the memory per node. I haven't tried OpenMPI 1.3.3. The last time I ran HPL was with OpenM
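
To make that concrete: HPL stores an N x N matrix of 8-byte doubles, so a common rule of thumb is N_max ~ sqrt(0.8 * total_RAM_bytes / 8), leaving about 20% of RAM for the OS. A rough sketch with the figures from this thread (28 nodes x 4 GB); the 80% factor is an assumption, not from the original posts:
---
# Rough upper bound on N for 28 nodes with 4 GB each
awk 'BEGIN { total = 28 * 4e9; printf "N_max ~ %d\n", sqrt(0.80 * total / 8) }'
# -> about 105000; Ns=17920 needs only ~2.6 GB for the whole matrix, far below the limit
---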

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-29 Thread ilya zelenchuk
Hello, Gus! Sorry for the lack of debug info. I have 28 nodes. Each node has 2 Xeon 2.4 GHz processors with 4 GB RAM. OpenMPI 1.3.3 was compiled as:
CC=icc CFLAGS=" -O3" CXX=icpc CXXFLAGS=" -O3" F77=ifort FFLAGS=" -O3" FC=ifort FCFLAGS=" -O3" ./configure --prefix=/opt/openmpi/intel/ --enable-debu
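
ompi_info from that install reports the build's thread support, which shows whether --enable-mpi-threads took effect. A minimal sketch, not from the original thread:
---
# Check whether this OpenMPI build was compiled with MPI thread support
/opt/openmpi/intel/bin/ompi_info | grep -i thread
---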

Re: [OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-28 Thread Gus Correa
Hi Ilya Did you recompile HPL with OpenMPI, or just launch the MPICH2 executable with the OpenMPI mpiexec? You probably know this, but you cannot mix different MPIs at compile time and run time. Also, the HPL maximum problem size (N) depends on how much memory/RAM you have. If you make N too big, t
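
A quick way to check which MPI an existing xhpl binary was built against is to look at its shared-library dependencies. A minimal sketch; the binary path is an assumption, and it only works if xhpl was dynamically linked:
---
# Shows whose libmpi the binary will load at run time
ldd ./xhpl | grep -i mpi
---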

[OMPI users] Problem with HPL while using OpenMPI 1.3.3

2009-12-28 Thread ilya zelenchuk
Good day, everyone! I have a problem running the HPL benchmark with OpenMPI 1.3.3. When the problem size (Ns) is smaller than 1 - all is good. But when I set Ns to 17920 (for example), I get errors:
===
[ums1:05086] ../../ompi/datatype/datatype_pack.h:37 Pointer 0xb27752c0 size 4032 is outside [