Re: [OMPI users] How to check OMPI is using IB or not?

2010-01-27 Thread Sangamesh B
Thanks Brett for the useful information. On Wed, Jan 27, 2010 at 12:40 PM, Brett Pemberton wrote: > > - "Sangamesh B" wrote: > > > Hi all, > > > > If an infiniband network is configured successfully, how to confirm > > that Open MPI is u

[OMPI users] How to check OMPI is using IB or not?

2010-01-27 Thread Sangamesh B
Hi all, If an infiniband network is configured successfully, how to confirm that Open MPI is using infiniband and not another available ethernet network? In earlier versions, I've seen that if OMPI was running on ethernet, it gave a warning - it's running on a slower network. Is this available in 1.3
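One common way to verify this (a minimal sketch, assuming a 1.3-era build with the openib BTL compiled in; ./app and the process count are placeholders) is to force the IB transport so the job fails loudly if IB is unusable, or to turn up the BTL selection verbosity:

    $ ompi_info | grep openib                      # confirm the openib BTL was built
    $ mpirun --mca btl openib,self,sm -np 4 ./app  # fails if IB cannot be used
    $ mpirun --mca btl_base_verbose 30 -np 4 ./app # logs which BTLs each process selects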

Re: [OMPI users] problem with progress thread and orte

2010-01-12 Thread Sangamesh B
Hi, What are the advantages with progress-threads feature? Thanks, Sangamesh On Fri, Jan 8, 2010 at 10:13 PM, Ralph Castain wrote: > Yeah, the system doesn't currently support enable-progress-threads. It is a > two-fold problem: ORTE won't work that way, and some parts of the MPI layer > w

[OMPI users] Is OpenMPI's orted = MPICH2's smpd?

2009-12-21 Thread Sangamesh B
Hi, MPICH2 has different process managers: MPD, SMPD, GFORKER, etc. Is Open MPI's startup daemon orted similar to MPICH2's smpd? Or something else? Thanks, Sangamesh

Re: [OMPI users] Job fails after hours of running on a specific node

2009-12-07 Thread Sangamesh B
Ibdiagnet, it is an open source IB > network diagnostic tool: > http://linux.die.net/man/1/ibdiagnet > The tool is part of the OFED distribution. > > Pasha. > > > Sangamesh B wrote: > >> Dear all, >> The CPMD application which is compiled with OpenMPI-1.3 (Intel

[OMPI users] With IMPI works fine, with OMPI fails

2009-10-28 Thread Sangamesh B
Hi all, The compilation of a fortran application - CPMD-3.13.2 - with OpenMP + OpenMPI-1.3.3 + ifort-10.1 + MKL-10.0 is failing with the following error on a Rocks-5.1 Linux cluster: /lib/cpp -P -C -traditional -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8 -DLINUX_IFC -DPARALLEL -DMYRINET ./potfor

Re: [OMPI users] OMPI-1.2.0 is not getting installed

2009-10-20 Thread Sangamesh B
support team. http://software.intel.com/en-us/forums/intel-math-kernel-library/topic/69104/ Is it possible to get it installed? Thanks, Sangamesh > The current version is 1.3.3! > > Jody > > On Tue, Oct 20, 2009 at 2:48 PM, Sangamesh B wrote: > > Hi, > > > > I

[OMPI users] OMPI-1.2.0 is not getting installed

2009-10-20 Thread Sangamesh B
Hi, It's required here to install Open MPI 1.2 on an HPC cluster with CentOS 5.2 Linux, a Mellanox IB card, switch and OFED-1.4. But the configure is failing with: [root@master openmpi-1.2]# ./configure --prefix=/opt/mpi/openmpi/1.2/intel --with-openib=/usr .. ... --- MCA component btl:openi

Re: [OMPI users] Openmpi not using IB and no warning message

2009-10-15 Thread Sangamesh B
n MPI processes. However, our runtime is still allowed to use TCP, and > this is what you see on your netstat. These are not performance critical > communications (i.e. they only start up the job, distribute the contact > information and so on). > > Have you run the IB tests to validate th
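For reference, the low-level IB validation mentioned above is typically done with the OFED diagnostics (a sketch; which utilities are present depends on the OFED installation, and server-node is a placeholder hostname):

    $ ibv_devinfo                  # port state should read PORT_ACTIVE
    $ ibstat                       # link layer and rate per port
    $ ibv_rc_pingpong              # run on one node (server side)
    $ ibv_rc_pingpong server-node  # run on a second node; exercises RC traffic end to end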

Re: [OMPI users] Openmpi not using IB and no warning message

2009-10-12 Thread Sangamesh B
Any hint for the previous mail? Does Open MPI-1.3.3 support only certain versions of OFED? Or is any version OK? On Sun, Oct 11, 2009 at 3:55 PM, Sangamesh B wrote: > Hi, > > A fortran application is installed with Intel Fortran 10.1, MKL-10 and > Openmpi-1.3.3 on a Rocks-5

[OMPI users] Openmpi not using IB and no warning message

2009-10-11 Thread Sangamesh B
Hi, A fortran application is installed with Intel Fortran 10.1, MKL-10 and Openmpi-1.3.3 on a Rocks-5.1 HPC Linux cluster. The jobs are not scaling when more than one node is used. The cluster has dual-processor quad-core Intel Xeon (E5472) @ 3.00 GHz nodes (total 8 cores per node, 16 GB RAM) and Infiniba

[OMPI users] job fails with "Signal: Bus error (7)"

2009-10-01 Thread Sangamesh B
Hi, A fortran application which is compiled with ifort-10.1 and Open MPI 1.3.1 on CentOS 5.2 fails after running for 4 days with the following error message: [compute-0-7:25430] *** Process received signal *** [compute-0-7:25433] *** Process received signal *** [compute-0-7:25433] Signal: Bus error

[OMPI users] Job fails after hours of running on a specific node

2009-09-20 Thread Sangamesh B
Dear all, The CPMD application which is compiled with OpenMPI-1.3 (Intel 10.1 compilers) on CentOS-4.5 fails only when a specific node, i.e. node-0-2, is involved, but runs well on other nodes. Initially the job failed after 5-10 mins (on node-0-2 + some other nodes). After googling the error,

Re: [OMPI users] Lower performance on a Gigabit node compared toinfiniband node

2009-03-12 Thread Sangamesh B
ould be the differentiating factors. > > The standard wat32 benchmark is a good test for a single node. You can find > our benchmarking results here if you want to compare yours > http://www.cse.scitech.ac.uk/disco/dbd/index.html > > Regards, > > INK > > 2009/3/10 Sangames

Re: [OMPI users] Lower performance on a Gigabit node compared toinfiniband node

2009-03-10 Thread Sangamesh B
> access patterns, particularly across UMA machines like clovertown and > follow-on intel architectures can really get bogged down by the RAM > bottleneck (all 8 cores hammering on memory simultaneously via a single > memory bus). > > > > On Mar 9, 2009, at 10:30 AM,

[OMPI users] Lower performance on a Gigabit node compared to infiniband node

2009-03-09 Thread Sangamesh B
Dear Open MPI team, With Open MPI-1.3, the fortran application CPMD is installed on a Rocks-4.3 cluster - dual-processor quad-core Xeon @ 3 GHz (8 cores per node). Two jobs (4-process jobs) are run on two nodes separately - one node has an IB connection (4 GB RAM) and the other node has gi

Re: [OMPI users] Low performance of Open MPI-1.3 over Gigabit

2009-03-05 Thread Sangamesh B
difference between cpu time and elapsed time? Is >> your >> code doing any file IO or maybe waiting for one of the processors? Do you >> use >> non-blocking communication wherever possible? >> >> Regards, >> >> Mattijs >> >> On Wednesday

Re: [OMPI users] Low performance of Open MPI-1.3 over Gigabit

2009-03-04 Thread Sangamesh B
2.23 SECONDS No of nodes: 6, cores used per node: 4, total cores: 6*4 = 24. CPU TIME: 0 HOURS 51 MINUTES 50.41 SECONDS. ELAPSED TIME: 6 HOURS 6 MINUTES 38.67 SECONDS. Any help/suggestions to diagnose this problem? Thanks, Sangamesh On Wed, Feb 25, 2009 at 12:51 PM, Sangamesh B

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-02-26 Thread Sangamesh B
Hello Reuti, I'm sorry for the late response. On Mon, Jan 26, 2009 at 7:11 PM, Reuti wrote: > Am 25.01.2009 um 06:16 schrieb Sangamesh B: > >> Thanks Reuti for the reply. >> >> On Sun, Jan 25, 2009 at 2:22 AM, Reuti wrote: >>> >>> Am 24.01.2

[OMPI users] Low performance of Open MPI-1.3 over Gigabit

2009-02-25 Thread Sangamesh B
Dear All, A fortran application is installed with Open MPI-1.3 + Intel compilers on a Rocks-4.3 cluster with Intel Xeon dual-socket quad-core processors @ 3 GHz (8 cores/node). The times consumed for different tests over Gigabit-connected nodes are as follows: (Each node has 8 GB memory).

Re: [OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-02-02 Thread Sangamesh B
On Mon, Feb 2, 2009 at 12:15 PM, Reuti wrote: > Am 02.02.2009 um 05:44 schrieb Sangamesh B: > >> On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote: >>> >>> Am 01.02.2009 um 16:00 schrieb Sangamesh B: >>> >>>> On Sat, Jan 31, 2009 at 6:27 PM, Re

Re: [OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-02-01 Thread Sangamesh B
On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote: > Am 01.02.2009 um 16:00 schrieb Sangamesh B: > >> On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote: >>> >>> Am 31.01.2009 um 08:49 schrieb Sangamesh B: >>> >>>> On Fri, Jan 30, 2009 at 10:20 PM, Re

Re: [OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-02-01 Thread Sangamesh B
On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote: > Am 31.01.2009 um 08:49 schrieb Sangamesh B: > >> On Fri, Jan 30, 2009 at 10:20 PM, Reuti >> wrote: >>> >>> Am 30.01.2009 um 15:02 schrieb Sangamesh B: >>> >>>> Dear Open MPI, >>>&

Re: [OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-01-31 Thread Sangamesh B
On Fri, Jan 30, 2009 at 10:20 PM, Reuti wrote: > Am 30.01.2009 um 15:02 schrieb Sangamesh B: > >> Dear Open MPI, >> >> Do you have a solution for the following problem of Open MPI (1.3) >> when run through Grid Engine. >> >> I changed global exe

[OMPI users] Fwd: [GE users] Open MPI job fails when run thru SGE

2009-01-30 Thread Sangamesh B
Dear Open MPI, Do you have a solution for the following problem of Open MPI (1.3) when run through Grid Engine? I changed the global execd params with H_MEMORYLOCKED=infinity and restarted sgeexecd on all nodes. But still the problem persists: $ cat err.77.CPMD-OMPI ssh_exchange_identification:
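For context, the two usual forms of this memlock fix look roughly like the following (a sketch, assuming a classic SGE layout; file locations vary by install):

    # Option 1: global execd params (what the post above attempts);
    # edit via qconf and restart sgeexecd on every compute node:
    $ qconf -mconf
    execd_params             H_MEMORYLOCKED=infinity

    # Option 2: raise the limit near the top of the sgeexecd startup
    # script itself, so daemon children inherit it:
    ulimit -l unlimited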

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-25 Thread Sangamesh B
s: between master & node: works fine but with some delay. between nodes: works fine, no delay. From the command line the Open MPI jobs ran with no error, even when the master node is not used in the hostfile. Thanks, Sangamesh > -- Reuti > > >> Jeremy Stout >> >> On Sat, J

Re: [OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-24 Thread Sangamesh B
ding "ulimit -l unlimited" near > the top of the SGE startup script on the computation nodes and > restarting SGE on every node. > > Jeremy Stout > > On Sat, Jan 24, 2009 at 6:06 AM, Sangamesh B wrote: >> Hello all, >> >> Open MPI 1.3 is installed on Rocks

[OMPI users] Ompi runs thru cmd line but fails when run thru SGE

2009-01-24 Thread Sangamesh B
Hello all, Open MPI 1.3 is installed on a Rocks 4.3 Linux cluster with SGE support, i.e. using --with-sge. But ompi_info shows only one component: # /opt/mpi/openmpi/1.3/intel/bin/ompi_info | grep gridengine MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.3) Is this ri

Re: [OMPI users] Cluster with IB hosts and Ethernet hosts

2009-01-23 Thread Sangamesh B
Any solution for the following problem? On Fri, Jan 23, 2009 at 7:58 PM, Sangamesh B wrote: > On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres wrote: >> On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote: >> >>> We've a cluster with 23 nodes connected to IB switch

Re: [OMPI users] Cluster with IB hosts and Ethernet hosts

2009-01-23 Thread Sangamesh B
On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres wrote: > On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote: > >> We've a cluster with 23 nodes connected to IB switch and 8 nodes >> have connected to ethernet switch. Master node is also connected to IB >> switc

[OMPI users] Cluster with IB hosts and Ethernet hosts

2009-01-22 Thread Sangamesh B
Hello all, We've a cluster with 23 nodes connected to an IB switch and 8 nodes connected to an ethernet switch. The master node is also connected to the IB switch. SGE (with tight integration, -pe orte) is used for parallel/serial job submission. Open MPI-1.3 is installed on the master node with IB suppo
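When a job has to span both the IB and the ethernet-only nodes, the usual approach (a sketch; the hostfile names, process counts and ./app here are made up) is to restrict the run to transports that every node shares:

    # run across all nodes over TCP only:
    $ mpirun --mca btl tcp,self,sm -np 31 -hostfile all_nodes ./app
    # run on the 23 IB nodes with IB forced on:
    $ mpirun --mca btl openib,self,sm -np 23 -hostfile ib_nodes ./app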

[OMPI users] HP CQ with status LOCAL LENGTH ERROR

2008-12-29 Thread Sangamesh B
Hello all, The MPI-Blast-PIO-1.5.0 is installed with Open MPI 1.2.8 + intel 10 compilers on Rocks-4.3 + Voltaire Infiniband + Voltaire Grid stack OFA roll. The 8 process parallel job is submitted through SGE: $ cat sge_submit.sh #!/bin/bash #$ -N OMPI-Blast-Job #$ -S /bin/bash #$ -cwd #$ -e
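The script is cut off above; a minimal complete script of this shape might look like the following (a sketch: the error/output file names, the 'orte' PE, the slot count and the mpiblast arguments are all assumptions):

    #!/bin/bash
    #$ -N OMPI-Blast-Job
    #$ -S /bin/bash
    #$ -cwd
    #$ -e err.$JOB_ID
    #$ -o out.$JOB_ID
    #$ -pe orte 8
    # with tight SGE integration, mpirun takes the host list from SGE:
    mpirun -np $NSLOTS mpiblast -p blastp -d nr -i query.fasta -o result.txt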

Re: [OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-24 Thread Sangamesh B
23, 2008 at 4:45 PM, Reuti wrote: > Hi, > > Am 23.12.2008 um 12:03 schrieb Sangamesh B: > >> Hello, >> >> I've compiled MPIBLAST-1.5.0-pio app on Rocks 4.3,Voltaire >> infiniband based Linux cluster using Open MPI-1.2.8 + intel 10 >> compilers.

[OMPI users] mpiblast + openmpi + gridengine job fails to run

2008-12-23 Thread Sangamesh B
Hello, I've compiled the MPIBLAST-1.5.0-pio app on a Rocks 4.3, Voltaire infiniband based Linux cluster using Open MPI-1.2.8 + intel 10 compilers. The job is not running. Let me explain the configs: SGE job script: $ cat sge_submit.sh #!/bin/bash #$ -N OMPI-Blast-Job #$ -S /bin/bash #$ -cwd

Re: [OMPI users] Problem with feupdateenv

2008-12-10 Thread Sangamesh B
ed-intel' to your compile flags or > command line and that should get rid of that, if it bugs you. Someone else > can, I'm sure, explain in far more detail what the issue there is. > > Hope that helps.. if not, post the output of 'ldd hellompi' here, as well > as an &
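The truncated flag above appears to be '-shared-intel'; applied to the compile command from the original post, the fix would look like this (a sketch, assuming Intel 10.x compilers, where '-shared-intel' superseded the older '-i-dynamic' spelling):

    $ mpicc hellompi.c -o hellompi -shared-intel
    # links the Intel runtime libraries (libimf.so etc.) dynamically,
    # which silences the feupdateenv warning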

[OMPI users] Problem with feupdateenv

2008-12-07 Thread Sangamesh B
Hello all, Installed Open MPI 1.2.8 with Intel C++ compilers on a CentOS 4.5 based Rocks 4.3 linux cluster (& Voltaire infiniband). Installation was smooth. The following warning occurred during compilation: # mpicc hellompi.c -o hellompi /opt/intel/cce/10.1.018/lib/libimf.so: warning: warning: feup

[OMPI users] OpenMPI-1.2.7 + SGE

2008-11-04 Thread Sangamesh B
Hi all, In Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it gets installed through rpm. # /opt/openmpi/bin/ompi_info | grep gridengine MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6) MCA pls: gridengine (MCA v1.0, API v1.3, Compon

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-25 Thread Sangamesh B
On Sat, Oct 25, 2008 at 12:33 PM, Sangamesh B wrote: > On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh wrote: >> Sangamesh B wrote: >> >>> I reinstalled all the software with -O3 optimization. Following are the >>> performance numbers for a 4 process job on a single no

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-25 Thread Sangamesh B
On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh wrote: > Sangamesh B wrote: > >> I reinstalled all the software with -O3 optimization. Following are the >> performance numbers for a 4 process job on a single node: >> >> MPICH2: 26 m 54 s >> OpenMPI: 24 m 39 s

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-15 Thread Sangamesh B
On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins wrote: > > Hi guys, > > On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote: > >> Actually I had a much differnt results, >> >> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7 >> pgi/7.2 >> mpich2 gcc >> > >For some reason

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-10 Thread Sangamesh B
985 > > > > On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote: > > >> >> On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote: >> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote: >> >> Make sure you don't use a "debug" build of Open M

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote: > On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote: > > Make sure you don't use a "debug" build of Open MPI. If you use trunk, the >> build system detects it and turns on debug by default. It really kills >> performance. --disable-debug wi
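A quick way to check whether a given installation is a debug build (a sketch; the exact ompi_info wording can differ between versions):

    $ ompi_info | grep -i debug
      Internal debug support: no    # "yes" would indicate a debug build
    # rebuild without debug support if needed:
    $ ./configure --disable-debug --prefix=/opt/ompi ...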

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-09 Thread Sangamesh B
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote: > > Hi guys, > > [From Eugene Loh:] > >> OpenMPI - 25 m 39 s. >>> MPICH2 - 15 m 53 s. >>> >> With regards to your issue, do you have any indication when you get that >> 25m39s timing if there is a grotesque amount of time being spent in MPI >

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
FYI, attached here are the OpenMPI install details. On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B wrote: > > > On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote: > >> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: >> >>I wanted to switch from mpich2/mvapich2 to Op

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote: > On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote: > >I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI >> supports both ethernet and infiniband. Before doing that I tested an >> application

Re: [OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
ro_bench_8p OpenMPI: $ time /opt/ompi127/bin/mpirun -machinefile ./mach -np 8 /opt/apps/gromacs333_ompi/bin/mdrun_mpi | tee gromacs_openmpi_8process > > > Brock Palen > www.umich.edu/~brockp > Center for Advanced Computing > bro...@umich.edu

[OMPI users] Performance: MPICH2 vs OpenMPI

2008-10-08 Thread Sangamesh B
Hi All, I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI supports both ethernet and infiniband. Before doing that I tested an application 'GROMACS' to compare the performance of MPICH2 & OpenMPI. Both have been compiled with GNU compilers. After this benchmark, I came to know