Thanks, Brett, for the useful information.
On Wed, Jan 27, 2010 at 12:40 PM, Brett Pemberton wrote:
>
> - "Sangamesh B" wrote:
>
> > Hi all,
> >
> > If an infiniband network is configured successfully, how to confirm
> > that Open MPI is u
Hi all,
If an InfiniBand network is configured successfully, how can I confirm
that Open MPI is using InfiniBand rather than the other available ethernet network?
In earlier versions, I've seen that if OMPI was running over ethernet, it gave a
warning that it was running on a slower network. Is this still available in 1.3?
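One quick way to check, as a sketch (the executable and hostfile names below are placeholders): restrict the BTL list so the run aborts if InfiniBand cannot be used, or exclude TCP and see whether the job still runs across nodes.
# Fails with an "unreachable" error if the openib BTL is not usable:
$ mpirun --mca btl openib,sm,self -np 8 -hostfile ./hosts ./a.out
# Alternatively, exclude TCP; if the job still runs across nodes, it is using IB:
$ mpirun --mca btl ^tcp -np 8 -hostfile ./hosts ./a.out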
Hi,
What are the advantages of the progress-threads feature?
Thanks,
Sangamesh
On Fri, Jan 8, 2010 at 10:13 PM, Ralph Castain wrote:
> Yeah, the system doesn't currently support enable-progress-threads. It is a
> two-fold problem: ORTE won't work that way, and some parts of the MPI layer
> w
Hi,
MPICH2 has different process managers: MPD, SMPD, gforker, etc. Is
Open MPI's startup daemon orted similar to MPICH2's SMPD, or is it something
else?
Thanks,
Sangamesh
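For what it's worth, a simple way to see the runtime daemon in action, sketched with placeholder node and program names: start a job that spans nodes and look for orted on a remote node.
$ mpirun -np 8 -hostfile ./hosts ./a.out &
$ ssh node-0-1 "ps -ef | grep [o]rted"   # the [o] trick keeps grep from matching itself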
Ibdiagnet, it is an open source IB
> network diagnostic tool :
> http://linux.die.net/man/1/ibdiagnet
> The tool is part of OFED distribution.
>
> Pasha.
>
>
> Sangamesh B wrote:
>
>> Dear all,
>> The CPMD application which is compiled with OpenMPI-1.3 (Intel
Hi all,
The compilation of a Fortran application - CPMD-3.13.2 - with OpenMP +
OpenMPI-1.3.3 + ifort-10.1 + MKL-10.0 is failing with the following error on a
Rocks-5.1 Linux cluster:
/lib/cpp -P -C -traditional -D__Linux -D__PGI -DFFT_DEFAULT -DPOINTER8
-DLINUX_IFC -DPARALLEL -DMYRINET ./potfor
support team.
http://software.intel.com/en-us/forums/intel-math-kernel-library/topic/69104/
Is it possible to get it installed?
Thanks,
Sangamesh
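In case it helps anyone hitting the same MKL link problem, a typical link line for MKL 10.0 (em64t) with ifort looks roughly like the following; the object name and the MKL path are assumptions for a default install and may differ on your system.
$ ifort objects.o -o cpmd.x \
    -L/opt/intel/mkl/10.0/lib/em64t \
    -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -lpthread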
> The current version is 1.3.3!
>
> Jody
>
> On Tue, Oct 20, 2009 at 2:48 PM, Sangamesh B wrote:
> > Hi,
> >
> > I
Hi,
It's required here to install Open MPI 1.2 on an HPC cluster with CentOS
5.2 Linux, a Mellanox IB card, switch, and OFED-1.4.
But configure is failing with:
[root@master openmpi-1.2]# ./configure --prefix=/opt/mpi/openmpi/1.2/intel
--with-openib=/usr
..
...
--- MCA component btl:openi
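A sketch of what to check first (the paths are the usual OFED locations and may differ): the verbs headers and libraries must exist where --with-openib points, and config.log shows the exact test that failed.
$ ls /usr/include/infiniband/verbs.h /usr/lib64/libibverbs.so
$ rpm -qa | grep libibverbs          # libibverbs-devel must be installed for configure to succeed
$ grep -i openib config.log | less   # after a failed configure run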
n MPI processes. However, our runtime is still allowed to use TCP, and
> this is what you see in your netstat. These are not performance-critical
> communications (i.e. only starting up the job, distributing the contact
> information and so on).
>
> Have you run the IB tests to validate th
Any hints on the previous mail?
Does Open MPI 1.3.3 support only a limited set of OFED versions?
Or is any version OK?
On Sun, Oct 11, 2009 at 3:55 PM, Sangamesh B wrote:
> Hi,
>
> A fortran application is installed with Intel Fortran 10.1, MKL-10 and
> Openmpi-1.3.3 on a Rocks-5
Hi,
A Fortran application is installed with Intel Fortran 10.1, MKL-10 and
OpenMPI-1.3.3 on a Rocks-5.1 HPC Linux cluster. The jobs are not scaling
when more than one node is used. The cluster has dual-processor Intel quad-core
Xeon (E5472) @ 3.00 GHz nodes (8 cores and 16 GB RAM per node) and
Infiniba
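One way to rule out the interconnect first, sketched with placeholder file names: force IB-only transport so the run aborts if the openib BTL cannot be used, then compare single-node and two-node timings of the same small test case.
$ mpirun --mca btl openib,sm,self -np 8  -hostfile one_node  ./app input   # 1 node, 8 cores
$ mpirun --mca btl openib,sm,self -np 16 -hostfile two_nodes ./app input   # 2 nodes, 16 cores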
Hi,
A Fortran application compiled with ifort 10.1 and Open MPI
1.3.1 on CentOS 5.2 fails after running for 4 days with the following error
message:
[compute-0-7:25430] *** Process received signal ***
[compute-0-7:25433] *** Process received signal ***
[compute-0-7:25433] Signal: Bus error
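A sketch of how to start diagnosing this (the core file name depends on the node's core_pattern setting and is only illustrative): enable core dumps in the job environment and pull a backtrace afterwards.
ulimit -c unlimited        # in the job script, before mpirun
$ gdb ./my_app core.25430  # on the node that produced the core
(gdb) bt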
Dear all,
The CPMD application, which is compiled with OpenMPI-1.3 (Intel 10.1
compilers) on CentOS-4.5, fails only when a specific node, i.e. node-0-2, is
involved, but runs well on other nodes.
Initially the job failed after 5-10 minutes (on node-0-2 plus some other
nodes). After googling the error,
ould be the differentiating factors.
>
> The standard wat32 benchmark is a good test for a single node. You can find
> our benchmarking results here if you want to compare yours
> http://www.cse.scitech.ac.uk/disco/dbd/index.html
>
> Regards,
>
> INK
>
> 2009/3/10 Sangames
> access patterns, particularly across UMA machines like Clovertown and
> follow-on Intel architectures, can really get bogged down by the RAM
> bottleneck (all 8 cores hammering on memory simultaneously via a single
> memory bus).
>
>
>
> On Mar 9, 2009, at 10:30 AM,
Dear Open MPI team,
With Open MPI 1.3, the Fortran application CPMD is installed on a
Rocks-4.3 cluster - dual-processor quad-core Xeon @ 3 GHz (8 cores
per node).
Two jobs (4 processes each) are run separately on two nodes - one node
has an IB connection (4 GB RAM) and the other node has gi
difference between cpu time and elapsed time? Is
>> your
>> code doing any file IO or maybe waiting for one of the processors? Do you
>> use
>> non-blocking communication wherever possible?
>>
>> Regards,
>>
>> Mattijs
>>
>> On Wednesday
2.23 SECONDS
No. of nodes: 6, cores used per node: 4, total cores: 6*4 = 24
CPU TIME :     0 HOURS 51 MINUTES 50.41 SECONDS
ELAPSED TIME : 6 HOURS  6 MINUTES 38.67 SECONDS
Any help/suggestions to diagnose this problem?
Thanks,
Sangamesh
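A simple way to see where the elapsed time goes, sketched with a placeholder node name: watch the compute nodes while the job runs; a large "wa" (I/O wait) or "id" (idle) column in vmstat points to file I/O or processes waiting on each other rather than computation.
$ ssh compute-0-3 vmstat 5   # sample CPU/IO statistics every 5 seconds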
On Wed, Feb 25, 2009 at 12:51 PM, Sangamesh B
Hello Reuti,
I'm sorry for the late response.
On Mon, Jan 26, 2009 at 7:11 PM, Reuti wrote:
> Am 25.01.2009 um 06:16 schrieb Sangamesh B:
>
>> Thanks Reuti for the reply.
>>
>> On Sun, Jan 25, 2009 at 2:22 AM, Reuti wrote:
>>>
>>> Am 24.01.2
Dear All,
A Fortran application is installed with Open MPI 1.3 + Intel
compilers on a Rocks-4.3 cluster with dual-socket quad-core Intel Xeon
processors @ 3 GHz (8 cores/node).
The times consumed for the different tests over Gigabit-connected
nodes are as follows (each node has 8 GB of memory):
On Mon, Feb 2, 2009 at 12:15 PM, Reuti wrote:
> Am 02.02.2009 um 05:44 schrieb Sangamesh B:
>
>> On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote:
>>>
>>> Am 01.02.2009 um 16:00 schrieb Sangamesh B:
>>>
>>>> On Sat, Jan 31, 2009 at 6:27 PM, Re
On Sun, Feb 1, 2009 at 10:37 PM, Reuti wrote:
> Am 01.02.2009 um 16:00 schrieb Sangamesh B:
>
>> On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote:
>>>
>>> Am 31.01.2009 um 08:49 schrieb Sangamesh B:
>>>
>>>> On Fri, Jan 30, 2009 at 10:20 PM, Re
On Sat, Jan 31, 2009 at 6:27 PM, Reuti wrote:
> Am 31.01.2009 um 08:49 schrieb Sangamesh B:
>
>> On Fri, Jan 30, 2009 at 10:20 PM, Reuti
>> wrote:
>>>
>>> Am 30.01.2009 um 15:02 schrieb Sangamesh B:
>>>
>>>> Dear Open MPI,
>>>
On Fri, Jan 30, 2009 at 10:20 PM, Reuti wrote:
> Am 30.01.2009 um 15:02 schrieb Sangamesh B:
>
>> Dear Open MPI,
>>
>> Do you have a solution for the following problem of Open MPI (1.3)
>> when run through Grid Engine.
>>
>> I changed global exe
Dear Open MPI,
Do you have a solution for the following problem with Open MPI (1.3)
when run through Grid Engine?
I changed the global execd params with H_MEMORYLOCKED=infinity and
restarted sgeexecd on all the nodes.
But the problem still persists:
$cat err.77.CPMD-OMPI
ssh_exchange_identification:
s:
between master & node: works fine but with some delay.
between nodes: works fine, no delay
From the command line the Open MPI jobs ran with no error, even
when the master node is not used in the hostfile.
Thanks,
Sangamesh
> -- Reuti
>
>
>> Jeremy Stout
>>
>> On Sat, J
ding "ulimit -l unlimited" near
> the top of the SGE startup script on the computation nodes and
> restarting SGE on every node.
>
> Jeremy Stout
>
> On Sat, Jan 24, 2009 at 6:06 AM, Sangamesh B wrote:
>> Hello all,
>>
>> Open MPI 1.3 is installed on Rocks
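Following up on the quoted ulimit advice, a quick way to confirm that the new locked-memory limit actually reaches SGE jobs (the submit options and output file name are only a sketch):
$ qsub -b y -cwd -j y -o memlock.out bash -c 'ulimit -l'
$ cat memlock.out            # should print "unlimited" once the change is in effect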
Hello all,
Open MPI 1.3 is installed on a Rocks 4.3 Linux cluster with support for
SGE, i.e. using --with-sge.
But ompi_info shows only one gridengine component:
# /opt/mpi/openmpi/1.3/intel/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v2.0, API v2.0, Component v1.3)
Is this ri
Any solution for the following problem?
On Fri, Jan 23, 2009 at 7:58 PM, Sangamesh B wrote:
> On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres wrote:
>> On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote:
>>
>>> We've a cluster with 23 nodes connected to IB switch
On Fri, Jan 23, 2009 at 5:41 PM, Jeff Squyres wrote:
> On Jan 22, 2009, at 11:26 PM, Sangamesh B wrote:
>
>> We've a cluster with 23 nodes connected to IB switch and 8 nodes
>> have connected to ethernet switch. Master node is also connected to IB
>> switc
Hello all,
We've a cluster with 23 nodes connected to an IB switch, and 8 nodes
are connected to an ethernet switch. The master node is also connected to the IB
switch. SGE (with tight integration, -pe orte) is used for
parallel/serial job submission.
Open MPI 1.3 is installed on the master node with IB suppo
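For a mixed cluster like this, a sketch of the two usual options (hostfile and application names are placeholders): keep jobs on the IB nodes with IB-only transport, or add TCP so a job can also span the ethernet-only nodes.
$ mpirun --mca btl openib,sm,self -np 16 -hostfile ib_nodes ./app        # IB nodes only; fails fast if IB is unusable
$ mpirun --mca btl openib,tcp,sm,self -np 24 -hostfile all_nodes ./app   # TCP carries traffic to the ethernet-only nodes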
Hello all,
MPI-Blast-PIO-1.5.0 is installed with Open MPI 1.2.8 + Intel 10
compilers on Rocks-4.3 + Voltaire InfiniBand + the Voltaire Grid Stack OFA
roll.
The 8 process parallel job is submitted through SGE:
$ cat sge_submit.sh
#!/bin/bash
#$ -N OMPI-Blast-Job
#$ -S /bin/bash
#$ -cwd
#$ -e
23, 2008 at 4:45 PM, Reuti wrote:
> Hi,
>
> Am 23.12.2008 um 12:03 schrieb Sangamesh B:
>
>> Hello,
>>
>> I've compiled MPIBLAST-1.5.0-pio app on Rocks 4.3,Voltaire
>> infiniband based Linux cluster using Open MPI-1.2.8 + intel 10
>> compilers.
Hello,
I've compiled the MPIBLAST-1.5.0-pio app on a Rocks 4.3, Voltaire
InfiniBand based Linux cluster using Open MPI 1.2.8 + Intel 10
compilers.
The job is not running. Let me explain the configuration:
SGE job script:
$ cat sge_submit.sh
#!/bin/bash
#$ -N OMPI-Blast-Job
#$ -S /bin/bash
#$ -cwd
ed-intel' to your compile flags or
> command line and that should get rid of that, if it bugs you. Someone else
> can, I'm sure, explain in far more detail what the issue there is.
>
> Hope that helps.. if not, post the output of 'ldd hellompi' here, as well
> as an &
Hello all,
Installed Open MPI 1.2.8 with Intel C++ compilers on a CentOS 4.5 based
Rocks 4.3 Linux cluster (& Voltaire InfiniBand). The installation was
smooth.
The following warning occurred during compilation:
# mpicc hellompi.c -o hellompi
/opt/intel/cce/10.1.018/lib/libimf.so: warning: warning: feup
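For reference, this libimf.so message is a warning rather than an error; as the quoted reply suggests, adding the Intel shared-runtime flag to the compile/link line makes it go away (a sketch assuming icc/ifort 10.1; older releases used -i-dynamic instead):
$ mpicc hellompi.c -o hellompi -shared-intel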
Hi all,
In a Rocks-5.0 cluster, OpenMPI-1.2.6 comes by default. I guess it
gets installed through an RPM.
# /opt/openmpi/bin/ompi_info | grep gridengine
MCA ras: gridengine (MCA v1.0, API v1.3, Component v1.2.6)
MCA pls: gridengine (MCA v1.0, API v1.3, Compon
On Sat, Oct 25, 2008 at 12:33 PM, Sangamesh B wrote:
> On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh wrote:
>> Sangamesh B wrote:
>>
>>> I reinstalled all software with -O3 optimization. Following are the
>>> performance numbers for a 4-process job on a single no
On Fri, Oct 24, 2008 at 11:26 PM, Eugene Loh wrote:
> Sangamesh B wrote:
>
>> I reinstalled all software with -O3 optimization. Following are the
>> performance numbers for a 4-process job on a single node:
>>
>> MPICH2: 26 m 54 s
>> OpenMPI: 24 m 39 s
On Fri, Oct 10, 2008 at 10:40 PM, Brian Dobbins wrote:
>
> Hi guys,
>
> On Fri, Oct 10, 2008 at 12:57 PM, Brock Palen wrote:
>
>> Actually I had much different results,
>>
>> gromacs-3.3.1 one node dual core dual socket opt2218 openmpi-1.2.7
>> pgi/7.2
>> mpich2 gcc
>>
>
> For some reason
985
>
>
>
> On Oct 9, 2008, at 8:06 AM, Sangamesh B wrote:
>
>
>>
>> On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote:
>> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>>
>> Make sure you don't use a "debug" build of Open M
On Thu, Oct 9, 2008 at 5:40 AM, Jeff Squyres wrote:
> On Oct 8, 2008, at 5:25 PM, Aurélien Bouteiller wrote:
>
> Make sure you don't use a "debug" build of Open MPI. If you use trunk, the
>> build system detects it and turns on debug by default. It really kills
>> performance. --disable-debug wi
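A quick way to check whether an existing build has internal debugging enabled, and to rebuild without it (the install prefix is a placeholder):
$ ompi_info | grep -i "debug support"    # should report "Internal debug support: no" for performance runs
$ ./configure --prefix=/opt/mpi/openmpi/1.2.7 --disable-debug && make -j4 all install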
On Thu, Oct 9, 2008 at 2:39 AM, Brian Dobbins wrote:
>
> Hi guys,
>
> [From Eugene Loh:]
>
>> OpenMPI - 25 m 39 s.
>>> MPICH2 - 15 m 53 s.
>>>
>> With regards to your issue, do you have any indication when you get that
>> 25m39s timing if there is a grotesque amount of time being spent in MPI
>
FYI, attached here are the OpenMPI install details.
On Wed, Oct 8, 2008 at 7:56 PM, Sangamesh B wrote:
>
>
> On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote:
>
>> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>>
>> I wanted to switch from mpich2/mvapich2 to Op
On Wed, Oct 8, 2008 at 7:16 PM, Jeff Squyres wrote:
> On Oct 8, 2008, at 9:10 AM, Sangamesh B wrote:
>
> I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
>> supports both ethernet and infiniband. Before doing that I tested an
>> application
ro_bench_8p
OpenMPI:
$ time /opt/ompi127/bin/mpirun -machinefile ./mach -np 8
/opt/apps/gromacs333_ompi/bin/mdrun_mpi | tee gromacs_openmpi_8process
>
>
> Brock Palen
> www.umich.edu/~brockp
> Center for Advanced Computing
> bro...@umich.edu
Hi All,
I wanted to switch from mpich2/mvapich2 to OpenMPI, as OpenMPI
supports both ethernet and InfiniBand. Before doing that, I tested the
application GROMACS to compare the performance of MPICH2 and OpenMPI. Both
have been compiled with GNU compilers.
After this benchmark, I came to know