[OMPI users] Calculate time spent on non blocking communication?

2011-02-01 Thread Bibrak Qamar
Hello All,

I am using non-blocking send and receive, and I want to calculate the time
the communication took. Is there a method or a way to do this with
Open MPI?

Thanks
Bibrak Qamar
Undergraduate Student BIT-9
Member Center for High Performance Scientific Computing
NUST-School of Electrical Engineering and Computer Science.


Re: [OMPI users] Calculate time spent on non blocking communication?

2011-02-01 Thread Eugene Loh

Bibrak Qamar wrote:


Hello All,

I am using non-blocking send and receive, and I want to calculate the 
time the communication took. Is there a method or a way to do 
this with Open MPI?


You probably have to start by defining what you mean by "the time it 
took for the communication".  Anyhow, the Peruse instrumentation in OMPI 
might help.


Re: [OMPI users] Calculate time spent on non blocking communication?

2011-02-01 Thread Gustavo Correa

On Feb 1, 2011, at 1:09 AM, Bibrak Qamar wrote:

> Hello All,
> 
> I am using non-blocking send and receive, and I want to calculate the time 
> the communication took. Is there a method or a way to do this with 
> Open MPI?
> 
> Thanks
> Bibrak Qamar

About the same as with blocking communication, I guess.

Would something like this work for you?

t_start = MPI_Wtime()
call MPI_Isend(...)
...
call MPI_Irecv(...)
...
call MPI_Waitall(...)
t_end = MPI_Wtime()
print *, 'walltime = ', t_end - t_start
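Fleshed out, the outline above might look like the following self-contained Fortran sketch. It is illustrative only: it assumes exactly two ranks exchanging a single integer, and all names (t0, t1, other, etc.) are made up for the example.

```fortran
program time_nonblocking
   use mpi
   implicit none
   integer :: ierr, irank, other, sendbuf, recvbuf
   integer :: requests(2), statuses(MPI_STATUS_SIZE, 2)
   double precision :: t0, t1

   call MPI_INIT(ierr)
   call MPI_COMM_RANK(MPI_COMM_WORLD, irank, ierr)
   other = 1 - irank        ! partner rank; assumes exactly 2 ranks
   sendbuf = irank

   t0 = MPI_WTIME()
   call MPI_ISEND(sendbuf, 1, MPI_INTEGER, other, 0, &
                  MPI_COMM_WORLD, requests(1), ierr)
   call MPI_IRECV(recvbuf, 1, MPI_INTEGER, other, 0, &
                  MPI_COMM_WORLD, requests(2), ierr)
   ! ... independent computation could overlap here ...
   call MPI_WAITALL(2, requests, statuses, ierr)
   t1 = MPI_WTIME()

   print *, 'rank', irank, 'walltime = ', t1 - t0
   call MPI_FINALIZE(ierr)
end program time_nonblocking
```

Note that this measures the elapsed time from posting the requests to their completion, which includes any computation placed between the calls, so it only equals "communication time" if nothing else happens in between.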

My two cents,
Gus Correa


[OMPI users] printing text fixes a problem?

2011-02-01 Thread abc def

Hello,

I'm having trouble with some MPI programming in Fortran, using openmpi.
It seems that my program doesn't work unless I print some unrelated text to the 
screen. For example, if I have this situation:

*** hundreds of lines cut ***
IF (irank .eq. 0) THEN
CALL print_results1(variable)
CALL print_results2(more_variable)
END IF
print *, "done", irank
CALL MPI_FINALIZE(ierr)
END PROGRAM calculation

The results are not printed unless I include this "print done irank" 
penultimate line.
Also, despite seeing that all ranks reach the print statement, the program 
hangs, as if they have not all reached MPI_FINALIZE.

Can anyone help me? Why does it do this?

I have also had many cases where the program would crash if I didn't include a print 
statement in a loop. I've been doing Fortran programming for a while, and this 
is my nightmare debugging scenario: I've never been able to figure out why 
simply printing statements magically fixes the program, and I usually 
end up going back to a serial solution, which is really slow.

If anyone might be able to help me, I would be really really grateful!!

Thank you.

Tom

  

Re: [OMPI users] printing text fixes a problem?

2011-02-01 Thread David Zhang
According to the mpi_finalize documentation, a call to mpi_finalize
terminates all processes.  I have run into this problem before, where one
process calls mpi_finalize before the other processes reach the same line of
code, causing errors/hang-ups.  Putting an mpi_barrier(mpi_comm_world) before
mpi_finalize would do the trick.
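A sketch of that suggestion, mirroring the quoted program (names like irank and ierr come from the original snippet; the barrier goes immediately before the finalize):

```fortran
! ... rest of the program ...
print *, "done", irank
call MPI_BARRIER(MPI_COMM_WORLD, ierr)   ! wait until every rank reaches this point
call MPI_FINALIZE(ierr)
end program calculation
```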




-- 
David Zhang
University of California, San Diego


Re: [OMPI users] printing text fixes a problem?

2011-02-01 Thread Jeff Squyres (jsquyres)
That's not quite right - a call to MPI_Finalize does not terminate any 
processes. 

If you're seeing this kind of instability, check the usual suspects such as 
ensuring you have a totally homogeneous environment (same OS, same version of 
OMPI, etc). 

Sent from my PDA. No type good. 



[OMPI users] Open MPI v1.5.1 Windows Installer with Fortran 77 bindings released

2011-02-01 Thread Shiqing Fan
The Open MPI Team, representing a consortium of research, academic, and 
industry partners, is pleased to announce the release of the Open MPI 
version 1.5.1 Windows installers with Fortran 77 bindings. This release 
is a Fortran 77 bindings update to the previous v1.5.1 release. We 
recommend that all users upgrade to this latest version when possible.


The latest Open MPI Windows installers can be downloaded from the main 
Open MPI web site or any of its mirrors (mirrors will be updating shortly).


Many thanks to Damien Hocking who helped us with Intel Fortran compiler 
issues for the Windows binaries.




[OMPI users] heterogenous cluster

2011-02-01 Thread jody
Hi

I have so far used a homogeneous 32-bit cluster.
Now I have added a new machine which is 64-bit.

This means I have to reconfigure Open MPI with `--enable-heterogeneous`, right?
Do I have to do this on every machine?
I don't remember all the options I chose when I first ran
configure - is there a way to find this out?

Thank You
  Jody


Re: [OMPI users] printing text fixes a problem?

2011-02-01 Thread David Zhang
Yes, that was a typo.  MPI_Finalize terminates all MPI processing.




-- 
David Zhang
University of California, San Diego


Re: [OMPI users] printing text fixes a problem?

2011-02-01 Thread Jeff Squyres
On Feb 1, 2011, at 1:03 PM, David Zhang wrote:

> Yes, that was a typo.  mpi_finalize terminates all mpi processings.

Just to nit-pick a little more (sorry!)...

MPI_Finalize terminates all MPI processing...in the process that calls it.  It 
does not terminate MPI processing in other processes until they also call 
MPI_Finalize.

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/




Re: [OMPI users] heterogenous cluster

2011-02-01 Thread David Mathog

> I have so far used a homogeneous 32-bit cluster.
> Now I have added a new machine which is 64-bit.
> 
> This means I have to reconfigure Open MPI with
`--enable-heterogeneous`, right?

Not necessarily.  If you don't need the 64-bit capabilities, you could run
32-bit binaries along with a 32-bit version of OpenMPI.  At least that
approach has worked so far for me.

Regards,

David Mathog
mat...@caltech.edu
Manager, Sequence Analysis Facility, Biology Division, Caltech


[OMPI users] How closely tied is a specific release of OpenMPI to the host operating system and other system software?

2011-02-01 Thread Jeffrey A Cummings
I use OpenMPI on a variety of platforms:  stand-alone servers running 
Solaris on sparc boxes and Linux (mostly CentOS) on AMD/Intel boxes, also 
Linux (again CentOS) on large clusters of AMD/Intel boxes.  These 
platforms all have some version of the 1.3 OpenMPI stream.  I recently 
requested an upgrade on all systems to 1.4.3 (for production work) and 
1.5.1 (for experimentation).  I'm getting a lot of push back from the 
SysAdmin folks claiming that OpenMPI is closely intertwined with the 
specific version of the operating system and/or other system software 
(i.e., Rocks on the clusters).  I need to know if they are telling me the 
truth or if they're just making excuses to avoid the work.  To state my 
question another way:  Apparently each release of Linux and/or Rocks comes 
with some version of OpenMPI bundled in.  Is it dangerous in some way to 
upgrade to a newer version of OpenMPI?  Thanks in advance for any insight 
anyone can provide.

- Jeff

Re: [OMPI users] How closely tied is a specific release of OpenMPI to the host operating system and other system software?

2011-02-01 Thread Richard Walsh

Jeff,

We have 3 Rocks clusters.  While there is a default MPI with each
Rocks release, it is often behind the latest production release, as
you note.

We typically install whatever OpenMPI version we want in a shared space
and ignore the default installed with Rocks.  Sometimes there are standard
Linux libraries that are a bit out of date, which may show up as
"can't finds" in the configuration and/or build of OpenMPI, but there is usually
an easy way around that.  As far as 'closely intertwined' goes, I would say that
is an exaggeration.

It does mean some extra work for someone ... around here it is me ... ;-) ...

rbw

Richard Walsh
Parallel Applications and Systems Manager
CUNY HPC Center, Staten Island, NY
718-982-3319
612-382-4620

Reason does give the heart pause;
As the heart gives reason fits.

Yet, to live where reason always rules;
Is to kill one's heart with wits.



Re: [OMPI users] How closely tied is a specific release of OpenMPI to the host operating system and other system software?

2011-02-01 Thread Doug Reeder
Jeff,

We have similar circumstances and have been able to install and use versions of 
openmpi newer than the one supplied with the OS. It is necessary to have some means of 
path management to ensure that applications build against the desired version 
of openmpi and run with the version of openmpi they were built with. We use the 
module system for this path management. We create modules for each version of 
openmpi and each version of the applications. We then include the appropriate 
openmpi module in the module for the application. Then, when a user loads a 
module for their application, they automatically get the correct version of 
openmpi.

Doug Reeder



Re: [OMPI users] How closely tied is a specific release of OpenMPI to the host operating system and other system software?

2011-02-01 Thread Reuti
On 01.02.2011, at 23:02, Jeffrey A Cummings wrote:

> I use OpenMPI on a variety of platforms:  stand-alone servers running Solaris 
> on sparc boxes and Linux (mostly CentOS) on AMD/Intel boxes, also Linux 
> (again CentOS) on large clusters of AMD/Intel boxes.  These platforms all 
> have some version of the 1.3 OpenMPI stream.  I recently requested an upgrade 
> on all systems to 1.4.3 (for production work) and 1.5.1 (for 
> experimentation).  I'm getting a lot of push back from the SysAdmin folks 
> claiming that OpenMPI is closely intertwined with the specific version of the 
> operating system and/or other system software (i.e., Rocks on the clusters).  
> I need to know if they are telling me the truth or if they're just making 
> excuses to avoid the work.

Maybe ROCKS or whatever provides only one version. Anyway: you can download 
Open MPI, compile it to build into e.g. ~/local/openmpi-1.4.3, adjust your 
PATHs and you are done.

Unless you build it with static libraries, it might in addition be necessary to 
adjust LD_LIBRARY_PATH at runtime.
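For example, a build-and-setup recipe along those lines might look like the following (the prefix, version number, and file names are illustrative, not a tested procedure):

```shell
# unpack the Open MPI source, then build into a private prefix
./configure --prefix=$HOME/local/openmpi-1.4.3
make all install

# then, e.g. in ~/.bashrc, put the new install first on the search paths
export PATH=$HOME/local/openmpi-1.4.3/bin:$PATH
export LD_LIBRARY_PATH=$HOME/local/openmpi-1.4.3/lib:$LD_LIBRARY_PATH
```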

I use most often my own version on the clusters I have access to and disregard 
any installed one.

-- Reuti






Re: [OMPI users] How closely tied is a specific release of OpenMPI to the host operating system and other system software?

2011-02-01 Thread Gustavo Correa


Hi Jeffrey

As others said, Rocks has a default MPI (some version of OpenMPI built with the 
Gnu compilers, with support for Ethernet only) which comes with the "hpc" Rocks roll.
You can use that MPI, but you don't have to.

This doesn't prevent you from installing any other version of OpenMPI (or, actually, 
any other software) with support for whatever you have (e.g. Infiniband, the Torque 
resource manager, compilers other than Gnu, etc).

The right location to install on Rocks is the /share/apps directory of the 
head/frontend node,
which is NFS mounted on the nodes.
It is wise to use subdirectories with names identifying your version somehow,
e.g. /share/apps/ompi-1.4.3/intel-11.1.020, for something compiled with intel 
compilers.

The --prefix=/share/apps/bla/bla option of OpenMPI's configure will put the 
installed directory tree wherever you want.

'configure --help' will list tons of possibilities (e.g. tight coupling with 
Torque or SGE, Infiniband support, etc).

You also need to set the user environment.

A simple-minded way is to prepend the OpenMPI bin directory to the PATH
environment variable (say, in the user's .bashrc/.cshrc file), and the lib 
directory to the LD_LIBRARY_PATH.
Adding share/man to the MANPATH is not mandatory, but helpful.
This is rather inflexible, though, and requires editing those initialization 
files every time you want to switch the MPI version you use.

A much better and more flexible way, as was also mentioned, is to use environment 
modules, but your Sys Admin must be willing to learn how to write the corresponding 
module files (in Tcl jargon).
This will allow you to switch across different versions by just issuing a 
command line like 'module switch path/to/old/version  path/to/new/version'.
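A minimal module file for the layout suggested above might look like this (the paths reuse the hypothetical install directory from earlier in this message):

```tcl
#%Module1.0
## openmpi-1.4.3 built with Intel 11.1.020 (paths illustrative)
prepend-path PATH            /share/apps/ompi-1.4.3/intel-11.1.020/bin
prepend-path LD_LIBRARY_PATH /share/apps/ompi-1.4.3/intel-11.1.020/lib
prepend-path MANPATH         /share/apps/ompi-1.4.3/intel-11.1.020/share/man
```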

See:
http://modules.sourceforge.net/

I can't speak about Solaris, but it also supports environment modules, if I am 
not mistaken.

I hope this helps,
Gus Correa




Re: [OMPI users] How closely tied is a specific release of OpenMPI to the host operating system and other system software?

2011-02-01 Thread Jeff Squyres
On Feb 1, 2011, at 5:02 PM, Jeffrey A Cummings wrote:

> I'm getting a lot of push back from the SysAdmin folks claiming that OpenMPI 
> is closely intertwined with the specific version of the operating system 
> and/or other system software (i.e., Rocks on the clusters).  

I wouldn't say that this is true.  We test across a wide variety of OS's and 
compilers.  I'm sure that there are particular platforms/environments that can 
trip up some kind of problem (it's happened before), but in general, Open MPI 
is pretty portable.

> To state my question another way:  Apparently each release of Linux and/or 
> Rocks comes with some version of OpenMPI bundled in.  Is it dangerous in some 
> way to upgrade to a newer version of OpenMPI?  

Not at all.  Others have said it, but I'm one of the developers and I'll 
reinforce their answers: I regularly have about a dozen different installations 
of Open MPI on my cluster at any given time (all in different stages of 
development -- all installed to different prefixes).  I switch between them 
quite easily by changing my PATH and LD_LIBRARY_PATH (both locally and on 
remote nodes).

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to:
http://www.cisco.com/web/about/doing_business/legal/cri/