Hello All,
I am using non-blocking send and receive, and I want to calculate the time
the communication took. Is there any method or way to do this using
Open MPI?
Thanks
Bibrak Qamar
Undergraduate Student BIT-9
Member Center for High Performance Scientific Computing
NUST-School of Electr
Bibrak Qamar wrote:
> Hello All,
> I am using non-blocking send and receive, and I want to calculate the
> time the communication took. Is there any method or way to do this
> using Open MPI?
You probably have to start by defining what you mean by "the time it
took for the communication".
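For reference, the usual approach is to bracket the calls with MPI_Wtime,
keeping in mind that posting a non-blocking operation and waiting for it to
complete are two different measurements. A minimal Fortran sketch, assuming
exactly two ranks (program and variable names are illustrative, not from the
original post):

    program time_isend
      use mpi
      implicit none
      integer :: ierr, irank, req
      double precision :: t0, t_post, t_done, buf

      call MPI_Init(ierr)
      call MPI_Comm_rank(MPI_COMM_WORLD, irank, ierr)
      buf = dble(irank)

      t0 = MPI_Wtime()
      if (irank == 0) then
         call MPI_Isend(buf, 1, MPI_DOUBLE_PRECISION, 1, 0, &
                        MPI_COMM_WORLD, req, ierr)
      else
         call MPI_Irecv(buf, 1, MPI_DOUBLE_PRECISION, 0, 0, &
                        MPI_COMM_WORLD, req, ierr)
      end if
      t_post = MPI_Wtime() - t0    ! time to post the operation only

      t0 = MPI_Wtime()
      call MPI_Wait(req, MPI_STATUS_IGNORE, ierr)
      t_done = MPI_Wtime() - t0    ! time until the operation completes

      print *, 'rank', irank, 'post:', t_post, 'wait:', t_done
      call MPI_Finalize(ierr)
    end program time_isend

Whether "the time it took" means the post time, the wait time, or their sum
is exactly the definitional question raised above.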
On Feb 1, 2011, at 1:09 AM, Bibrak Qamar wrote:
> Hello All,
>
> I am using non-blocking send and receive, and I want to calculate the time
> the communication took. Is there any method or way to do this using
> Open MPI?
>
> Thanks
> Bibrak Qamar
> Undergraduate Student BIT-9
> Membe
Hello,
I'm having trouble with some MPI programming in Fortran, using Open MPI.
It seems that my program doesn't work unless I print some unrelated text to the
screen. For example, if I have this situation:
*** hundreds of lines cut ***
IF (irank .eq. 0) THEN
CALL print_results1(variable)
According to the mpi_finalize documentation, a call to mpi_finalize
terminates all processes. I have run into this problem before, where one
process calls mpi_finalize before other processes reach the same line of
code and causes errors/hangs. Put an mpi_barrier(mpi_comm_world) before
mpi_finalize.
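In code, that suggestion amounts to something like this (ierr declaration
omitted for brevity; note the claim it rests on is corrected below):

    ! Synchronize all ranks before shutting MPI down.
    call MPI_Barrier(MPI_COMM_WORLD, ierr)
    call MPI_Finalize(ierr)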
That's not quite right - a call to MPI_Finalize does not terminate any
processes.
If you're seeing this kind of instability, check the usual suspects such as
ensuring you have a totally homogeneous environment (same OS, same version of
OMPI, etc).
Sent from my PDA. No type good.
On Feb 1,
The Open MPI Team, representing a consortium of research, academic, and
industry partners, is pleased to announce the release of Open MPI
version 1.5.1 Windows Installers with Fortran 77 bindings. This release
is a Fortran 77 bindings update over the previous v1.5.1 release. We
recommend that
Hi
I have so far used a homogeneous 32-bit cluster.
Now I have added a new machine which is 64-bit.
This means I have to reconfigure Open MPI with `--enable-heterogeneous`, right?
Do I have to do this on every machine?
I don't remember all the options I had chosen when I first did the
configure - is
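For reference, such a rebuild might look like the following (the install
prefix is illustrative, and any other options from the original build would
need to be repeated):

    ./configure --prefix=/opt/openmpi --enable-heterogeneous
    make all install

    # 'ompi_info -a' should list the configure command line of an
    # existing installation, which helps recover forgotten options.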
Yes, that was a typo. mpi_finalize terminates all MPI processing.
On Tue, Feb 1, 2011 at 3:25 AM, Jeff Squyres (jsquyres)
wrote:
> That's not quite right - a call to MPI_Finalize does not terminate any
> processes.
>
> If you're seeing this kind of instability, check the usual suspects such as
On Feb 1, 2011, at 1:03 PM, David Zhang wrote:
> Yes, that was a typo. mpi_finalize terminates all MPI processing.
Just to nit-pick a little more (sorry!)...
MPI_Finalize terminates all MPI processing...in the process that calls it. It
does not terminate MPI processing in other processes
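In other words (an illustrative sketch, not from the thread):

    ! Each rank shuts down its own MPI layer independently.
    call MPI_Finalize(ierr)   ! finalizes MPI in *this* process only
    ! Non-MPI work may continue here; other ranks are unaffected
    ! until they make their own call to MPI_Finalize.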
> I have so far used a homogeneous 32-bit cluster.
> Now I have added a new machine which is 64-bit.
>
> This means I have to reconfigure Open MPI with
> `--enable-heterogeneous`, right?
Not necessarily. If you don't need the 64-bit capabilities you could run
32-bit binaries along with a 32-bit versi
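A hedged sketch of that route - building a 32-bit Open MPI on the 64-bit
node (the exact flags depend on your compilers; prefix is illustrative):

    ./configure CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 \
        --prefix=/opt/openmpi-32
    make all install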
I use Open MPI on a variety of platforms: stand-alone servers running
Solaris on SPARC boxes and Linux (mostly CentOS) on AMD/Intel boxes, also
Linux (again CentOS) on large clusters of AMD/Intel boxes. These
platforms all have some version of the 1.3 Open MPI stream. I recently
requested an u
Jeff,
We have 3 Rocks clusters; while there is a default MPI with each
Rocks release, it is often behind the latest production release, as
you note.
We typically install whatever Open MPI version we want in a shared space
and ignore the default installed with Rocks. Sometimes the standard
Linux
Jeff,
We have similar circumstances and have been able to install and use versions of
Open MPI newer than the one supplied with the OS. It is necessary to have some
means of path management to ensure that applications build against the desired
version of Open MPI and run with the version of Open MPI they
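A minimal sketch of that kind of path management (the install prefix is
hypothetical; many sites wrap this in environment modules):

    # Select a specific Open MPI build for compiling and running.
    export PATH=/opt/openmpi-1.5.1/bin:$PATH
    export LD_LIBRARY_PATH=/opt/openmpi-1.5.1/lib:$LD_LIBRARY_PATH

    # 'which mpicc' and 'mpicc --showme' can confirm which build and
    # libraries a compile will actually pick up.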
On 01.02.2011 at 23:02, Jeffrey A Cummings wrote:
> I use Open MPI on a variety of platforms: stand-alone servers running Solaris
> on SPARC boxes and Linux (mostly CentOS) on AMD/Intel boxes, also Linux
> (again CentOS) on large clusters of AMD/Intel boxes. These platforms all
> have some ve
On Feb 1, 2011, at 5:02 PM, Jeffrey A Cummings wrote:
> I use Open MPI on a variety of platforms: stand-alone servers running Solaris
> on SPARC boxes and Linux (mostly CentOS) on AMD/Intel boxes, also Linux
> (again CentOS) on large clusters of AMD/Intel boxes. These platforms all
> have som
On Feb 1, 2011, at 5:02 PM, Jeffrey A Cummings wrote:
> I'm getting a lot of pushback from the SysAdmin folks claiming that Open MPI
> is closely intertwined with the specific version of the operating system
> and/or other system software (i.e., Rocks on the clusters).
I wouldn't say that thi