[OMPI users] How to know which task on which node

2009-01-19 Thread gaurav gupta
Hello,

I want to know which task is running on which node. Is there any way to
find this out?
Also, is there any profiling tool provided with Open MPI to measure the
time taken in various steps?

-- 
GAURAV GUPTA
B.Tech III Yr. , Department of Computer Science & Engineering
IT BHU , Varanasi
Contacts
Phone No: +91-99569-49491

e-mail :
gaurav.gu...@acm.org
gaurav.gupta.cs...@itbhu.ac.in
1989.gau...@gmail.com


Re: [OMPI users] How to know which task on which node

2009-01-19 Thread Gijsbert Wiesenekker

gaurav gupta wrote:

Hello,

I want to know which task is running on which node. Is there any
way to find this out?
Also, is there any profiling tool provided with Open MPI to measure
the time taken in various steps?


--
GAURAV GUPTA
B.Tech III Yr. , Department of Computer Science & Engineering
IT BHU , Varanasi
Contacts
Phone No: +91-99569-49491

e-mail :
gaurav.gu...@acm.org 
gaurav.gupta.cs...@itbhu.ac.in 
1989.gau...@gmail.com 



Hi Gupta,

I ran into the same problem. In my case I wanted the root process to run on
a specific host for a synchronization step that uses rsync between the hosts
running the processes. Here is some Linux C code that might help you. It
builds an array mpi_host with the hostname of each rank, and an index array
mpi_host_rank that shows which processes are running on the same node. The
BUG, REGISTER_MALLOC and my_printf macros are thin wrappers around the C
functions assert, malloc and printf; minimal definitions are included below
so the example compiles on its own. The code assumes name resolution is the
same on all nodes.


#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <strings.h>
#include <unistd.h>
#include <mpi.h>

#ifndef LINE_MAX
#define LINE_MAX 1024
#endif
#define MPI_NPROCS_MAX 256
#define INVALID (-1)

/* Minimal stand-ins for the wrapper macros mentioned above
   (build without -DNDEBUG so the asserted calls are executed). */
#define BUG(x) assert(!(x));
#define REGISTER_MALLOC(p, type, n) { (p) = malloc((n) * sizeof(type)); assert((p) != NULL); }
#define my_printf printf

int mpi_nprocs;
int mpi_id;
int mpi_nhosts;
int mpi_root_id;
char *mpi_hosts;
char *mpi_host[MPI_NPROCS_MAX];
int mpi_host_rank[MPI_NPROCS_MAX];

int main(int argc, char **argv)
{
   int iproc;
   char hostname[LINE_MAX];

   mpi_nprocs = 1;
   mpi_id = 0;
   mpi_nhosts = 1;
   mpi_root_id = 0;

   MPI_Init(&argc, &argv);
   MPI_Comm_size(MPI_COMM_WORLD, &mpi_nprocs);
   BUG(mpi_nprocs > MPI_NPROCS_MAX)
   MPI_Comm_rank(MPI_COMM_WORLD, &mpi_id);

   BUG(gethostname(hostname, LINE_MAX) != 0)

   /* Gather the hostname of every rank into one flat buffer;
      mpi_host[iproc] points at the name of rank iproc. */
   REGISTER_MALLOC(mpi_hosts, char, LINE_MAX * mpi_nprocs)
   for (iproc = 0; iproc < mpi_nprocs; iproc++)
      mpi_host[iproc] = mpi_hosts + iproc * LINE_MAX;
   if (mpi_nprocs == 1)
      strcpy(mpi_host[0], hostname);
   else
      MPI_Allgather(hostname, LINE_MAX, MPI_CHAR,
         mpi_hosts, LINE_MAX, MPI_CHAR, MPI_COMM_WORLD);

   MPI_Barrier(MPI_COMM_WORLD);
   for (iproc = 0; iproc < mpi_nprocs; iproc++)
      mpi_host_rank[iproc] = INVALID;
   mpi_nhosts = 0;

   /* Assign each rank a host index; ranks with the same hostname
      get the same index. */
   for (iproc = 0; iproc < mpi_nprocs; iproc++)
   {
      int jproc;

      if (mpi_host_rank[iproc] != INVALID) continue;
      ++mpi_nhosts;
      BUG(mpi_nhosts > mpi_nprocs)
      mpi_host_rank[iproc] = mpi_nhosts - 1;
      for (jproc = iproc + 1; jproc < mpi_nprocs; jproc++)
      {
         if (mpi_host_rank[jproc] != INVALID) continue;
         if (strcasecmp(mpi_host[jproc], mpi_host[iproc]) == 0)
            mpi_host_rank[jproc] = mpi_host_rank[iproc];
      }
   }

   /* Find a specific host if available and make it the root. */
   mpi_root_id = 0;
   for (iproc = 0; iproc < mpi_nprocs; iproc++)
   {
      if (strcasecmp(mpi_host[iproc], "nodep140") == 0)
      {
         mpi_root_id = iproc;
         break;
      }
   }

   BUG(mpi_nprocs < 1)
   BUG(mpi_nhosts < 1)

   my_printf("hostname=%s\n", hostname);
   my_printf("mpi_nprocs=%d\n", mpi_nprocs);
   my_printf("mpi_id=%d\n", mpi_id);
   for (iproc = 0; iproc < mpi_nprocs; iproc++)
      my_printf("iproc=%d host=%s\n", iproc, mpi_host[iproc]);
   my_printf("mpi_nhosts=%d\n", mpi_nhosts);
   for (iproc = 0; iproc < mpi_nprocs; iproc++)
      my_printf("iproc=%d host_rank=%d\n", iproc, mpi_host_rank[iproc]);
   my_printf("mpi_root_id=%d host=%s host rank=%d\n",
      mpi_root_id, mpi_host[mpi_root_id], mpi_host_rank[mpi_root_id]);

   free(mpi_hosts);
   MPI_Finalize();
   return 0;
}
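
For the simpler case where each rank only needs to report where it runs (and
how long a step takes, which touches your second question), a minimal sketch
using nothing but the standard calls MPI_Get_processor_name and MPI_Wtime
could look like this; the timed section is just a placeholder:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int rank, nprocs, namelen;
   char procname[MPI_MAX_PROCESSOR_NAME];
   double t0, t1;

   MPI_Init(&argc, &argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
   MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

   /* Ask the MPI library for the name of the node this rank runs on. */
   MPI_Get_processor_name(procname, &namelen);
   printf("rank %d of %d is running on %s\n", rank, nprocs, procname);

   /* Time an arbitrary step with MPI_Wtime. */
   t0 = MPI_Wtime();
   /* ... work to be timed goes here ... */
   t1 = MPI_Wtime();
   printf("rank %d: step took %f seconds\n", rank, t1 - t0);

   MPI_Finalize();
   return 0;
}

MPI_Get_processor_name typically returns the host name of the node, so this
avoids the explicit gethostname/Allgather step when you only need a report
per rank.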




Re: [OMPI users] How to know which task on which node

2009-01-19 Thread kmuriki


Hi Gaurav,

Try using the -display-map option with the mpirun command.
I use it and it gives me the node listing along with the
MPI tasks running on those nodes.
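
For example, with a four-process job it would look something like

  mpirun -np 4 -display-map ./my_app

(my_app standing in for whatever executable you launch); the map of nodes
and the ranks placed on them is printed before the application output.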

--Krishna.

On Mon, 19 Jan 2009, gaurav gupta wrote:


Hello,

I want to know which task is running on which node. Is there any way to
find this out?
Also, is there any profiling tool provided with Open MPI to measure the
time taken in various steps?

--
GAURAV GUPTA
B.Tech III Yr. , Department of Computer Science & Engineering
IT BHU , Varanasi
Contacts
Phone No: +91-99569-49491

e-mail :
gaurav.gu...@acm.org
gaurav.gupta.cs...@itbhu.ac.in
1989.gau...@gmail.com



Re: [OMPI users] How to know which task on which node

2009-01-19 Thread Ashley Pittman
On Mon, 2009-01-19 at 12:50 +0530, gaurav gupta wrote:
> Hello,
> 
> I want to know which task is running on which node. Is there any
> way to find this out?

From where? From the command line outside of a running job, the new
open-ps command in v1.3 will give you this information. In 1.2 it's a
little more difficult to get at, IIRC.

Ashley,



[OMPI users] Problem compiling open mpi 1.3 with sunstudio12 express

2009-01-19 Thread Olivier Marsden

Hello,

I'm trying to compile ompi 1.3rc7 with the Sun Studio Express compilers.

I'm using the following configure command:

CC=/opt/sun/express/sunstudioceres/bin/cc 
CXX=/opt/sun/express/sunstudioceres/bin/CC   
F77=/opt/sun/express/sunstudioceres/bin/f77 
FC=/opt/sun/express/sunstudioceres/bin/f90  ./configure 
--prefix=/opt/mpi_sun --enable-heterogeneous  --enable-shared 
--enable-mpi-f90 --with-mpi-f90-size=small --disable-mpi-threads 
--disable-progress-threads --disable-debug  --without-udapl 
--disable-io-romio


The build and install execute correctly. However, I get the following 
when trying to use mpif90:

>> /opt/mpi_sun/bin/mpif90
gfortran: no input files

My /opt/mpi_sun/share/openmpi/mpif90-wrapper-data.txt file appears to
my layman's eye to be correct, but just in case, its contents are the
following:

project=Open MPI
project_short=OMPI
version=1.3rc7
language=Fortran 90
compiler_env=FC
compiler_flags_env=FCFLAGS
compiler=/opt/sun/express/sunstudioceres/bin/f90
module_option=-M
extra_includes=
preprocessor_flags=
compiler_flags=
linker_flags=
libs=-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal   -ldl   
-Wl,--export-dynamic -lnsl -lutil -lm -ldl

required_file=
includedir=${includedir}
libdir=${libdir}


Can anyone see why gfortran is being used? (config.log says that Sun
f90 is used.)


Thanks,

Olivier




Re: [OMPI users] Problem compiling open mpi 1.3 with sunstudio12 express

2009-01-19 Thread Douglas Guptill
When I use the Intel compilers, I have to add to my PATH and
LD_LIBRARY_PATH before using "mpif90".  I wonder if this needs to be
done in your case?
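
For example, with the Sun compilers under the prefix from your configure
line (the lib directory name below is a guess on my part):

  export PATH=/opt/sun/express/sunstudioceres/bin:$PATH
  # lib directory assumed; adjust to wherever the Sun runtime libraries live
  export LD_LIBRARY_PATH=/opt/sun/express/sunstudioceres/lib:$LD_LIBRARY_PATH

Running "mpif90 --showme" also prints the full command line the wrapper
would execute, which should show where gfortran is coming from.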

Douglas.

On Mon, Jan 19, 2009 at 05:49:53PM +0100, Olivier Marsden wrote:
> Hello,
> 
> I'm trying to compile ompi 1.3rc7 with the Sun Studio Express compilers.
> 
> I'm using the following configure command:
> 
> CC=/opt/sun/express/sunstudioceres/bin/cc 
> CXX=/opt/sun/express/sunstudioceres/bin/CC   
> F77=/opt/sun/express/sunstudioceres/bin/f77 
> FC=/opt/sun/express/sunstudioceres/bin/f90  ./configure 
> --prefix=/opt/mpi_sun --enable-heterogeneous  --enable-shared 
> --enable-mpi-f90 --with-mpi-f90-size=small --disable-mpi-threads 
> --disable-progress-threads --disable-debug  --without-udapl 
> --disable-io-romio
> 
> The build and install execute correctly. However, I get the following 
> when trying to use mpif90:
> >> /opt/mpi_sun/bin/mpif90
> gfortran: no input files
> 
> My /opt/mpi_sun/share/openmpi/mpif90-wrapper-data.txt file appears to
> my layman's eye to be correct, but just in case, its contents are the
> following:
> 
> project=Open MPI
> project_short=OMPI
> version=1.3rc7
> language=Fortran 90
> compiler_env=FC
> compiler_flags_env=FCFLAGS
> compiler=/opt/sun/express/sunstudioceres/bin/f90
> module_option=-M
> extra_includes=
> preprocessor_flags=
> compiler_flags=
> linker_flags=
> libs=-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal   -ldl   
> -Wl,--export-dynamic -lnsl -lutil -lm -ldl
> required_file=
> includedir=${includedir}
> libdir=${libdir}
> 
> 
> Can anyone see why gfortran is being used? (config.log says that Sun
> f90 is used.)
> 
> Thanks,
> 
> Olivier



Re: [OMPI users] Problem compiling open mpi 1.3 with sunstudio12 express

2009-01-19 Thread Olivier Marsden

Thanks for the suggestion. Unfortunately I had already tried that
to no avail.

Olivier

Douglas Guptill wrote:

When I use the Intel compilers, I have to add to my PATH and
LD_LIBRARY_PATH before using "mpif90".  I wonder if this needs to be
done in your case?

Douglas.

On Mon, Jan 19, 2009 at 05:49:53PM +0100, Olivier Marsden wrote:
  

Hello,

I'm trying to compile ompi 1.3rc7 with the Sun Studio Express compilers.

I'm using the following configure command:

CC=/opt/sun/express/sunstudioceres/bin/cc 
CXX=/opt/sun/express/sunstudioceres/bin/CC   
F77=/opt/sun/express/sunstudioceres/bin/f77 
FC=/opt/sun/express/sunstudioceres/bin/f90  ./configure 
--prefix=/opt/mpi_sun --enable-heterogeneous  --enable-shared 
--enable-mpi-f90 --with-mpi-f90-size=small --disable-mpi-threads 
--disable-progress-threads --disable-debug  --without-udapl 
--disable-io-romio


The build and install execute correctly. However, I get the following 
when trying to use mpif90:


>> /opt/mpi_sun/bin/mpif90
gfortran: no input files

My /opt/mpi_sun/share/openmpi/mpif90-wrapper-data.txt file appears to
my layman's eye to be correct, but just in case, its contents are the
following:

project=Open MPI
project_short=OMPI
version=1.3rc7
language=Fortran 90
compiler_env=FC
compiler_flags_env=FCFLAGS
compiler=/opt/sun/express/sunstudioceres/bin/f90
module_option=-M
extra_includes=
preprocessor_flags=
compiler_flags=
linker_flags=
libs=-lmpi_f90 -lmpi_f77 -lmpi -lopen-rte -lopen-pal   -ldl   
-Wl,--export-dynamic -lnsl -lutil -lm -ldl

required_file=
includedir=${includedir}
libdir=${libdir}


Can anyone see why gfortran is being used? (config.log says that Sun
f90 is used.)


Thanks,

Olivier







[OMPI users] Announcing the release of Open MPI version 1.3

2009-01-19 Thread Tim Mattox
The Open MPI Team, representing a consortium of research, academic,
and industry partners, is pleased to announce the release of Open MPI
version 1.3. This release contains many bug fixes, feature
enhancements, and performance improvements over the v1.2 series,
including (but not limited to):

   * MPI-2.1 compliant
   * New Notifier framework
   * Additional architectures, OSes, and batch schedulers
   * Improved thread safety
   * MPI_REAL16 and MPI_COMPLEX32
   * Improved MPI C++ bindings
   * Valgrind support
   * Updated ROMIO to the version from MPICH2-1.0.7
   * Improved Scalability
 - Process launch times reduced by an order of magnitude
 - sparse groups
 - On-demand connection setup
   * Improved point-to-point latencies
   * Better adaptive algorithms for multi-rail support
   * Additional collective algorithms; improved collective performance
   * Numerous enhancements for OpenFabrics
   * iWARP support
   * Fault Tolerance
 - coordinated checkpoint/restart
 - support for BLCR and self
   * Finer grained resource control and mapping (cores, HCAs, etc)
   * Many other new runtime features
   * Numerous bug fixes

Version 1.3 can be downloaded from the main Open MPI web site or any
of its mirrors (mirrors will be updating shortly).

We strongly recommend that all users upgrade to version 1.3 if possible.

Here is a list of some of the changes in v1.3 as compared to the v1.2 series:

- Fixed deadlock issues under heavy messaging scenarios
- Extended the OS X 10.5.x (Leopard) workaround for a problem when
  assembly code is compiled with -g[0-9].  Thanks to Barry Smith for
  reporting the problem.  See ticket #1701.
- Disabled MPI_REAL16 and MPI_COMPLEX32 support on platforms where the
  bit representation of REAL*16 is different than that of the C type
  of the same size (usually long double).  Thanks to Julien Devriendt
  for reporting the issue.  See ticket #1603.
- Increased the size of MPI_MAX_PORT_NAME to 1024 from 36. See ticket #1533.
- Added "notify debugger on abort" feature. See tickets #1509 and #1510.
  Thanks to Seppo Sahrakropi for the bug report.
- Upgraded Open MPI tarballs to use Autoconf 2.63, Automake 1.10.1,
  Libtool 2.2.6a.
- Added missing MPI::Comm::Call_errhandler() function.  Thanks to Dave
  Goodell for bringing this to our attention.
- Increased MPI_SUBVERSION value in mpi.h to 1 (i.e., MPI 2.1).
- Changed behavior of MPI_GRAPH_CREATE, MPI_TOPO_CREATE, and several
  other topology functions per MPI-2.1.
- Fix the type of the C++ constant MPI::IN_PLACE.
- Various enhancements to the openib BTL:
  - Added btl_openib_if_[in|ex]clude MCA parameters for
including/excluding comma-delimited lists of HCAs and ports.
  - Added RDMA CM support, including btl_openib_cpc_[in|ex]clude MCA
parameters
  - Added NUMA support to only use "near" network adapters
  - Added "Bucket SRQ" (BSRQ) support to better utilize registered
memory, including btl_openib_receive_queues MCA parameter
  - Added ConnectX XRC support (and integrated with BSRQ)
  - Added btl_openib_ib_max_inline_data MCA parameter
  - Added iWARP support
  - Revamped flow control mechanisms to be more efficient
  - "mpi_leave_pinned=1" is now the default when possible,
automatically improving performance for large messages when
application buffers are re-used
- Eliminated duplicated error messages when multiple MPI processes fail
  with the same error.
- Added NUMA support to the shared memory BTL.
- Add Valgrind-based memory checking for MPI-semantic checks.
- Add support for some optional Fortran datatypes (MPI_LOGICAL1,
  MPI_LOGICAL2, MPI_LOGICAL4 and MPI_LOGICAL8).
- Remove the use of the STL from the C++ bindings.
- Added support for Platform/LSF job launchers.  Must be Platform LSF
  v7.0.2 or later.
- Updated ROMIO with the version from MPICH2 1.0.7.
- Added RDMA capable one-sided component (called rdma), which
  can be used with BTL components that expose a full one-sided
  interface.
- Added the optional datatype MPI_REAL2. As this is added to the "end of"
  predefined datatypes in the fortran header files, there will not be
  any compatibility issues.
- Added Portable Linux Processor Affinity (PLPA) for Linux.
- Addition of a finer symbols export control via the visibility feature
  offered by some compilers.
- Added checkpoint/restart process fault tolerance support. Initially
  support a LAM/MPI-like protocol.
- Removed "mvapi" BTL; all InfiniBand support now uses the OpenFabrics
  driver stacks ("openib" BTL).
- Added more stringent MPI API parameter checking to help user-level
  debugging.
- The ptmalloc2 memory manager component is now by default built as
  a standalone library named libopenmpi-malloc.  Users wanting to
  use leave_pinned with ptmalloc2 will now need to link the library
  into their application explicitly.  All other users will use the
  libc-provided allocator instead of Open MPI's ptmalloc2.  This change
  may be overridden with the configure option enable-