Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Tom Rosmond
I'm afraid I can't answer that.   Here's my environment:

 OpenMPI 1.6.1
 IFORT 12.0.3.174
 Scientific Linux 6.4

 What Fortran compiler are you using?

T. Rosmond



On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
> Thanks for the input but it still doesn't work for me...  Here's the
> version without MPI_IN_PLACE that does work:
> 
> program test
> use mpi
> integer :: ierr, myrank, a(2), a_loc(2) = 0
> call MPI_Init(ierr)
> call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
> if (myrank == 0) then
>   a_loc(1) = 1
>   a_loc(2) = 2
> else
>   a_loc(1) = 3
>   a_loc(2) = 4
> endif
> call MPI_Allreduce(a_loc,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
> write(*,*) myrank, a(:)
> call MPI_Finalize(ierr)
> end program test
> 
> $ openmpif90 test.f90
> $ openmpirun -np 2 a.out
>0   4   6
>1   4   6
> 
> Now I'd be curious to know why your OpenMPI implementation handles
> MPI_IN_PLACE correctly and not mine!
> 
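For reference, the MPI_IN_PLACE version under discussion (not posted in the
thread) would look roughly like the following.  This is a minimal sketch that
reuses the data from the working example above:

program test_inplace
use mpi
integer :: ierr, myrank, a(2)
call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
! Each rank puts its local contribution directly into a(:);
! no separate a_loc buffer is needed.
if (myrank == 0) then
  a(1) = 1
  a(2) = 2
else
  a(1) = 3
  a(2) = 4
endif
! Passing MPI_IN_PLACE as the send buffer tells MPI_Allreduce to use
! a(:) as both input and output on every rank.
call MPI_Allreduce(MPI_IN_PLACE,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
write(*,*) myrank, a(:)   ! expected on both ranks: 4 and 6
call MPI_Finalize(ierr)
end program test_inplace

With a working MPI_IN_PLACE, both ranks should print 4 and 6, exactly as in the
two-buffer version.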




Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Tom Rosmond
Just as an experiment, try replacing

use mpi

  with

include 'mpif.h'

If that fixes the problem, you can confront the OpenMPI experts.

T. Rosmond



On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
> Thanks for the input but it still doesn't work for me...  Here's the
> version without MPI_IN_PLACE that does work:
> 
> program test
> use mpi
> integer :: ierr, myrank, a(2), a_loc(2) = 0
> call MPI_Init(ierr)
> call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
> if (myrank == 0) then
>   a_loc(1) = 1
>   a_loc(2) = 2
> else
>   a_loc(1) = 3
>   a_loc(2) = 4
> endif
> call MPI_Allreduce(a_loc,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
> write(*,*) myrank, a(:)
> call MPI_Finalize(ierr)
> end program test
> 
> $ openmpif90 test.f90
> $ openmpirun -np 2 a.out
>0   4   6
>1   4   6
> 
> Now I'd be curious to know why your OpenMPI implementation handles
> MPI_IN_PLACE correctly and not mine!
> 




Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Hugo Gagnon
Nope, no luck.  My environment is:

OpenMPI 1.6.5
gcc 4.8.1
Mac OS 10.8

I found a ticket reporting a similar problem on OS X:

https://svn.open-mpi.org/trac/ompi/ticket/1982

It said to make sure $prefix/share/ompi/mpif90-wrapper-data.txt had the
following line:

compiler_flags=-Wl,-commons,use_dylibs

I checked mine and it does (I even tried passing the flag explicitly on the
command line, but without success).  What should I do next?
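
(For completeness: the full command line the wrapper actually invokes can be
inspected with

$ openmpif90 --showme

assuming the wrapper accepts the standard Open MPI --showme options;
-Wl,-commons,use_dylibs should appear in that output.)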

-- 
  Hugo Gagnon

On Sat, Sep 7, 2013, at 0:39, Tom Rosmond wrote:
> Just as an experiment, try replacing
> 
> use mpi
> 
>   with
> 
> include 'mpif.h'
> 
> If that fixes the problem, you can confront the OpenMPI experts.
> 
> T. Rosmond
> 
> 
> 
> On Fri, 2013-09-06 at 23:14 -0400, Hugo Gagnon wrote:
> > Thanks for the input but it still doesn't work for me...  Here's the
> > version without MPI_IN_PLACE that does work:
> > 
> > program test
> > use mpi
> > integer :: ierr, myrank, a(2), a_loc(2) = 0
> > call MPI_Init(ierr)
> > call MPI_Comm_rank(MPI_COMM_WORLD,myrank,ierr)
> > if (myrank == 0) then
> >   a_loc(1) = 1
> >   a_loc(2) = 2
> > else
> >   a_loc(1) = 3
> >   a_loc(2) = 4
> > endif
> > call MPI_Allreduce(a_loc,a,2,MPI_INTEGER,MPI_SUM,MPI_COMM_WORLD,ierr)
> > write(*,*) myrank, a(:)
> > call MPI_Finalize(ierr)
> > end program test
> > 
> > $ openmpif90 test.f90
> > $ openmpirun -np 2 a.out
> >0   4   6
> >1   4   6
> > 
> > Now I'd be curious to know why your OpenMPI implementation handles
> > MPI_IN_PLACE correctly and not mine!
> > 
> 
> 


Re: [OMPI users] MPI_IN_PLACE in a call to MPI_Allreduce in Fortran

2013-09-07 Thread Tom Rosmond
What Fortran compiler was your OpenMPI built with?  Some Fortran compilers don't
handle MPI_IN_PLACE correctly.  A web search for 'fortran MPI_IN_PLACE' turns up
several instances.

T. Rosmond



On Sat, 2013-09-07 at 10:16 -0400, Hugo Gagnon wrote:
> Nope, no luck.  My environment is:
> 
> OpenMPI 1.6.5
> gcc 4.8.1
> Mac OS 10.8
> 
> I found a ticket reporting a similar problem on OS X:
> 
> https://svn.open-mpi.org/trac/ompi/ticket/1982
> 
> It said to make sure $prefix/share/ompi/mpif90-wrapper-data.txt had the
> following line:
> 
> compiler_flags=-Wl,-commons,use_dylibs
> 
> I checked mine and it does (I even tried passing the flag explicitly on the
> command line, but without success).  What should I do next?
> 




[OMPI users] linker library file for both fortran and c compilers

2013-09-07 Thread basma a . azeem
Sorry for the trivial question; I am new to Open MPI and parallel computing.

I installed openmpi-1.6.1 on my PC, which runs Ubuntu 12.10.  I also have the
NAS Parallel Benchmarks and need to edit the NPB make file "make.def".  I need
to know what the linker library flags are for both the Fortran and C compilers,
and where I can find the libraries in the build folder (I think they should be
in the lib folder).  It is F_LIB and C_LIB that are required.
This is the NPB make file:


#---
#
#SITE- AND/OR PLATFORM-SPECIFIC DEFINITIONS. 
#
#---

#---
# Items in this file will need to be changed for each platform.
#---

#---
# Parallel Fortran:
#
# For CG, EP, FT, MG, LU, SP, BT and UA, which are in Fortran, the following 
# must be defined:
#
# F77- Fortran compiler
# FFLAGS - Fortran compilation arguments
# F_INC  - any -I arguments required for compiling Fortran 
# FLINK  - Fortran linker
# FLINKFLAGS - Fortran linker arguments
# F_LIB  - any -L and -l arguments required for linking Fortran 
# 
# compilations are done with $(F77) $(F_INC) $(FFLAGS) or
#$(F77) $(FFLAGS)
# linking is done with   $(FLINK) $(F_LIB) $(FLINKFLAGS)
#---

#---
# This is the fortran compiler used for Fortran programs
#---
F77 = f77
# This links fortran programs; usually the same as ${F77}
FLINK= $(F77)

#---
# These macros are passed to the linker 
#---
F_LIB  =

#---
# These macros are passed to the compiler 
#---
F_INC =

#---
# Global *compile time* flags for Fortran programs
#---
FFLAGS= -O

#---
# Global *link time* flags. Flags for increasing maximum executable 
# size usually go here. 
#---
FLINKFLAGS = -O


#---
# Parallel C:
#
# For IS and DC, which are in C, the following must be defined:
#
# CC - C compiler 
# CFLAGS - C compilation arguments
# C_INC  - any -I arguments required for compiling C 
# CLINK  - C linker
# CLINKFLAGS - C linker flags
# C_LIB  - any -L and -l arguments required for linking C 
#
# compilations are done with $(CC) $(C_INC) $(CFLAGS) or
#$(CC) $(CFLAGS)
# linking is done with   $(CLINK) $(C_LIB) $(CLINKFLAGS)
#---

#---
# This is the C compiler used for C programs
#---
CC = cc
# This links C programs; usually the same as ${CC}
CLINK= $(CC)

#---
# These macros are passed to the linker 
#---
C_LIB  = -lm

#---
# These macros are passed to the compiler 
#---
C_INC =

#---
# Global *compile time* flags for C programs
# DC inspects the following flags (preceded by "-D"):
#
# IN_CORE - computes all views and checksums in main memory (if there is 
# enough memory)
#
# VIEW_FILE_OUTPUT - forces DC to write the generated views to disk
#
# OPTIMIZATION - turns on some nonstandard DC optimizations
#
# _FILE_OFFSET_BITS=64 
# _LARGEFILE64_SOURCE - are standard compiler flags which allow to work with 
# files larger than 2GB.
#---
CFLAGS= -O

#---
# Global *link time* flags. Flags for increasing maximum executable 
# size usually go here. 
#
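
For what it's worth, the usual way to fill in make.def for an MPI build of NPB
is to point F77 and CC at the Open MPI wrapper compilers, which add the required
-L and -l options themselves, so F_LIB and C_LIB can normally stay empty.  A
sketch, assuming mpif77 and mpicc from your Open MPI installation are on the
PATH:

F77 = mpif77
FLINK = $(F77)
F_LIB =
FFLAGS = -O
FLINKFLAGS = -O

CC = mpicc
CLINK = $(CC)
C_LIB = -lm
CFLAGS = -O
CLINKFLAGS = -O

If you prefer to call the compilers directly instead, the Open MPI libraries are
installed under $prefix/lib (in the 1.6 series typically -lmpi_f77 -lmpi for
Fortran and -lmpi for C), but the wrapper compilers are the simpler route.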