[OMPI users] openmpi-1.7.3rc2r29276 doesn't honour --with-wrapper-libs

2013-09-28 Thread Siegmar Gross
Hi,

I installed openmpi-1.7.3rc2r29276 on my platforms (Solaris Sparc,
Solaris x86_64, and Linux x86_64) with Sun C 5.12 and gcc-4.8.0 in
32- and 64-bit versions. On Solaris Sparc I configured with

  LIBS="-lgcc_s" \
  --with-wrapper-cflags="-std=c11 -m64" \
  --with-wrapper-libs="-lgcc_s" \

"-lgcc_s" is neccessary to build Open MPI and later compile MPI programs.

tyr openmpi-1.7.3rc2r29276-SunOS.sparc.64_gcc 19 ../openmpi-1.7.3rc2r29276/configure \
  --help | grep wrapper-libs
  --with-wrapper-libs Extra flags to add to LIBS when using wrapper


tyr openmpi-1.7.3rc2r29276-SunOS.sparc.64_gcc 20 more config.log
This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 1.7.3rc2, which was
generated by GNU Autoconf 2.69.  Invocation command line was

  $ ../openmpi-1.7.3rc2r29276/configure --prefix=/usr/local/openmpi-1.7_64_gcc \
  --libdir=/usr/local/openmpi-1.7_64_gcc/lib64 \
  --with-jdk-bindir=/usr/local/jdk1.7.0_07/bin/sparcv9 \
  --with-jdk-headers=/usr/local/jdk1.7.0_07/include \
  JAVA_HOME=/usr/local/jdk1.7.0_07 LDFLAGS=-m64 \
  -L/usr/local/gcc-4.8.0/lib/sparcv9 LIBS=-lgcc_s \
  CC=gcc CXX=g++ FC=gfortran CFLAGS=-m64 CXXFLAGS=-m64 FCFLAGS=-m64 CPP=cpp \
  CXXCPP=cpp CPPFLAGS= CXXCPPFLAGS= --enable-cxx-exceptions --enable-mpi-java \
  --enable-heterogeneous --enable-opal-multi-threads --enable-mpi-thread-multiple \
  --with-threads=posix --with-hwloc=internal --without-verbs --without-udapl \
  --with-wrapper-cflags=-std=c11 -m64 --with-wrapper-libs=-lgcc_s --enable-debug
...


tyr hello_1 114 which mpicc
/usr/local/openmpi-1.7_64_gcc/bin/mpicc

tyr hello_1 115 ls -l /usr/local/openmpi-1.7_64_gcc/bin/mpicc
lrwxrwxrwx 1 root root 12 Sep 28 10:30 /usr/local/openmpi-1.7_64_gcc/bin/mpicc -> opal_wrapper

tyr hello_1 116 mpicc -showme
gcc -I/usr/local/openmpi-1.7_64_gcc/include -fexceptions -pthread -std=c11 -m64 \
  -L/usr/local/openmpi-1.7_64_gcc/lib64 -lmpi


Unfortunately the wrapper compiler doesn't add "-lgcc_s", so I cannot compile
programs.

tyr hello_1 117 mpicc hello_1_mpi.c 
/usr/local/openmpi-1.7_64_gcc/lib64/libmpi.so: undefined reference to `__muldc3@GCC_4.0.0'
/usr/local/openmpi-1.7_64_gcc/lib64/libmpi.so: undefined reference to `__mulsc3@GCC_4.0.0'
/usr/local/openmpi-1.7_64_gcc/lib64/libmpi.so: undefined reference to `__multc3@GCC_4.0.0'
collect2: error: ld returned 1 exit status
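
For what it's worth, linking the library by hand (e.g. "mpicc hello_1_mpi.c -lgcc_s")
should resolve these undefined references, since the __mul*c3 helpers live in
libgcc_s; adding "-lgcc_s" to the "libs=" line of share/openmpi/mpicc-wrapper-data.txt
should work as well, if I read the wrapper data file correctly. But of course the
wrapper is supposed to add the flag automatically via --with-wrapper-libs.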


I would be grateful if somebody could fix the problem. Perhaps "-lgcc_s" could even
be added as a default library to the build process and the wrapper compiler.
Thank you very much for your help in advance.


Kind regards

Siegmar



[OMPI users] mpi_barrier

2013-09-28 Thread Huangwei
Dear All,

In my code I implement mpi_send/mpi_receive for a three-dimensional real
array, and the process is as follows:

All other processes send their array to rank 0, rank 0 receives the arrays and
puts them into a complete array, and then mpi_bcast is called to send the
complete array from rank 0 to all the others.
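
In C-like form, a minimal sketch of this pattern would be (the names and sizes
are only illustrative; my actual code is Fortran and the array is
three-dimensional):

  /* rank 0 collects one chunk from every rank, then broadcasts the result */
  #include <mpi.h>

  void collect_and_broadcast(double *local, double *global, int chunk,
                             int rank, int nprocs, MPI_Comm comm)
  {
      if (rank == 0) {
          for (int i = 0; i < chunk; i++)        /* rank 0's own piece    */
              global[i] = local[i];
          for (int p = 1; p < nprocs; p++)       /* pieces of the others  */
              MPI_Recv(global + (long)p * chunk, chunk, MPI_DOUBLE, p, 0,
                       comm, MPI_STATUS_IGNORE);
      } else {
          MPI_Send(local, chunk, MPI_DOUBLE, 0, 0, comm);
      }
      MPI_Bcast(global, chunk * nprocs, MPI_DOUBLE, 0, comm);  /* back out */
  }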

This is very basic usage of mpi_send and mpi_receive. In my Fortran code I found
that if I add a call to mpi_barrier(...) before the mpi_send and mpi_receive
statements, the wall time for this sending and receiving (2 s) is much lower than
when mpi_barrier is not called (60 s). I used mpi_wtime to measure the time.

I think mpi_send and mpi_recv are blocking subroutines, so no additional
mpi_barrier should be needed. Can anybody tell me the reason for this
phenomenon? Thank you very much.

best regards,
Huangwei


Re: [OMPI users] mpi_barrier

2013-09-28 Thread George Bosilca

On Sep 29, 2013, at 01:19, Huangwei wrote:

> Dear All, 
> 
> In my code I implement mpi_send/mpi_receive for a three-dimensional real 
> array, and the process is as follows:
> 
> All other processes send their array to rank 0, rank 0 receives the arrays 
> and puts them into a complete array, and then mpi_bcast is called to send the 
> complete array from rank 0 to all the others.

This pattern of communication reminds me of an MPI_Allgather (or its more 
flexible version, MPI_Allgatherv).
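
Assuming every rank contributes the same number of elements, the whole
send/recv + bcast exchange collapses into a single call (just a sketch, reusing
the names from the description above):

  MPI_Allgather(local, chunk, MPI_DOUBLE,
                global, chunk, MPI_DOUBLE, comm);

  /* MPI_Allgatherv(local, mycount, MPI_DOUBLE,
   *                global, counts, displs, MPI_DOUBLE, comm);
   * is the variant to use when the per-rank counts differ. */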

> This is very basic usage of mpi_send and mpi_receive. In my Fortran code I 
> found that if I add a call to mpi_barrier(...) before the mpi_send and 
> mpi_receive statements, the wall time for this sending and receiving (2 s) is 
> much lower than when mpi_barrier is not called (60 s). I used mpi_wtime to 
> measure the time.

In a parallel application each process is out of sync with the others. I have no 
idea how you measure the time in the original version, but I guess that in the 
MPI_Barrier case you start your timer after the barrier. As the barrier puts all 
processes in sync, you only measure the real time needed to exchange the data, 
which therefore seems shorter.
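
In other words (sketch only, with placeholder names):

  double t0, t1;
  MPI_Barrier(comm);                 /* the synchronization you added      */
  t0 = MPI_Wtime();
  /* ... the send/recv + bcast exchange ... */
  t1 = MPI_Wtime();
  /* With the barrier, t1 - t0 on rank 0 is roughly the pure communication
   * time; without it, rank 0's timer also counts the time it spends
   * waiting for ranks that reach their MPI_Send late. */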

> I think mpi_send and mpi_recv are blocking subroutines, so no additional 
> mpi_barrier should be needed. Can anybody tell me the reason for this 
> phenomenon? Thank you very much.

Yes, these operations are indeed blocking, which is why you see the slowdown. 
If a single process is late to send its contribution, the entire operation is 
penalized (as the root, i.e. process zero, waits for the contributions in order). 
So you should either use the collective pattern I highlighted before, switch from 
blocking to non-blocking point-to-point, or look into the potential benefit of a 
non-blocking collective.
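
For the last option, the MPI 3.0 non-blocking collective would look roughly like
this (sketch only; it needs an implementation that provides MPI_Iallgather, which
recent Open MPI versions do):

  MPI_Request req;
  MPI_Iallgather(local, chunk, MPI_DOUBLE,
                 global, chunk, MPI_DOUBLE, comm, &req);
  /* ... independent computation that does not touch global ... */
  MPI_Wait(&req, MPI_STATUS_IGNORE);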

  George.
