[OMPI users] OpenMPI 1.8.0 + PGI 13.6 = undeclared variable __LDBL_MANT_DIG__

2014-04-06 Thread Filippo Spiga
Dear all,
  I am trying to compile Open MPI 1.8 with PGI 13.6. I am deliberately using
this old version of PGI so that results and performance can be compared against
old benchmarks. I ran configure as follows:

./configure  CC=pgcc CXX=pgCC FC=pgf90 F90=pgfortran CFLAGS="-noswitcherror" 
FCFLAGS="-noswitcherror" CXXFLAGS="-noswitcherror" 
--prefix=/usr/local/Cluster-Users/fs395/openmpi-1.8.0/pgi-13.6 
--with-hwloc=internal --enable-mca-no-build=btl-usnic --with-verbs


and here is where the build got stuck:

make[2]: Entering directory 
`/home/fs395/archive/openmpi-1.8/build/opal/datatype'
  CCLD libdatatype_reliable.la
  CC   opal_convertor.lo
PGC-S-0039-Use of undeclared variable __LDBL_MANT_DIG__ 
(../../../opal/util/arch.h: 268)
PGC/x86 Linux 13.6-0: compilation completed with severe errors
make[2]: *** [opal_convertor.lo] Error 1
make[2]: Leaving directory `/home/fs395/archive/openmpi-1.8/build/opal/datatype'
make[1]: *** [install-recursive] Error 1
make[1]: Leaving directory `/home/fs395/archive/openmpi-1.8/build/opal'
make: *** [install-recursive] Error 1


Any suggestions? Googling does not help much...
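
The compile error suggests that pgcc 13.6 does not predefine __LDBL_MANT_DIG__,
one of the GCC-style floating-point macros that opal/util/arch.h relies on
(line 268 in the message above). A stand-alone probe (hypothetical, not part of
Open MPI) makes that easy to confirm, and also prints the <float.h> value one
would need if the macro has to be supplied by hand:

/* probe.c - check whether the compiler predefines the GCC-style macro
 * used by opal/util/arch.h, and print the <float.h> equivalent. */
#include <float.h>
#include <stdio.h>

int main(void)
{
#ifdef __LDBL_MANT_DIG__
    printf("__LDBL_MANT_DIG__ = %d\n", __LDBL_MANT_DIG__);
#else
    printf("__LDBL_MANT_DIG__ is not predefined by this compiler\n");
#endif
    printf("LDBL_MANT_DIG from <float.h> = %d\n", LDBL_MANT_DIG);
    return 0;
}

Compile and run it with "pgcc probe.c -o probe && ./probe". If the macro really
is missing, one possible (untested) workaround is to pass the value explicitly
on the configure line, e.g. CFLAGS="-noswitcherror -D__LDBL_MANT_DIG__=64",
using whatever value the probe reports for LDBL_MANT_DIG; other __LDBL_* macros
may need the same treatment if the build trips over them later.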

F

--
Mr. Filippo SPIGA, M.Sc.
http://www.linkedin.com/in/filippospiga ~ skype: filippo.spiga

«Nobody will drive us out of Cantor's paradise.» ~ David Hilbert





[OMPI users] Openmpi 1.8 "rmaps seq" doesn't work

2014-04-06 Thread Chen Bill
Hi,

I just tried Open MPI 1.8, but I found that the --mca rmaps seq feature
doesn't work.

For example:

>mpirun -np 4 -hostfile hostsfle --mca rmaps seq hostname

It shows the error below:

--
Your job failed to map. Either no mapper was available, or none
of the available mappers was able to perform the requested
mapping operation. This can happen if you request a map type
(e.g., loadbalance) and the corresponding mapper was not built.
--

but when I run ompi_info, it shows that this component is present:


>ompi_info |grep -i rmaps
   MCA rmaps: lama (MCA v2.0, API v2.0, Component v1.8)
   MCA rmaps: mindist (MCA v2.0, API v2.0, Component v1.8)
   MCA rmaps: ppr (MCA v2.0, API v2.0, Component v1.8)
   MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.8)
   MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.8)
   MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.8)
  * MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.8)*
   MCA rmaps: staged (MCA v2.0, API v2.0, Component v1.8)
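
For reference, a minimal sketch of how the seq mapper is normally fed (the file
name and host names below are illustrative, not taken from the original post):
the seq mapper assigns ranks to hosts in the order they appear in the hostfile,
one process per line, so the file needs one line, possibly with repeated hosts,
for every rank requested with -np.

# myhosts - one line per rank; hosts may repeat
node01
node01
node02
node02

>mpirun -np 4 -hostfile myhosts --mca rmaps seq hostname

If the hostfile has fewer lines than -np requests, the mapping can fail, so
that is worth ruling out before suspecting the seq component itself.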

Any suggestions?

Many thanks,
Bill


Re: [OMPI users] performance of MPI_Iallgatherv

2014-04-06 Thread Zehan Cui
Hi Matthieu,

Thanks for your suggestion. I tried MPI_Waitall(), but the results are the
same. It seems the communication did not overlap with the computation.
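
For reference, below is a minimal, self-contained sketch of the pattern
Matthieu suggests (one MPI_Iallgatherv per piece, completed by a single
MPI_Waitall); the buffer sizes and names are illustrative assumptions, not
taken from the attached benchmark:

/* iallgatherv_overlap.c - post one MPI_Iallgatherv per piece, then complete
 * them all with a single MPI_Waitall.  Sizes and names are illustrative. */
#include <mpi.h>
#include <stdlib.h>

#define PIECES       8
#define PIECE_ELEMS  (1 << 20)        /* doubles per piece per rank (example) */

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* contiguous local data, one slice per piece; no buffer (including the
     * counts and displacement arrays) may be modified until its request
     * completes */
    double *sendbuf = malloc((size_t)PIECES * PIECE_ELEMS * sizeof(double));
    double *recvbuf = malloc((size_t)PIECES * size * PIECE_ELEMS * sizeof(double));
    int    *counts  = malloc(size * sizeof(int));
    int    *displs  = malloc((size_t)PIECES * size * sizeof(int));
    MPI_Request req[PIECES];

    for (int r = 0; r < size; r++)
        counts[r] = PIECE_ELEMS;      /* every rank contributes the same amount */

    for (int p = 0; p < PIECES; p++) {
        for (int r = 0; r < size; r++)        /* per-piece displacements */
            displs[p * size + r] = (p * size + r) * PIECE_ELEMS;

        /* ... computation filling sendbuf[p * PIECE_ELEMS ...] goes here ... */

        MPI_Iallgatherv(&sendbuf[(size_t)p * PIECE_ELEMS], PIECE_ELEMS, MPI_DOUBLE,
                        recvbuf, counts, &displs[p * size], MPI_DOUBLE,
                        MPI_COMM_WORLD, &req[p]);
    }

    MPI_Waitall(PIECES, req, MPI_STATUSES_IGNORE);  /* complete everything at once */

    free(sendbuf); free(recvbuf); free(counts); free(displs);
    MPI_Finalize();
    return 0;
}

Even with this structure, how much overlap is actually achieved depends on
whether the library makes progress on the non-blocking collective while the
application is computing; if progress only happens inside MPI calls, most of
the transfer is simply deferred to the MPI_Waitall. That would be consistent
with the numbers quoted below: the non-blocking run's communication plus wait
time (199046 + 139841 = 338887 us) is slightly above the blocking run's
319803 us, so essentially none of the transfer was hidden.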

Regards,
Zehan

On 4/5/14, Matthieu Brucher  wrote:
> Hi,
>
> Try waiting on all of the gathers at the same time, not one by one (this is
> what non-blocking collectives are made for!)
>
> Cheers,
>
> Matthieu
>
> 2014-04-05 10:35 GMT+01:00 Zehan Cui :
>> Hi,
>>
>> I'm testing the non-blocking collectives of Open MPI 1.8.
>>
>> I have two nodes connected with InfiniBand, and I perform an allgather on
>> 128 MB of data in total.
>>
>> I split the 128 MB of data into eight pieces, and in each iteration I perform
>> computation and an MPI_Iallgatherv() on one piece, hoping that the
>> MPI_Iallgatherv() posted in the previous iteration overlaps with the
>> computation of the current iteration. MPI_Wait() is called on each request
>> after the last iteration.
>>
>> However, the total communication time (including the final wait time) is
>> similar to that of the traditional blocking MPI_Allgatherv, or even slightly
>> higher.
>>
>>
>> The test pseudo-code follows; the full source code is attached.
>>
>> ===
>>
>> Using MPI_Allgatherv:
>>
>> for( i=0; i<8; i++ )
>> {
>>     // computation
>>     mytime( t_begin );
>>     computation;
>>     mytime( t_end );
>>     comp_time += (t_end - t_begin);
>>
>>     // communication
>>     t_begin = t_end;
>>     MPI_Allgatherv();
>>     mytime( t_end );
>>     comm_time += (t_end - t_begin);
>> }
>> 
>>
>> Using MPI_Iallgatherv:
>>
>> for( i=0; i<8; i++ )
>> {
>>     // computation
>>     mytime( t_begin );
>>     computation;
>>     mytime( t_end );
>>     comp_time += (t_end - t_begin);
>>
>>     // communication
>>     t_begin = t_end;
>>     MPI_Iallgatherv( ..., &request[i] );
>>     mytime( t_end );
>>     comm_time += (t_end - t_begin);
>> }
>>
>> // wait for the non-blocking allgathers to complete
>> mytime( t_begin );
>> for( i=0; i<8; i++ )
>>     MPI_Wait( &request[i], &status );
>> mytime( t_end );
>> wait_time = t_end - t_begin;
>>
>> ==
>>
>> The results of Allgatherv are:
>> [cmy@gnode102 test_nbc]$ /home3/cmy/czh/opt/ompi-1.8/bin/mpirun -n 2
>> --host
>> gnode102,gnode103 ./Allgatherv 128 2 | grep time
>> Computation time  : 8481279 us
>> Communication time: 319803 us
>>
>> The results of Iallgatherv are:
>> [cmy@gnode102 test_nbc]$ /home3/cmy/czh/opt/ompi-1.8/bin/mpirun -n 2
>> --host
>> gnode102,gnode103 ./Iallgatherv 128 2 | grep time
>> Computation time  : 8479177 us
>> Communication time: 199046 us
>> Wait time:  139841 us
>>
>>
>> So, does this mean that the current Open MPI implementation of
>> MPI_Iallgatherv does not support offloading collective communication to
>> dedicated cores or to the network interface?
>>
>> Best regards,
>> Zehan
>>
>
>
>
> --
> Information System Engineer, Ph.D.
> Blog: http://matt.eifelle.com
> LinkedIn: http://www.linkedin.com/in/matthieubrucher
> Music band: http://liliejay.com/
>


-- 
Best Regards
Zehan Cui (崔泽汉)
---
Institute of Computing Technology, Chinese Academy of Sciences.
No. 6 Kexueyuan South Road, Zhongguancun, Haidian District, Beijing, China