Re: [OMPI users] MPI and C++

2009-07-04 Thread Jeff Squyres

On Jul 3, 2009, at 7:42 PM, Dorian Krause wrote:

I would discourage you from using the C++ bindings, since (to my knowledge) they might be removed from MPI 3.0 (there is such a proposal).




There is a proposal that has passed one vote so far to deprecate the C++ bindings in MPI-2.2 (meaning: still have them, but advise against using them).  This opens the door for potentially removing the C++ bindings in MPI-3.0.


As has been mentioned on this thread already, the official MPI C++  
bindings are fairly simplistic -- they take advantage of a few  
language features, but not a lot.  They are effectively a 1-to-1  
mapping to the C bindings.  The Boost.MPI library added quite a few  
nice C++-friendly abstractions on top of MPI.  But if Boost is  
unattractive for you, then your best bet is probably just to use the C  
bindings.
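
For example, here is roughly what that 1-to-1 mapping looks like (an untested sketch; the MPI:: calls are the standard MPI-2 C++ bindings, and the equivalent C calls are shown in the comments):

#include <mpi.h>

int main(int argc, char *argv[])
{
    MPI::Init(argc, argv);                    // MPI_Init(&argc, &argv);

    int size = MPI::COMM_WORLD.Get_size();    // MPI_Comm_size(MPI_COMM_WORLD, &size);
    int rank = MPI::COMM_WORLD.Get_rank();    // MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double x = rank, sum = 0.0;
    // Same arguments, same semantics -- only the spelling changes:
    MPI::COMM_WORLD.Reduce(&x, &sum, 1, MPI::DOUBLE, MPI::SUM, 0);
    // MPI_Reduce(&x, &sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    MPI::Finalize();                          // MPI_Finalize();
    return 0;
}

Boost.MPI, by contrast, layers real C++ conveniences on top of that (e.g., automatic serialization of user-defined types and STL-friendly send/receive), which is where most of its added value comes from.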


--
Jeff Squyres
Cisco Systems



Re: [OMPI users] Network Problem?

2009-07-04 Thread Jeff Squyres

Open MPI does not currently support NAT; sorry.  :-(

On Jun 30, 2009, at 2:49 PM, David Ronis wrote:


(This may be a duplicate.  An earlier post seems to have been lost).

I'm using Open MPI (1.3.2) to run on 3 dual-processor machines (running Linux, Slackware-12.1, gcc-4.4.0).  Two are directly on my LAN, while the 3rd is connected to my LAN via VPN and NAT (I can communicate in either direction between any of the LAN machines and the remote machine using its NAT address).

The program I'm trying to run is very simple in terms of MPI.
Basically it is:

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  [snip];

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

  [snip];

  /* In-place reduce: rank 0 (the root) sums directly into C;
     the receive buffer is only significant at the root. */
  if (myrank == 0)
    i = MPI_Reduce(MPI_IN_PLACE, C, N, MPI_DOUBLE,
                   MPI_SUM, 0, MPI_COMM_WORLD);
  else
    i = MPI_Reduce(C, MPI_IN_PLACE, N, MPI_DOUBLE,
                   MPI_SUM, 0, MPI_COMM_WORLD);

  if (i != MPI_SUCCESS)
    {
      fprintf(stderr, "MPI_Reduce (C) fails on processor %d\n", myrank);
      MPI_Finalize();
      exit(1);
    }
  MPI_Barrier(MPI_COMM_WORLD);

  [snip];
}
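
(For reference, a self-contained version of the same reduce pattern that compiles on its own -- the array size N and the contents of C below are placeholders, not my real values:)

#include <stdio.h>
#include <stdlib.h>
#include <mpi.h>

int main(int argc, char **argv)
{
  const int N = 1024;                /* placeholder size */
  int numprocs, myrank, i, j;
  double *C;

  MPI_Init(&argc, &argv);
  MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
  MPI_Comm_rank(MPI_COMM_WORLD, &myrank);

  C = (double *) malloc(N * sizeof(double));
  for (j = 0; j < N; j++)
    C[j] = (double) myrank;          /* placeholder data */

  /* Root reduces into C in place; the other ranks send their copy of C
     (their receive buffer argument is ignored). */
  if (myrank == 0)
    i = MPI_Reduce(MPI_IN_PLACE, C, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
  else
    i = MPI_Reduce(C, NULL, N, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

  if (i != MPI_SUCCESS)
    {
      fprintf(stderr, "MPI_Reduce (C) fails on processor %d\n", myrank);
      MPI_Finalize();
      exit(1);
    }
  MPI_Barrier(MPI_COMM_WORLD);

  if (myrank == 0)
    printf("C[0] after reduce: %g\n", C[0]);

  free(C);
  MPI_Finalize();
  return 0;
}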

I run by invoking:

mpirun -v -np ${NPROC} -hostfile ${HOSTFILE} --stdin none $* > /dev/null

If I run on the 4 nodes that are physically on the LAN, it works as
expected.  When I add the nodes on the remote machine, things don't
work properly:

1.  If I start with NPROC=6 on one of the LAN machines, all 6 processes
start (as shown by running ps), and all get to the MPI_HARVEST
calls.  At that point things hang (I see no network traffic, which,
given the size of the array I'm trying to reduce, is strange).

2.  If I start on the remote with NPROC=6, only the mpirun call
shows up under ps on the remote, while nothing shows up on the other
nodes.  Killing the process gives messages like:

 hostname - daemon did not report back when launched

3.  If I start on the remote with NPROC=2, the 2 processes start on
the remote and finish properly.

My suspicion is that there's some bad interaction between NAT and
authentication.

Any suggestions?

David







--
Jeff Squyres
Cisco Systems



Re: [OMPI users] MPI and C++

2009-07-04 Thread Robert Kubrick


On Jul 4, 2009, at 8:24 AM, Jeff Squyres wrote:


On Jul 3, 2009, at 7:42 PM, Dorian Krause wrote:

I would discourage you from using the C++ bindings, since (to my knowledge) they might be removed from MPI 3.0 (there is such a proposal).




There is a proposal that has passed one vote so far to deprecate  
the C++ bindings in MPI-2.2 (meaning: still have them, but advise  
against using them).  This opens the door for potentially removing  
the C++ bindings in MPI-3.0.



Is the reason for this to boost adoption of the 'boost' library?




As has been mentioned on this thread already, the official MPI C++  
bindings are fairly simplistic -- they take advantage of a few  
language features, but not a lot.  They are effectively a 1-to-1  
mapping to the C bindings.  The Boost.MPI library added quite a few  
nice C++-friendly abstractions on top of MPI.  But if Boost is  
unattractive for you, then your best bet is probably just to use  
the C bindings.


--
Jeff Squyres
Cisco Systems





Re: [OMPI users] MPI and C++

2009-07-04 Thread Luis Vitorio Cargnini

Thanks Jeff.

On 09-07-04, at 08:24, Jeff Squyres wrote:

There is a proposal that has passed one vote so far to deprecate the  
C++ bindings in MPI-2.2 (meaning: still have them, but advise  
against using them).  This opens the door for potentially removing  
the C++ bindings in MPI-3.0.


As has been mentioned on this thread already, the official MPI C++  
bindings are fairly simplistic -- they take advantage of a few  
language features, but not a lot.  They are effectively a 1-to-1  
mapping to the C bindings.  The Boost.MPI library added quite a few  
nice C++-friendly abstractions on top of MPI.  But if Boost is  
unattractive for you, then your best bet is probably just to use the  
C bindings.






[OMPI users] build-rpm

2009-07-04 Thread rahmani
Hi everyone,

I built an RPM for openmpi-1.3.2 with the openmpi.spec and buildrpm.sh from
http://www.open-mpi.org/software/ompi/v1.3/srpm.php

I changed buildrpm.sh as follows:
prefix="/usr/local/openmpi/intel/1.3.2"
specfile="openmpi.spec"
#rpmbuild_options=${rpmbuild_options:-"--define 'mflags -j4'"}
# -j4 is an option to make; it specifies the number of jobs (4) to run simultaneously.
rpmbuild_options="--define 'mflags -j4'"
#configure_options=${configure_options:-""}
configure_options="FC=/opt/intel/Compiler/11.0/069/bin/intel64/ifort \
F77=/opt/intel/Compiler/11.0/069/bin/intel64/ifort \
CC=/opt/intel/Compiler/11.0/069/bin/intel64/icc \
CXX=/opt/intel/Compiler/11.0/069/bin/intel64/icpc \
--with-sge --with-threads=posix --enable-mpi-threads"

# install ${prefix}/bin/mpivars.* scripts
rpmbuild_options=${rpmbuild_options}" --define 'install_in_opt 0' --define 'install_shell_scripts 1' --define 'install_modulefile 0'"
# prefix variable has to be passed to rpmbuild
rpmbuild_options=${rpmbuild_options}" --define '_prefix ${prefix}'"


# Note that this script can build one or all of the following RPMs:
# SRPM, all-in-one, multiple.

# If you want to build the SRPM, put "yes" here
build_srpm=${build_srpm:-"no"}
# If you want to build the "all in one RPM", put "yes" here
build_single=${build_single:-"yes"}
# If you want to build the "multiple" RPMs, put "yes" here
build_multiple=${build_multiple:-"no"}

It creates openmpi-1.3.2-1.x86_64.rpm with no errors, but when I install it with
rpm -ivh I see:
error: Failed dependencies:
libifcoremt.so.5()(64bit) is needed by openmpi-1.3.2-1.x86_64
libifport.so.5()(64bit) is needed by openmpi-1.3.2-1.x86_64
libimf.so()(64bit) is needed by openmpi-1.3.2-1.x86_64
libintlc.so.5()(64bit) is needed by openmpi-1.3.2-1.x86_64
libiomp5.so()(64bit) is needed by openmpi-1.3.2-1.x86_64
libsvml.so()(64bit) is needed by openmpi-1.3.2-1.x86_64
libtorque.so.2()(64bit) is needed by openmpi-1.3.2-1.x86_64
but all of the above libraries are on my computer.

I used rpm -ivh --nodeps and it installed completely, but when I use mpif90 and
mpirun I see:
  $ /usr/local/openmpi/intel/1.3.2/bin/mpif90
gfortran: no input files   (I compile with ifort)

  $ /usr/local/openmpi/intel/1.3.2/bin/mpirun
/usr/local/openmpi/intel/1.3.2/bin/mpirun: symbol lookup error: /usr/local/openmpi/intel/1.3.2/bin/mpirun: undefined symbol: orted_cmd_line

What is wrong?
Please help me.