[OMPI users] multi-compiler builds of OpenMPI (RPM)

2007-12-31 Thread Jim Kusznir
Hi all:

I'm trying to set up a ROCKS cluster (CentOS 4.5) with OpenMPI and
GCC, PGI, and Intel compilers.  My understanding is that OpenMPI must
be compiled with each compiler.  The result (or at least, the runtime
libs) must be in .rpm format, as that is required by ROCKS compute
node deployment system.  I am also using environment modules to manage
users' environment and selecting which version of OpenMPI/compiler.

I have some questions, though.
1) Am I correct in that OpenMPI needs to be compiled with each
compiler that will be used with it?

I am currently trying to make rpms using the included .spec file
(contrib/dist/linux/openmpi.spec, IIRC).
2) How do I use it to build against different compilers and end up
with non-colliding namespaces, etc?
I am currently using the following command line:
rpmbuild -bb --define 'install_in_opt 1' \
    --define 'install_modulefile 1' \
    --define 'build_all_in_one_rpm 0' \
    --define 'configure_options --with-tm=/opt/torque' openmpi.spec
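For the non-GCC builds, I was imagining driving one build per compiler along these lines (an untested sketch; the CC/CXX/F77/FC settings for Intel and PGI are my guesses, and 'echo' keeps it a dry run):

```shell
# Untested sketch: one RPM build per compiler family.  The CC/CXX/F77/FC
# values for Intel and PGI are guesses; drop 'echo' to actually build.
for compiler in gcc intel pgi; do
  case "$compiler" in
    gcc)   cc_opts="CC=gcc CXX=g++ F77=gfortran FC=gfortran" ;;
    intel) cc_opts="CC=icc CXX=icpc F77=ifort FC=ifort" ;;
    pgi)   cc_opts="CC=pgcc CXX=pgCC F77=pgf77 FC=pgf90" ;;
  esac
  echo rpmbuild -bb \
    --define "install_in_opt 1" \
    --define "install_modulefile 1" \
    --define "build_all_in_one_rpm 0" \
    --define "configure_options --with-tm=/opt/torque $cc_opts" \
    openmpi.spec
done
```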
I am currently concerned with differentiating the same version compiled
with different compilers.  I originally changed the name (--define
'_name openmpi-gcc'), but this broke the final phases of rpm building:
 RPM build errors:
File not found:
/var/tmp/openmpi-gcc-1.2.4-1-root/opt/openmpi-gcc/1.2.4/share/openmpi-gcc
I tried changing the version with "gcc" appended, but that also broke,
and as I thought about it more, I suspected it would cause headaches
later, since rpm only allows one version of a package to be installed, etc.

3) Will the resulting -runtime .rpms (for the different compiler
versions) coexist peacefully without any special environment munging
on the compute nodes, or do I need modules, etc. on all the compute
nodes as well?

4) I've never really used pgi or intel's compiler.  I saw notes in the
rpm about build flag problems and "use your normal optimizations and
flags", etc.  As I have no concept of "normal" for these compilers,
are there any guides or examples I should/could use for this?

And of course, I'd be grateful for any hints/tricks/etc. that I didn't
ask about, as I probably still don't fully know what I'm getting into
here; there are a lot of firsts for me.

Thanks!
--Jim


[OMPI users] orte in persistent mode

2007-12-31 Thread Neeraj Chourasia
Dear All,
    I am wondering if ORTE can be run in persistent mode. This has
already been raised on the mailing list
(http://www.open-mpi.org/community/lists/users/2006/03/0939.php), where it
was said that the problem was still there. I just want to know if it's fixed
or being fixed. The reason I am looking at this is that on large clusters,
mpirun takes a lot of time starting orted (via ssh) on the remote nodes. If
orte is already running, hopefully we can save considerable time. Any
comments are appreciated. -Neeraj


Re: [OMPI users] No output from mpirun

2007-12-31 Thread Varun R
Yes, the 'mpirun' is the one from Open MPI. And btw, mpich worked perfectly
for me; it's only ompi that's giving me these problems. Do I have to set up
ssh or something? Because I remember doing that for mpich.

On Dec 31, 2007 4:15 AM, Reuti  wrote:

> Hi,
>
> Am 26.12.2007 um 10:08 schrieb Varun R:
>
> > I just installed Openmpi 1.2.2 on my new openSUSE 10.3 system. All
> > my programs(C++) compile well with 'mpic++' but when I run them
> > with 'mpirun' i get no output and I immediately get back the
> > prompt. I tried the options '--verbose' and got nothing. When I
> > tried '--debug-daemons' I get the following output:
> >
> > Daemon [0,0,1] checking in as pid 6308 on host suse-nigen
> > [suse-nigen:06308] [0,0,1] orted: received launch callback
> > [suse-nigen:06308] [0,0,1] orted_recv_pls: received message from
> > [0,0,0]
> > [suse-nigen:06308] [0,0,1] orted_recv_pls: received exit
> >
> >
> > Also when I simply run the executable without mpirun it gives the
> > right output. I also tried inserting a long 'for' loop in the
> > program to check if it's getting executed at all and as I suspected
> > mpirun still returns immediately to the prompt. Here's my program:
>
> is the mpirun the one from Open MPI?
>
> -- Reuti
>
> > #include <iostream>
> > #include <mpi.h>
> >
> > using namespace std;
> >
> > int main(int argc, char* argv[])
> > {
> > int rank, nproc;
> > cout<<"Before"<<endl;
> >
> > MPI_Init(&argc, &argv);
> > MPI_Comm_rank(MPI_COMM_WORLD, &rank);
> > MPI_Comm_size(MPI_COMM_WORLD, &nproc);
> > cout<<"Middle"<<endl;
> > MPI_Finalize();
> >
> > int a = 5;
> > for(int i = 0; i < 10; i++)
> > for(int j = 0; j < 1; j++)
> > a += 4;
> >
> > if(rank == 0)
> > cout<<"Rank 0"<<endl;
> >
> > cout<<"Over"<<endl;
> >
> > return 0;
> > }
> >
> > I also tried version 1.2.4 but still no luck. Could someone please
> > tell me what could be wrong here?
> >
> > Thanks,
> > Varun
> >
> >
> > ___
> > users mailing list
> > us...@open-mpi.org
> > http://www.open-mpi.org/mailman/listinfo.cgi/users
>


Re: [OMPI users] multi-compiler builds of OpenMPI (RPM)

2007-12-31 Thread pat.o'bryant
Jim,
I would start with this site: http://www.rpm.org/max-rpm/. This site
gives a really good explanation of building packages with rpm. I built my
own spec file, which gave me a better understanding of how RPMs work.

What I did with my installation was to set up a spec file that builds
packages for both the GNU and Intel compilers. Chapter 18 at the "max-rpm"
site explains how to do this. So here is the general layout of my spec file
for OpenMPI and two compilers. Note that the end result is one high-level
directory with two subdirectories, one per compiler. The only downside to
this approach is that "LD_LIBRARY_PATH" must be set for both building and
running OpenMPI jobs.

Hope this helps,
Pat

%package intel
Summary: OpenMPI, Intel
Group: MPI software
# Disable check for dependencies; without this option, an "rpm -i ..."
# leads to lots of unsatisfied "Requires: ..." errors
AutoReqProv: no
.

%package gnu
Summary: OpenMPI, GNU
Group: MPI software
# Disable check for dependencies
AutoReqProv: no
.

# Note: only one %prep section is allowed
%prep
rm -rf $RPM_BUILD_DIR/openmpi-1.2.4_intel
rm -rf $RPM_BUILD_DIR/openmpi-1.2.4_gnu
cd $RPM_BUILD_DIR

# Create the directory for the Intel build; renaming the extracted tree
# allows multiple extractions of the .gz file within the source directory
tar zxvf $RPM_SOURCE_DIR/openmpi-1.2.4.tar.gz
mv openmpi-1.2.4 openmpi-1.2.4_intel

# Create the directory for the GNU build
tar zxvf $RPM_SOURCE_DIR/openmpi-1.2.4.tar.gz
mv openmpi-1.2.4 openmpi-1.2.4_gnu
.
# Build Intel; the upper-level directory is "/usr/local/openmpi-1.2.4"
cd $RPM_BUILD_DIR/openmpi-1.2.4_intel
./configure --prefix=/usr/local/openmpi-1.2.4/intel \
    --with-openib=/usr/local/ofed \
    --with-tm=/usr/local/pbs CC=icc CXX=icpc F77=ifort FC=ifort \
    --with-threads=posix --enable-mpi-threads

# Build GNU
cd $RPM_BUILD_DIR/openmpi-1.2.4_gnu
./configure --prefix=/usr/local/openmpi-1.2.4/gnu \
    --with-openib=/usr/local/ofed \
    --with-tm=/usr/local/pbs \
    --with-threads=posix --enable-mpi-threads

# Note: only one %install section is allowed
%install
# Install Intel (unique subdirectory from the tar extract step above)
rm -rf /usr/local/openmpi-1.2.4/intel
mkdir -p /usr/local/openmpi-1.2.4/intel
cd $RPM_BUILD_DIR/openmpi-1.2.4_intel
make all install

# Install GNU
rm -rf /usr/local/openmpi-1.2.4/gnu
mkdir -p /usr/local/openmpi-1.2.4/gnu
cd $RPM_BUILD_DIR/openmpi-1.2.4_gnu
make all install

# I decided to go with the "kitchen sink", i.e., everything,
# so every directory is listed.
%files intel
%doc $RPM_BUILD_DIR/openmpi-1.2.4_intel/README
/usr/local/openmpi-1.2.4/intel/bin
/usr/local/openmpi-1.2.4/intel/etc
/usr/local/openmpi-1.2.4/intel/include
/usr/local/openmpi-1.2.4/intel/lib
/usr/local/openmpi-1.2.4/intel/share

%files gnu
%doc $RPM_BUILD_DIR/openmpi-1.2.4_gnu/README
/usr/local/openmpi-1.2.4/gnu/bin
/usr/local/openmpi-1.2.4/gnu/etc
/usr/local/openmpi-1.2.4/gnu/include
/usr/local/openmpi-1.2.4/gnu/lib
/usr/local/openmpi-1.2.4/gnu/share
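For users, selecting one of the two installed builds then looks roughly like this (a sketch; the MPI_ROOT prefixes match the --prefix values used above):

```shell
# Sketch: select the GNU build (use .../intel for the Intel one).  The
# prefix matches the --prefix values in the spec file above.
MPI_ROOT=/usr/local/openmpi-1.2.4/gnu
export PATH="$MPI_ROOT/bin:$PATH"
export LD_LIBRARY_PATH="$MPI_ROOT/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
```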


J.W. (Pat) O'Bryant,Jr.
Business Line Infrastructure
Technical Systems, HPC
Office: 713-431-7022





Re: [OMPI users] No output from mpirun

2007-12-31 Thread Amit Kumar Saha
On 12/31/07, Varun R  wrote:
> Yes, the 'mpirun' is the one from OpenMPI. And btw mpich worked perfectly
> for me. It's only ompi that's giving me these problems. Do I have to setup
> ssh or something? Because I remember doing that for mpich.

You have to set up passwordless SSH login on each of the 'remote'
hosts on which you plan to spawn your tasks.
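The usual OpenSSH recipe looks like this (echoed as a dry run; drop the echoes to execute, and substitute your own compute-node names for the placeholders):

```shell
# Standard passwordless-SSH setup, shown as a dry run via 'echo'.
# Node names are placeholders for your compute hosts.
echo 'ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'   # once, on the head node
for node in compute-0-0 compute-0-1; do
  echo "ssh-copy-id $node"                        # install the key on each node
done
```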

HTH,
Amit
-- 
Amit Kumar Saha
Writer, Programmer, Researcher
http://amitsaha.in.googlepages.com
http://amitksaha.blogspot.com


[OMPI users] Can't compile C++ program with extern "C" { #include mpi.h }

2007-12-31 Thread Adam C Powell IV
Greetings,

I'm trying to build the Salomé engineering simulation tool, and am
having trouble compiling with OpenMPI.  The full text of the error is at
http://lyre.mit.edu/~powell/salome-error .  The crux of the problem can
be reproduced by trying to compile a C++ file with:

extern "C"
{
#include "mpi.h"
}

At the end of mpi.h, the C++ headers get loaded while in extern C mode,
and the result is a vast list of errors.

It seems odd that the __cplusplus macro is #defined while in
extern C mode, but that's what seems to be happening.

I see that it should be possible (and verified that it is possible) to
avoid this by using -DOMPI_SKIP_MPICXX.  But something in me says that
shouldn't be necessary, that extern C code should only include C headers
and not C++ ones...

This is using Debian lenny (testing) with gcc 4.2.1-6.

Thanks,
-Adam
-- 
GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Engineering consulting with open source tools
http://www.opennovation.com/



Re: [OMPI users] Can't compile C++ program with extern "C" { #include mpi.h }

2007-12-31 Thread Brian Barrett

On Dec 31, 2007, at 7:12 PM, Adam C Powell IV wrote:


I'm trying to build the Salomé engineering simulation tool, and am
having trouble compiling with OpenMPI.  The full text of the error  
is at
http://lyre.mit.edu/~powell/salome-error .  The crux of the problem  
can

be reproduced by trying to compile a C++ file with:

extern "C"
{
#include "mpi.h"
}

At the end of mpi.h, the C++ headers get loaded while in extern C  
mode,

and the result is a vast list of errors.


Yes, it will.  Similar to other external packages (like system headers),
you absolutely should not include mpi.h from an extern "C" block.  It
will fail, as you've noted.  The proper solution is to not be in an
extern "C" block when including mpi.h.


Brian


--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/





Re: [OMPI users] Can't compile C++ program with extern "C" { #include mpi.h }

2007-12-31 Thread Adam C Powell IV
On Mon, 2007-12-31 at 19:17 -0700, Brian Barrett wrote:
> On Dec 31, 2007, at 7:12 PM, Adam C Powell IV wrote:
> 
> > I'm trying to build the Salomé engineering simulation tool, and am
> > having trouble compiling with OpenMPI.  The full text of the error  
> > is at
> > http://lyre.mit.edu/~powell/salome-error .  The crux of the problem  
> > can
> > be reproduced by trying to compile a C++ file with:
> >
> > extern "C"
> > {
> > #include "mpi.h"
> > }
> >
> > At the end of mpi.h, the C++ headers get loaded while in extern C  
> > mode,
> > and the result is a vast list of errors.
> 
> Yes, it will.  Similar to other external packages (like system  
> headers), you absolutely should not include mpi.h from an extern "C"  
> block.  It will fail, as you've noted.  The proper solution is to not  
> be in an extern "C" block when including mpi.h.

Okay, fair enough for this test example.

But the Salomé case is more complicated:
extern "C"
{
#include <hdf5.h>
}
What to do here?  The hdf5 prototypes must be in an extern "C" block,
but hdf5.h #includes a file which #includes mpi.h...

Thanks for the quick reply!

-Adam
-- 
GPG fingerprint: D54D 1AEE B11C CE9B A02B  C5DD 526F 01E8 564E E4B6

Engineering consulting with open source tools
http://www.opennovation.com/



Re: [OMPI users] Can't compile C++ program with extern "C" { #include mpi.h }

2007-12-31 Thread Brian Barrett

On Dec 31, 2007, at 7:26 PM, Adam C Powell IV wrote:


Okay, fair enough for this test example.

But the Salomé case is more complicated:
extern "C"
{
#include <hdf5.h>
}
What to do here?  The hdf5 prototypes must be in an extern "C" block,
but hdf5.h #includes a file which #includes mpi.h...

Thanks for the quick reply!


Yeah, this is a complicated example, mostly because HDF5 should really
be covering this problem for you.  I think your only option at that
point would be to use the #define to not include the C++ code.

The problem is that the MPI standard *requires* mpi.h to include both
the C and C++ interface declarations if you're using C++.  There's no
way for the preprocessor to determine whether there's a currently
active extern "C" block, so there's really not much we can do.  Best
hope would be to get the HDF5 guys to properly protect their code
from C++...
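Concretely, the #define workaround mentioned above is just this (a fragment, not a complete program; it assumes the Salomé-style build where hdf5.h drags in mpi.h):

```cpp
// Suppress Open MPI's C++ bindings before the extern "C" include, so
// mpi.h stays C-only even when pulled in through hdf5.h.
#define OMPI_SKIP_MPICXX 1
extern "C"
{
#include <hdf5.h>
}
```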



Brian

--
  Brian Barrett
  Open MPI developer
  http://www.open-mpi.org/