Re: [OMPI users] job running question

2006-04-10 Thread Adams Samuel D Contr AFRL/HEDR
I set bash to have unlimited size core files like this: $ ulimit -c unlimited. But it was not dropping core files for some reason when I was running with mpirun. Just to make sure it would do what I expected, I wrote a little C program that was kind of like this: int ptr = 4; fprintf(stderr,"bad
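
A minimal core-dropping test along these lines (a hedged reconstruction; the original program is cut off above) might be:

    #include <stdio.h>

    int main(void)
    {
        int *ptr = (int *) 4;           /* deliberately bogus address */
        fprintf(stderr, "bad pointer, dereferencing...\n");
        *ptr = 42;                      /* invalid write -> SIGSEGV, should drop a core */
        return 0;
    }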

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread David Gunter
After much fiddling around, I managed to create a version of open-mpi that would actually build. Unfortunately, I can't run the simplest of applications with it. Here's the setup I used: export CC=gcc export CXX=g++ export FC=gfortran export F77=gfortran export CFLAGS="-m32" export CXXFLAGS
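
Spelled out in full, the attempted recipe was presumably something like this (a sketch; the export list above is truncated, and the install prefix here is hypothetical):

    export CC=gcc CXX=g++ FC=gfortran F77=gfortran
    export CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32
    ./configure --prefix=/opt/openmpi-1.0-32bit   # hypothetical prefix
    make all install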

Re: [OMPI users] job running question

2006-04-10 Thread Pavel Shamis (Pasha)
mpirun opens a separate shell on each machine/node, so the "ulimit" setting will not be available in the new shell. I think if you add "ulimit -c unlimited" to your default shell configuration file (~/.bashrc for BASH, ~/.tcshrc for TCSH/CSH) you will find your core files :) Regards, Pavel S
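
For example, appending the setting to ~/.bashrc so the non-interactive shells that mpirun spawns pick it up:

    # ~/.bashrc -- read by the shells mpirun starts on each node
    ulimit -c unlimited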

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread Brian Barrett
On Apr 10, 2006, at 9:43 AM, David Gunter wrote: After much fiddling around, I managed to create a version of open-mpi that would actually build. Unfortunately, I can't run the simplest of applications with it. Here's the setup I used: export CC=gcc export CXX=g++ export FC=gfortran export F7

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread Ralph Castain
I'm not an expert on the configure system, but one thing jumps out at me immediately - you used "gcc" to compile your program. You really need to use "mpicc" to do so. I think that might be the source of your errors. Ralph David Gunter wrote: After much fiddling around, I managed to crea
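
In other words (source and program names hypothetical):

    mpicc -o my_app my_app.c    # wrapper adds Open MPI's include and library flags
    mpirun -np 4 ./my_app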

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread David Gunter
I've attached the config.log and configure output files. The OS on the machine is (flashc 119%) cat /etc/redhat-release Red Hat Linux release 9 (Shrike) (flashc 120%) uname -a Linux flashc.lanl.gov 2.4.24-cm32lnxi6plsd2pcsmp #1 SMP Thu Mar 10 15:27:12 MST 2005 i686 athlon i386 GNU/Linux -

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread David Gunter
The problem with doing it that way is that it disallows our in-house code teams from using their compilers of choice. Prior to open-mpi we had been using LA-MPI. LA-MPI has always been compiled in such a way that it wouldn't matter what other compilers were used to build mpi applications p

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread Brian Barrett
For Linux, this isn't too big of a problem, but you might want to take a look at the output of "mpicc -showme" to get an idea of what compiler flags / libraries would be added if you used the wrapper compilers. I think for Linux the only one that might at all matter is -pthread. But I di
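
For example (the exact output varies by installation; the install prefix shown here is hypothetical):

    $ mpicc -showme
    gcc -I/opt/openmpi-32/include -pthread -L/opt/openmpi-32/lib -lmpi -lorte -lopal -ldl -lm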

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread Jeff Squyres (jsquyres)
FWIW, on an Indiana University Opteron system running RHEL4, I was able to compile Open MPI v1.0.2 in 32 bit mode with: ./configure --prefix=/u/jsquyres/x86_64-unknown-linux-gnu/bogus CFLAGS=-m32 CXXFLAGS=-m32 FFLAGS=-m32 FCFLAGS=-m32 I then successfully built and ran an MPI executable with: she

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread David Gunter
Here are the results using mpicc: (ffe-64 153%) mpicc -o send4 send4.c /usr/bin/ld: skipping incompatible /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/libmpi.so when searching for -lmpi /usr/bin/ld: cannot find -lmpi collect2: ld returned 1 exit status (ffe-64 154%) mpicc -showme g

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread David Gunter
This is what I have just discovered - mpicc didn't have -m32 in it. Thanks for the other info (config list)! -david On Apr 10, 2006, at 8:56 AM, Jeff Squyres (jsquyres) wrote: The extra "-m32" was necessary because the wrapper compiler did not include the CFLAGS from the configure line (we
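
Until the wrappers carry the flag themselves, it can be passed explicitly, since mpicc forwards unrecognized options to the underlying compiler:

    mpicc -m32 -o send4 send4.c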

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread Jeff Squyres (jsquyres)
FWIW, I know that the LANL developers on the Open MPI team tend to build Open MPI statically on LANL Bproc systems for almost exactly this reason (i.e., shared libraries are not always transparently moved to the back-end nodes when executables run out there). You can build Open MPI statically like
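
A sketch of such a static build (install prefix hypothetical):

    ./configure --prefix=/opt/openmpi-static --enable-static --disable-shared
    make all install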

Re: [OMPI users] Building 32-bit OpenMPI package for 64-bit Opteron platform

2006-04-10 Thread Brian Barrett
On Apr 10, 2006, at 11:07 AM, David Gunter wrote: (flashc 105%) mpiexec -n 4 ./send4 [flashc.lanl.gov:09921] mca: base: component_find: unable to open: /lib/libc.so.6: version `GLIBC_2.3.4' not found (required by /net/scratch1/dog/flash64/openmpi/openmpi-1.0.2-32b/lib/openmpi/mca_paffinity_li

[OMPI users] Building OpenMPI on OS X Tiger with gcc-3.3

2006-04-10 Thread Charles Williams
Hi, I have been attempting to build OpenMPI on my Mac, using the older gcc-3.3 compiler using rc2r9567. Things proceed for a while, and then I get: Making all in xgrid /Users/willic3/build/openmpi-buildgcc3.3/orte/dynamic-mca/pls/xgrid depbase=`echo src/pls_xgrid_component.lo | sed 's|[^/]

[OMPI users] Funny ./configure option

2006-04-10 Thread Troy Telford
This isn't a problem for me; it's more of a bug report. When running ./configure --help (on Open MPI 1.0.2), I found the following source of amusement: --enable-smp-locks disable smp locks in atomic ops (default: enabled) It occurred to me that this may cause a bit of confusion... W

Re: [OMPI users] Funny ./configure option

2006-04-10 Thread Brian Barrett
On Apr 10, 2006, at 11:39 AM, Troy Telford wrote: This isn't a problem for me; it's more of a bug report. When running ./configure --help (on Open MPI 1.0.2), I found the following source of amusement: --enable-smp-locks disable smp locks in atomic ops (default: enabled) It occurred

Re: [OMPI users] any checkpoint/restart function in Open-MPI?

2006-04-10 Thread Josh Hursey
There is currently no support for checkpoint/restart functionality in Open MPI. However, it is under active development and will support the functionality that LAM/MPI supported, and the BLCR checkpointer. --Josh On Apr 7, 2006, at 12:58 PM, Mars Lenjoy wrote: just like the BLCR in LAM/MPI.

Re: [OMPI users] job running question

2006-04-10 Thread Adams Samuel D Contr AFRL/HEDR
I put it in /etc/bashrc and opened a new shell, but I still am not seeing any core files. Sam Adams General Dynamics - Network Systems Phone: 210.536.5945 -Original Message- From: users-boun...@open-mpi.org [mailto:users-boun...@open-mpi.org] On Behalf Of Pavel Shamis (Pasha) Sent: Monday,

Re: [OMPI users] job running question

2006-04-10 Thread Jeff Squyres (jsquyres)
Did you put this in /etc/bashrc on all nodes in question? It is usually easier to modify your own personal startup files, such as $HOME/.bashrc, etc. See the OMPI FAQ if you need help picking the right shell startup file for your environment. You might want to modify your shell startup files an

Re: [OMPI users] job running question

2006-04-10 Thread Michael Kluskens
You need to confirm that /etc/bashrc is actually being read in that environment; bash is a little different about which files get read depending on whether you log in interactively or not. Also, I don't think ~/.bashrc is read on a non-interactive login. Michael On Apr 10, 2006, at 1:06 PM, Adam
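
One quick sanity check (host name hypothetical) is to see what limit a non-interactive remote shell actually inherits:

    ssh node01 'ulimit -c'    # should print "unlimited" if the startup file was read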

[OMPI users] Building OpenMPI on OS X Tiger with gcc-3.3

2006-04-10 Thread Warner Yuen
Hi Charles, I've only ever seen that error when trying to build OpenMPI with the IBM XLC compilers on Tiger. I just now successfully configured and built OpenMPI-1.0.2 using gcc 3.3 build 1819 and IBM XLF. ./configure --disable-mpi-f90 --prefix=/hpc/mpis/ompi102f77 Please note that I can a

[OMPI users] Building OMPI-1.0.2 on OS X v10.3.9 with IBM XLC + XLF

2006-04-10 Thread Warner Yuen
I'm running Mac OS X v10.3.9 (Panther) and tried to get OpenMPI to compile with IBM XLC and XLF. The compilation failed; any ideas what might be going wrong? I used the following settings: export CC=/opt/ibmcmp/vacpp/6.0/bin/xlc export CXX=/opt/ibmcmp/vacpp/6.0/bin/xlc++ export CFLAGS="-O3" ex

Re: [OMPI users] Building OpenMPI on OS X Tiger with gcc-3.3

2006-04-10 Thread Charles Williams
I just tried this again (with the 1.0.2 version just released), but without explicitly setting any variables other than F77, and it worked. The only problem with this approach is that I had previously set CC=gcc-3.3, etc., so that my mpicc would have gcc-3.3 even if that was not my default

Re: [OMPI users] Building OMPI-1.0.2 on OS X v10.3.9 with IBM XLC + XLF

2006-04-10 Thread David Daniel
Perhaps this is a bug in xlc++. Maybe this one... http://www-1.ibm.com/support/docview.wss?uid=swg1IY78555 My (untested) guess is that removing the const_cast will allow it to compile, i.e. in ompi/mpi/cxx/group_inln.h replace const_cast(ranges) with ranges. David On Apr 10,

[OMPI users] ORTE errors

2006-04-10 Thread Michael Kluskens
The ORTE errors again, these are new and different errors. Tested as of OpenMPI 1.1a1r9596. [host:10198] [0,0,0] ORTE_ERROR_LOG: Not found in file base/soh_base_get_proc_soh.c at line 80 [host:10198] [0,0,0] ORTE_ERROR_LOG: Not found in file base/oob_base_xcast.c at line 108 [host:10198]

Re: [OMPI users] ORTE errors

2006-04-10 Thread Ralph Castain
Was this the only output you received? If so, then it looks like your parent process never gets to spawn and bcast - you should have seen your write statements first, yes? Ralph Michael Kluskens wrote: The ORTE errors again, these are new and different errors. Tested as of OpenMPI 1.1a1r95

[OMPI users] Incorrect behavior for attributes attached to MPI_COMM_SELF.

2006-04-10 Thread Audet, Martin
Hi, It looks like there is a problem in OpenMPI 1.0.2 with how MPI_COMM_SELF attribute callback functions are handled by MPI_Finalize(). The following C program registers a callback function associated with the MPI_COMM_SELF communicator to be called during the first steps of MPI_Finalize(). A
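
The pattern under test is presumably along these lines (a minimal sketch, not the poster's actual program, which is not reproduced above); per MPI-2, MPI_Finalize behaves as if MPI_COMM_SELF were freed first, so the delete callback should fire:

    #include <stdio.h>
    #include <mpi.h>

    /* Delete callback: should be invoked near the start of MPI_Finalize(). */
    static int delete_fn(MPI_Comm comm, int keyval, void *attr_val, void *extra_state)
    {
        fprintf(stderr, "MPI_COMM_SELF delete callback invoked\n");
        return MPI_SUCCESS;
    }

    int main(int argc, char **argv)
    {
        int keyval;
        MPI_Init(&argc, &argv);
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, delete_fn, &keyval, NULL);
        MPI_Comm_set_attr(MPI_COMM_SELF, keyval, NULL);
        MPI_Finalize();   /* should trigger delete_fn before teardown */
        return 0;
    }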

[OMPI users] Error while loading shared libraries

2006-04-10 Thread Aniruddha Shet
Hi, I have built OpenMPI using the ifort and icc Intel compilers with the --enable-static --disable-shared options. I compile my job using the OpenMPI wrapper compilers, additionally with the -static option. When I run the job, I get the error 'orted: error while loading shared libraries: libcprts.so.5: canno

[OMPI users] MPI Jobs Hang on OS X XServe Cluster

2006-04-10 Thread Lee D. Peterson
Dear OpenMPI, I'm transitioning from LAM-MPI to OpenMPI and have just compiled OMPI 1.0.2 on OS X server 10.4.6. I'm using gcc 3.3 and XLF (both f77 and f90), and I'm using ssh to run the jobs. My cluster is all G5 dual 2GHz+ xserves, and I am using both ethernet ports for communication.