On Oct 12, 2006, at 4:14 PM, Warner Yuen wrote:
I've just built BLACS using the latest beta: openmpi-1.1.2rc4 as
well as openmpi-1.1.1.
1.1.2rc4 should be fine; however, I don't think the relevant fixes made
it into 1.1.1, so it should fail on some or all platforms.
I am getting the following…
Hello OMPIers...
I know there have been a lot of emails going back and forth regarding
BLACS recently, but I've ignored most of them until now. Why? Because
now I'm trying to build and test it... so it matters to me. ;-) Anyway,
I've just built BLACS using the latest beta: openmpi-1.1.2rc4…
I recently installed OpenMPI 1.1.1 on an Apple Xserve cluster. Xgrid
is also set up and installed (with the latest updates). I am able to
run MPI jobs without Xgrid and control the spawning of processes on
particular nodes. I am also able to run jobs through Xgrid (using
mpirun). My issue…
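For anyone reproducing this setup: Open MPI's Xgrid launcher is
selected through two environment variables. A minimal sketch, assuming
a csh-style shell; the controller hostname and password are
placeholders:

    # tell Open MPI which Xgrid controller to use (values are placeholders)
    setenv XGRID_CONTROLLER_HOSTNAME xserve-head.local
    setenv XGRID_CONTROLLER_PASSWORD mypassword
    # mpirun then hands the launch to Xgrid instead of rsh/ssh
    mpirun -np 4 ./a.out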
On Oct 12, 2006, at 8:23 AM, amane001 wrote:
Thanks for your reply. I actually meant OpenMPI…
On 12.10.2006, at 09:52, amane001 wrote:
> the code below. Even if I set OMP_NUM_THREADS = 2, the print…
> setenv OMP_NUM_THREADS 2
These are OpenMP environment variables, not Open MPI ones.
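To make the distinction concrete: OMP_NUM_THREADS tells an
OpenMP-compiled binary how many threads to spawn inside one process,
while the number of Open MPI processes is chosen on the mpirun command
line. A minimal sketch (test.exe and the counts are placeholders):

    # OpenMP: one process that spawns 2 threads
    setenv OMP_NUM_THREADS 2
    ./test.exe

    # Open MPI: 2 independent processes, launched by mpirun
    mpirun -np 2 ./test.exe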
On Oct 11, 2006, at 10:38 AM, Lisandro Dalcin wrote:
On 10/11/06, Jeff Squyres wrote:
Open MPI v1.1.1 requires that you set your LD_LIBRARY_PATH to include
the directory where its libraries were installed (typically,
$prefix/lib). Or, you can use mpirun's --prefix functionality to
avoid…
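For example, a sketch assuming a Bourne-style shell and a hypothetical
install prefix of /opt/openmpi:

    # option 1: export the library path yourself
    export LD_LIBRARY_PATH=/opt/openmpi/lib:$LD_LIBRARY_PATH
    mpirun -np 2 ./a.out

    # option 2: let mpirun add the paths on every node
    mpirun --prefix /opt/openmpi -np 2 ./a.out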
Hi,
I ran the program with the debug flag (-d) and got the following
(hopefully it helps)...
Thanks,
Matt
[master:32399] [0,0,0] setting up session dir with
[master:32399] universe default-universe
[master:32399] user cuppm
[master:32399] host master
[master:32399] jobid 0
[master:32399] p…
Thanks for your reply. I actually meant OpenMPI (from open-mpi.org) and I
have compiled that using
./configure FC=ifort F77=ifort F90=ifort --prefix=$OPENMP_DIR
I may be asking some dumb questions here, but I'm really a beginner,
so please bear with me.
~Amane
On 10/12/06, Reuti wrote:
Hi,
On 12.10.2006, at 09:52, amane001 wrote:
Hello all,
I recently switched to OpenMP from LAM-MPI for my code. I'm trying
to run my test code with a PBS scheduler on our local cluster. The
PBS script is shown below. When the job is executed, however, only
one CPU is used for running the…
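A quick sanity check for a build like the one above: ompi_info prints
the compilers Open MPI was configured with, so you can confirm that
ifort was actually picked up (the grep is just a convenience):

    ompi_info | grep -i compiler

Then compile the test code with the wrapper (e.g. mpif90 test.f90 -o
test.exe; the names are placeholders) rather than invoking ifort
directly, so the right MPI libraries get linked in.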
Hello all,
I recently switched to OpenMP from LAM-MPI for my code. I'm trying to run my
test code with a PBS scheduler on our local cluster. The PBS script is shown
below. When the job is executed, however, only one CPU is used for running
test.exe. Another, more confusing, aspect is the fact th…
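Since the script itself was cut off in the digest, here is a minimal
sketch of running Open MPI under PBS. The resource counts and the
executable name are placeholders, and it assumes Open MPI was built
with PBS/TM support, so mpirun discovers the allocated nodes on its
own:

    #!/bin/sh
    #PBS -l nodes=2:ppn=2
    #PBS -l walltime=00:10:00
    cd $PBS_O_WORKDIR
    # launch 4 MPI processes across the PBS allocation
    mpirun -np 4 ./test.exe

Without TM support, pass the allocation explicitly: mpirun -np 4
-machinefile $PBS_NODEFILE ./test.exe. Running test.exe without mpirun
at all is one common way to end up using only a single CPU.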