> I think you missed Matt's point -- he was suggesting writing a single
> script that just reacts accordingly to which host it is on and sets
> the environment variable before launching your back-end MPI executable.
I got it, but I would like to be able to do it without creating/copying new
script o
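For reference, the single-script approach described in the quote above could look something like the following sketch. The hostnames, variable name, and back-end binary are placeholders, not names from this thread:

```shell
#!/bin/sh
# Hypothetical wrapper: pick an environment-variable value based on the
# host this copy runs on, then launch the real MPI back-end executable.
# All names here (host_1, MY_ENV_VAR, ./backend) are assumptions.
HOST=`hostname`
case "$HOST" in
  host_1) MY_ENV_VAR=value_for_host_1 ;;
  host_2) MY_ENV_VAR=value_for_host_2 ;;
  *)      MY_ENV_VAR=default_value    ;;
esac
export MY_ENV_VAR
echo "host $HOST: MY_ENV_VAR=$MY_ENV_VAR"
# In a real wrapper, the echo would be replaced with something like:
#   exec ./backend "$@"
```

mpirun would then launch this wrapper on every node instead of launching the back-end binary directly.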
Hi
I don't understand why it is a problem to copy a single script to your nodes --
wouldn't the following shell script work?
#!/bin/sh
for num in `seq 1 128`
do
  scp new_script "username@host_$num:path/to/workdir/"
done
jody
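If the host naming or the target path is in doubt, the same loop can be sanity-checked as a dry run first, printing each command instead of executing it (host names and path are taken from the script above):

```shell
#!/bin/sh
# Dry run of the distribution loop: print each scp command rather than
# executing it; drop the leading echo to actually perform the copies.
for num in `seq 1 128`
do
  echo scp new_script "username@host_$num:path/to/workdir/"
done
```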
On Mon, Mar 2, 2009 at 10:02 AM, Nicolas Deladerriere wrote:
On Mar 2, 2009, at 4:02 AM, Nicolas Deladerriere wrote:
> I think you missed Matt's point -- he was suggesting writing a single
> script that just reacts accordingly to which host it is on and sets
> the environment variable before launching your back-end MPI executable.
I got it, but I wou
I'm pretty sure that this particular VT compile issue has already been
fixed in the 1.3 series.
Lenny -- can you try the latest OMPI 1.3.1 nightly tarball to verify?
On Mar 1, 2009, at 4:54 PM, Lenny Verkhovsky wrote:
We saw the same problem with compilation;
the workaround for us was conf
On Mar 2, 2009, at 8:41 AM, Jeff Squyres wrote:
On Mar 2, 2009, at 4:02 AM, Nicolas Deladerriere wrote:
> I think you missed Matt's point -- he was suggesting writing a
> single script that just reacts accordingly to which host it is on
> and sets the environment variable before launching yo
Hi,
Has anyone had success building Open MPI with the 64-bit Lahey Fortran
compiler? I have seen a previous thread about the problems with 1.2.6
and am wondering whether any progress has been made since then.
I can build individual libraries by removing -rpath and -soname, and by
compiling the respective obj
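A small helper along the lines described, stripping `-rpath`/`-soname` pairs from a link line, might look like this sketch. The function name and flag handling are assumptions; real link lines may also use the combined `-Wl,-rpath,...` form, which this sketch does not handle:

```shell
#!/bin/sh
# Hypothetical filter: remove "-rpath <dir>" and "-soname <name>" pairs
# from an argument list and print whatever remains.
filter_link_args () {
  kept=""
  skip=0
  for arg in "$@"
  do
    # If the previous argument was -rpath or -soname, consume its value.
    if [ "$skip" -eq 1 ]; then skip=0; continue; fi
    case "$arg" in
      -rpath|-soname) skip=1 ;;     # drop the flag and its argument
      *) kept="$kept $arg" ;;
    esac
  done
  echo "$kept"
}

filter_link_args -rpath /usr/lib -soname libfoo.so.1 -o libfoo.so foo.o
```

A wrapper like this could sit between libtool and the real linker while experimenting with the Lahey build.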