Jason,

two other lesser-known wrappers are available: the orte_launch_agent and
orte_fork_agent MCA parameters, both passed on the mpirun command line.
With orte_launch_agent, instead of "exec orted", the remote node will exec
the launch agent you supply (which is expected to exec orted at some point).
With orte_fork_agent, instead of "exec a.out", orted will
"exec <fork_agent> a.out".

If I understand correctly the issue you are trying to solve, you might
simply mpirun from /tmp (assuming /tmp is available on all the nodes).
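For example (a sketch only; the wrapper paths are placeholders, and each
wrapper script is expected to end by exec'ing the command line it was handed):

  # the remote node execs the launch agent instead of orted; the agent can
  # cd into the node-local sandbox and then exec orted with its arguments
  mpirun --mca orte_launch_agent /path/to/launch_wrapper.sh -np 2 a.out

  # orted execs "fork_wrapper.sh a.out" instead of "a.out"; the wrapper can
  # cd into the node-local sandbox before exec'ing a.out
  mpirun --mca orte_fork_agent /path/to/fork_wrapper.sh -np 2 a.out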
On Nov 28, 2016, at 1:04 PM, Jason Patton wrote:
Passing --wdir to mpirun does not solve this particular case, I
believe. HTCondor sets up each worker slot with a uniquely named
sandbox, e.g. a 2-process job might have the user's executable copied
to /var/lib/condor/execute/dir_11955 on one machine and
/var/lib/condor/execute/dir_3484 on another.
On Nov 28, 2016, at 12:16 PM, Jason Patton wrote:
We do assume that Open MPI is installed in the same location on all
execute nodes, and we set that by passing --prefix $OPEN_MPI_DIR to
mpirun. The ssh wrapper script still tells ssh to execute the PATH,
LD_LIBRARY_PATH, etc. definitions that mpirun feeds it. However, the
location of the mpicc-compiled executable is different on each execute node.
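For reference, the mpirun invocation looks roughly like this (a sketch; the
process count and executable name are placeholders):

  # --prefix takes care of the Open MPI paths on the remote nodes, but it
  # says nothing about where the application binary itself lives
  mpirun --prefix "$OPEN_MPI_DIR" -np 2 ./mpitest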
I'm not sure I understand your solution -- it sounds like you are overriding
$HOME for each process...? If so, that's playing with fire.
Is there a reason you can't set PATH / LD_LIBRARY_PATH in your ssh wrapper
script to point to the Open MPI installation that you want to use on each node?
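Something like this in the wrapper might do it (a sketch only; it assumes the
wrapper is invoked as "<wrapper> <host> <orted command ...>", and the Open MPI
path is a placeholder):

  #!/bin/sh
  # prepend the per-node Open MPI paths before running the orted command
  # line that mpirun handed to the wrapper; the \$ escapes make PATH and
  # LD_LIBRARY_PATH expand on the remote node rather than locally
  host=$1; shift
  exec ssh "$host" "export PATH=/path/to/openmpi/bin:\$PATH ; export LD_LIBRARY_PATH=/path/to/openmpi/lib:\$LD_LIBRARY_PATH ; $*"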
I think I may have solved this, in case anyone is curious or wants to
yell about how terrible it is :). In the ssh wrapper script, when
ssh-ing, before launching orted:
export HOME=${your_working_directory} \;
(If $HOME means something for your jobs, then maybe this isn't a good solution.)
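In the wrapper that ends up composed in front of the orted command line,
something like (a sketch; $1 is assumed to be the remote host, the remaining
arguments the orted command line, and ${your_working_directory} the sandbox
path on that host):

  host=$1; shift
  exec ssh "$host" "export HOME=${your_working_directory} ; $*"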
I would like to mpirun across nodes that do not share a filesystem and
might have the executable in different directories. For example, node0
has the executable at /tmp/job42/mpitest and node1 has it at
/tmp/job100/mpitest.
If you can grant me that I have an ssh wrapper script (that gets set as
the