Thanks, Ralph.

> Let me get this straight. You are executing mpirun from inside a c-shell
> script, launching onto nodes where you will by default be running bash. The
> param I gave you should support that mode - it basically tells OMPI to probe
> the remote node to discover what shell it will run under there, and then
> adjust its launch command to that shell.
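
For reference, one quick way to check whether this knob is present in a given
build (the grep pattern is only a filter, not part of Open MPI):

    # List the rsh/ssh launcher's MCA parameters and look for the
    # shell-probe knob described above.
    ompi_info --param plm rsh | grep assume_same_shell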

Thanks, Ralph.

Your information is very deep and detailed. I tried your suggestion to set
"-mca plm_rsh_assume_same_shell 0", but it still does not work. My situation
is that we start a c-shell script from a bash shell, which in turn invokes
mpirun onto the other slave nodes. These slave nodes run bash by default.
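
A minimal sketch of the setup being described, with hypothetical names
(run.csh, hosts, ./solver) standing in for the real files:

    #!/bin/csh
    # run.csh - C-shell wrapper, itself started from a bash login shell.
    # plm_rsh_assume_same_shell 0 tells Open MPI not to assume the remote
    # login shell matches the local one; it probes each slave node instead.
    mpirun -mca plm_rsh_assume_same_shell 0 \
           -np 16 -hostfile hosts ./solver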

On Jun 28, 2011, at 3:52 PM, ya...@adina.com wrote:

> Thanks, Ralph!
>
> a) Yes, I know I could use only IB via "--mca btl openib", but I just
> want to make sure I am using the IB interfaces. I am seeking an option
> to mpirun to print out the actual interconnect protocol, like --prot for
> mpirun in MPICH2.

Thanks, Ralph!

a) Yes, I know I could use only IB via "--mca btl openib", but I just
want to make sure I am using the IB interfaces. I am seeking an option
to mpirun to print out the actual interconnect protocol, like --prot for
mpirun in MPICH2.

b) Yes, my default shell is bash, but I run a c-shell script, which in
turn invokes mpirun onto the slave nodes.
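
Open MPI 1.4.x has no direct equivalent of MPICH2's --prot, but two common
approximations exist, sketched here with a placeholder executable ./app:

    # Restrict the point-to-point layer to InfiniBand; "self" is required
    # for loopback. The job fails outright instead of silently using TCP.
    mpirun --mca btl openib,self -np 8 ./app

    # Alternatively, raise BTL verbosity so component selection is logged
    # at startup, showing which transport each process picked.
    mpirun --mca btl_base_verbose 30 -np 8 ./app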

On Jun 28, 2011, at 9:05 AM, ya...@adina.com wrote:

> Hello All,
>
> I installed Open MPI 1.4.3 on our new HPC blades, with InfiniBand
> interconnection.
>
> My system environment is as follows:
>
> 1) uname -a output:
> Linux gulftown 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT
> 2010 x86_64 x86_64 x86_64 GNU/Linux

Hello All,

I installed Open MPI 1.4.3 on our new HPC blades, with InfiniBand
interconnection.

My system environment is as follows:

1) uname -a output:
Linux gulftown 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux

2) /home is mounted over all nodes, and mpirun is accessible from all of them.
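
Assuming the install lives on the shared /home mount, a quick sanity check
that every node resolves the same copy (the hostfile name "hosts" is a
placeholder):

    # Each node should report the same mpirun path if the shared
    # /home install is the one being picked up everywhere.
    mpirun -np 4 -hostfile hosts sh -c 'hostname; which mpirun'

    # And locally, confirm which version that mpirun belongs to.
    ompi_info | grep "Open MPI:"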