Hah! Your reply came in seconds after I replied.
Your questions made me notice that we're missing a FAQ entry for the
"ssh:rsh" explanation, though, so I'll add an entry for that. Thanks.
On May 18, 2007, at 5:15 PM, Steven Truong wrote:
Hi, Jeff. Ok. After reading through the FAQ, I m
On May 18, 2007, at 5:01 PM, Steven Truong wrote:
So my shell might have exited when it detected that I was running
non-interactively. But then again, how does this parameter
MCA pls: parameter "pls_rsh_agent" (current value: "ssh :rsh")
affect my outcome?
It means that OMPI is going to first look for ssh,
Hi, Jeff. Ok. After reading through the FAQ, I modified .bashrc to
set PATH and LD_LIBRARY_PATH and now I could execute:
[struong@neptune ~]$ ssh node07 which orted
/usr/local/openmpi-1.2.1/bin/orted
[struong@neptune ~]$ /usr/local/openmpi-1.2.1/bin/mpirun --host node07 hostname
node07.nanostel
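The fix described above can be sketched as a couple of lines in ~/.bashrc. The prefix /usr/local/openmpi-1.2.1 is the install path used in this thread; adjust for your site. Note that the lines must run for non-interactive shells too, i.e. before any "exit if not interactive" guard near the top of .bashrc:

```shell
# Append to ~/.bashrc on every node, ahead of any non-interactive guard,
# so that commands run via "ssh node07 ..." also see it.
# Prefix taken from this thread; adjust for your installation.
OMPI_PREFIX=/usr/local/openmpi-1.2.1
export PATH="$OMPI_PREFIX/bin:$PATH"
export LD_LIBRARY_PATH="$OMPI_PREFIX/lib:$LD_LIBRARY_PATH"

# Quick self-check: orted's directory should now be on PATH.
case ":$PATH:" in
  *":$OMPI_PREFIX/bin:"*) echo "PATH OK" ;;
  *)                      echo "PATH missing OMPI" ;;
esac
```

Running `ssh node07 which orted` afterwards (as above) is the real test that the non-interactive path is taken.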
Hi, Jeff. Thanks so very much for all your help so far. I decided
that I needed to go back and check whether openmpi even works for
simple cases, so here I am.
On May 18, 2007, at 4:38 PM, Steven Truong wrote:
[struong@neptune 4cpu4npar10nsim]$ mpirun --mca btl tcp,self -np 1
--host node07 hostname
bash: orted: command not found
As you noted later in your mail, this is the key problem: orted is
not found on the remote node.
Notice that you are c
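When the remote PATH can't easily be fixed, mpirun's --prefix option is another way out, assuming Open MPI is installed at the same prefix on every node (the prefix and hostname below are the ones from this thread):

```shell
# --prefix makes mpirun construct the remote orted path itself, so the
# remote shell's PATH and LD_LIBRARY_PATH no longer matter. Requires the
# same install prefix on every node; node07 is the host from this thread.
mpirun --prefix /usr/local/openmpi-1.2.1 --mca btl tcp,self -np 1 \
       --host node07 hostname
```

This is a sketch of the workaround, not a substitute for setting up .bashrc properly on the nodes.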
Hi, all. Once again, I am very frustrated with what I have run into so far.
My system is CentOS 4.4 x86_64, ifort 9.1.043, torque, maui.
I configured openmpi 1.2.1 with this command.
./configure --prefix=/usr/local/openmpi-1.2.1
--with-tm=/usr/local/pbs --enable-static
Now I just tried to run
On May 18, 2007, at 5:25 PM, Adrian Knoth wrote:
If you don't want to parse dynamic ports or you don't want to lower your
MPI performance due to --enable-debug, you can easily change the code to
use a static port:
As the Linux kernel needs some time before completely cleaning up the
sock
On Sat, May 19, 2007 at 08:36:50AM +1200, Code Master wrote:
> Suppose I want to capture any packets for my openmpi program; if I
> can't filter packets by ports, then how can the sniffer tell which packets
> are from/to any processes of my openmpi program?
You first have to distinguish between
Keep in mind that there are two kinds of TCP traffic that OMPI uses:
- "OOB" (out of band, meaning non-MPI): startup protocols,
communication with mpirun, etc. This is probably not interesting to
you.
- MPI: the back-end to MPI_SEND and friends.
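Since neither kind of traffic uses a fixed port, one pragmatic capture strategy is to filter on the endpoints instead of the ports. A sketch for the sniffer box; the interface name and hostnames are assumptions, not taken from the list mail:

```shell
# Capture all TCP traffic between two compute nodes. Both OOB and MPI
# streams are included, since OMPI's ports are dynamic and cannot be
# filtered on directly. eth0, node07 and node08 are placeholders.
tcpdump -i eth0 -w ompi.pcap 'tcp and host node07 and host node08'
```

In Wireshark you can then separate connections with "Follow TCP Stream"; the OOB connections are typically among the first established at startup, which helps tell them apart from the MPI traffic.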
What I have done is get 2 nodes on my clust
Suppose I want to capture any packets for my openmpi program; if I can't
filter packets by ports, then how can the sniffer tell which packets are
from/to any processes of my openmpi program?
On 5/19/07, Tim Prins wrote:
Open MPI uses TCP, and does not use any fixed ports. We use whatever por
Why do you set the TCP interface you want to use? I would try with
"OMPI_MCA_btl=mx,self" which will make sure you're not using TCP at all.
Can you provide the config.log file ? It will allow me to see which
(if any) extensions of MX you're using.
Thanks,
george.
On May 18, 2007, a
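George's suggestion can be applied either through the environment or per run; the two forms below are equivalent ways to restrict OMPI to MX plus self (the application name in the second form is a placeholder):

```shell
# Restrict the byte-transfer layers to MX and self (no TCP at all).
export OMPI_MCA_btl=mx,self
echo "btl set to: $OMPI_MCA_btl"

# Equivalent one-shot form (commented out; needs an MX cluster to run):
# mpirun --mca btl mx,self -np 2 ./my_mpi_app
```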
Much better, thanks!
---
$ env | grep OMPI
OMPI_MCA_rmaps_base_schedule_policy=node
OMPI_MCA_pml=cm
OMPI_MCA_btl=^openib
OMPI_MCA_oob_tcp_include=eth0
OMPI_MCA_mpi_keep_hostnames=1
$ mpiexec -pernode -np 2 IMB-MPI1 SendRecv
#---
#Intel (R) MPI Ben
Can you try adding the following param:
OMPI_MCA_pml=cm
and report the results?
Thanks,
Galen
On May 18, 2007, at 1:15 PM, Maestas, Christopher Daniel wrote:
Hello,
I was wondering why we would see ~ 100MB/s difference between mpich-mx
and Open MPI with SendRecv from the Intel MPI benchma
Hello,
I was wondering why we would see ~ 100MB/s difference between mpich-mx
and Open MPI with SendRecv from the Intel MPI benchmarks. Maybe I'm
missing turning something on?
The hardware is:
---
# mx_info -q
MX Version: 1.1.7
MX Build: root@tocc1:/projects/global/SOURCES/myricom/mx-1.1.7 Fri M
Open MPI uses TCP, and does not use any fixed ports. We use whatever ports the
operating system gives us. At this time there is no way to specify what ports
to use.
Hope this helps,
Tim
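Tim's point, that the operating system hands out the ports, can be demonstrated locally: binding to port 0 asks the kernel for any free ephemeral port, which is what OMPI's TCP components rely on (python3 is used here only as a convenient way to open a socket):

```shell
# Bind to port 0 and let the kernel choose the port. Run this twice and
# you will usually get two different numbers, which is why no fixed-port
# capture filter can work for OMPI traffic.
PORT=$(python3 -c 'import socket
s = socket.socket()
s.bind(("127.0.0.1", 0))
print(s.getsockname()[1])')
echo "kernel assigned port $PORT"
```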
On Friday 18 May 2007 05:19 am, Code Master wrote:
> I run my openmpi-based application in a multi-node clus
I run my openmpi-based application in a multi-node cluster. There is also a
sniffer computer (installed with wireshark) attached to a listener port on
the switch to sniff any packets.
However I would like to know the protocol (UDP or TCP) as well as the ports
used by openmpi for interprocess com