Shaun Jackman wrote:
Eugene Loh wrote:
I'm no expert, but I think it's something like this:
1) If the messages are short, they're sent over to the receiver. If
the receiver does not expect them (no MPI_Irecv posted), it buffers
them up.
2) If the messages are long, only a little bit is sent over to the receiver
Eugene Loh wrote:
I'm no expert, but I think it's something like this:
1) If the messages are short, they're sent over to the receiver. If the
receiver does not expect them (no MPI_Irecv posted), it buffers them up.
2) If the messages are long, only a little bit is sent over to the
receiver
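(Not from Eugene's mail, just a minimal sketch of the idea in point 1: if the receive is already posted when a short, eagerly sent message arrives, the data can be matched straight into the user's buffer instead of sitting in the unexpected-message buffers. The names and message size below are made up.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, buf = 0;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 1) {
        /* Post the receive as early as possible so that, ideally, it is
         * already matched when the eager data from rank 0 arrives. */
        MPI_Irecv(&buf, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        printf("rank 1 received %d\n", buf);
    } else if (rank == 0) {
        int val = 42;
        MPI_Send(&val, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}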
Dear OpenMPI experts
I had some trouble building OpenMPI 1.3.1 on a CentOS 5.2 Linux x86_64
computer.
The configure scripts seem to have changed and work differently
than before, particularly w.r.t. additional libraries like numa,
torque, and openib.
The new behavior can be a bit unexpected and p
Shaun Jackman wrote:
When running my Open MPI application, I'm seeing three processors that
are using five times as much memory as the others when they should all
use the same amount of memory. To start the debugging process, I would
like to know if it's my application or the Open MPI library that's
using the additional memory.
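(A sketch of one first debugging step, not something from the thread: on Linux each process can read its own VmRSS from /proc/self/status and print it right after MPI_Init and again later, which gives a rough idea of whether the extra memory appears during library startup or in the application code. The helper name print_rss is mine.)

#include <mpi.h>
#include <stdio.h>
#include <string.h>

/* Print this process's resident set size (VmRSS) with a label,
 * read from Linux's /proc/self/status. */
static void print_rss(const char *label, int rank)
{
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f)
        return;
    while (fgets(line, sizeof(line), f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            printf("rank %d %s: %s", rank, label, line);
            break;
        }
    }
    fclose(f);
}

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    print_rss("after MPI_Init", rank);

    /* ... application allocations and communication would go here ... */

    print_rss("before MPI_Finalize", rank);

    MPI_Finalize();
    return 0;
}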
Hi Francesco
Francesco Pietra wrote:
Hi:
As the failure to find "limits.h" in my attempted compilations of Amber over
the last few days (amd64 lenny, openmpi 1.3.1, Intel compilers
10.1.015) is probably (or so I hope) a bug in the version of the Intel
compilers used (I made with Debian the same observation
--with-tm=/usr/local/torque2.1.8/
If you are using rsh, then why did you specify to build with Torque?
The deprecated parameter is just a warning and shouldn't cause a
problem. The current param would be to specify -mca plm_rsh_agent rsh.
No big deal, though - either will work.
You migh
Hi all,
I've built an openmpi-1.3.1 binary on a machine with openSUSE 10.2, gcc
4.1.2.
The cluster I tried to run jobs on uses a Silverstorm InfiniBand system and
uses ibverbs as its IB driver. The configure parameters are listed below:
./configure --prefix=/d/thshie/vp31/openmpi/
--with-openib=/usr/lo
Hi:
As the failure to find "limits.h" in my attempted compilations of Amber over
the last few days (amd64 lenny, openmpi 1.3.1, Intel compilers
10.1.015) is probably (or so I hope) a bug in the version of the Intel
compilers used (I made with Debian the same observations reported
for gentoo,
http://software.
hostname never calls MPI_Init, and therefore never initializes the BTL
subsystem, so hostname will always run correctly.
mpirun is not an MPI process, nor are the daemons used by OMPI to
launch the MPI job. Thus, they also do not call MPI_Init, and
therefore do not see the BTL subsystem.
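(To actually exercise the selected BTLs you need a program that calls MPI_Init and communicates; a minimal sketch along those lines, not part of the original reply, is below. Run with only -mca btl self,sm across two hosts, it would be expected to fail, because sm cannot reach a process on another node, which is exactly the check that plain hostname never makes.)

#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    MPI_Get_processor_name(name, &len);

    /* The barrier forces real MPI communication over the chosen BTLs. */
    MPI_Barrier(MPI_COMM_WORLD);
    printf("rank %d of %d on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}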
How is this possible?
dx:~> mpirun -v -np 2 --mca btl self,sm --host dx,sx hostname
dx
sx
dx:~> netstat -i
Kernel Interface table
Iface    MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
eth0    1500   0   998755      0      0      0  1070323      0      0      0 BMRU
eth1