I forgot to add some other details...
1) I am setting the affinity of each process to a specific core, explicitly in
my application (via an OS system call)
2) I enabled 'use_eager_rdma' with the corresponding buffer limit at 32
KBytes (large enough to cover all my message sizes)
3) I set
Hello,
We see random hangs of some applications (NAMD, Molpro, ...) when
using the openib BTL.
We are using ompi 1.4.3 and ompi 1.3.4, compiled with the Intel icc compiler.
Linux kernel: 2.6.18-128 (RHEL); nodes have 8 cores.
OFED version: 3.2
ibv_devinfo seems to be OK on all nodes.
Note that we do
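The ibv_devinfo check mentioned above can be run across all nodes with a small loop; the node-list file name and passwordless ssh are assumptions for illustration:

```shell
# Hedged sketch: confirm each HCA port reports PORT_ACTIVE on every node.
# "nodes.txt" (one hostname per line) is an illustrative assumption.
for host in $(cat nodes.txt); do
    echo "== $host =="
    ssh "$host" 'ibv_devinfo | grep -i state'
done
```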
On 12/17/2010 6:43 PM, Sashi Balasingam wrote:
Hi,
I recently started on an MPI-based, 'real-time', pipelined-processing
application, which fails due to large time-jitter in sending and
receiving messages. Here is the related info -
1) Platform:
a) Intel Box: Two Hex-core, Intel Xeo
Hi,
Am 17.12.2010 um 23:34 schrieb Brock Palen:
> You can build Open MPI without tm, which will disable it, or you can test
> first with a nasty option like:
>
> mpirun \
> --mca plm ^tm \
> --mca ras ^tm \
> --hostfile $PBS_NODEFILE \
>
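Filled out with a hypothetical application name and process count (the quoted command is truncated, so everything after the hostfile flag is an assumption), the suggestion above would run along these lines:

```shell
# Hedged sketch: disable the Torque (tm) launch/allocation components so
# Open MPI falls back to ssh-style launch over the hosts in the batch
# system's node file.  "-np 8" and "./my_app" are illustrative assumptions.
mpirun --mca plm ^tm \
       --mca ras ^tm \
       --hostfile $PBS_NODEFILE \
       -np 8 ./my_app
```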