On Feb 24, 2006, at 7:24 PM, Greg Lindahl wrote:
On Fri, Feb 24, 2006 at 04:44:19PM +0100, Benoit Semelin wrote:
Rainer wrote:
PS: When compiling Open MPI, are you using a combination of gcc for
C/C++ and ifort for Fortran compilation?
This will not work, as the compilers have different views
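(A minimal sketch of the usual workaround, assuming the Intel compilers
are installed: pass one consistent compiler set to Open MPI's configure
so the C/C++ and Fortran sides agree. Compiler names here are
illustrative.)

    ./configure CC=icc CXX=icpc F77=ifort FC=ifort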
On Feb 24, 2006, at 10:11 PM, Allan Menezes wrote:
I have a 16-node AMD/P4 cluster running OSCAR 4.2.1 beta and FC4.
Each machine has two gigabit network cards: one is a Realtek 8169,
all connected to a Netgear GS116 gigabit switch with max MTU = 1500,
and the other NIC is a Dlink Sysko
Note that this is correct MPI behavior -- the MPI standard does not
define whether MPI_SEND blocks or not. Indeed, codes that assume
that MPI_SEND blocks (or doesn't block) are technically not correct
MPI codes. The issue is that different networks (e.g., shared memory
vs. TCP) may have different buffering behavior, so the point at which
MPI_SEND returns can vary.
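(To make the pitfall concrete, here is a minimal sketch, not from the
thread itself; the two-rank setup and message size are assumptions. It
shows the classic head-to-head pattern that only works when MPI_SEND
happens to buffer, plus the portable MPI_Sendrecv alternative.)

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        static int sendbuf[1 << 16], recvbuf[1 << 16];  /* 256 KB each */

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        int peer = 1 - rank;              /* assumes exactly 2 ranks */

        /* Deadlock-prone: if both ranks block in MPI_Send because the
           message exceeds the transport's buffering limit (a limit that
           differs between shared memory and TCP), neither ever reaches
           MPI_Recv.

           MPI_Send(sendbuf, 1 << 16, MPI_INT, peer, 0, MPI_COMM_WORLD);
           MPI_Recv(recvbuf, 1 << 16, MPI_INT, peer, 0, MPI_COMM_WORLD,
                    MPI_STATUS_IGNORE);
        */

        /* Portable: pairing the send and receive in one call removes
           any dependence on whether MPI_Send buffers. */
        MPI_Sendrecv(sendbuf, 1 << 16, MPI_INT, peer, 0,
                     recvbuf, 1 << 16, MPI_INT, peer, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        MPI_Finalize();
        return 0;
    }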
On Feb 22, 2006, at 3:29 PM, Aniruddha Shet wrote:
I tried with openmpi-1.1a1r9098.tar.bz2 but still encounter the same
problem. No core file is being produced. I am sending you whatever
output trace is written; I am not sure if the attached trace will
allow you to debug the problem.
I'm not
On Feb 23, 2006, at 1:59 PM, Konstantin Kudin wrote:
What is the approximate time frame for officially releasing version
1.1? A high-performance "alltoall" will be of great use for a whole
bunch of packages in which the most challenging parallel part is
distributed FFTs, which usually rely on "alltoall".
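(For readers unfamiliar with why FFT codes care: the parallel transpose
at the heart of a distributed FFT maps onto a single collective. A
minimal sketch follows; the block size and buffer fill are illustrative
assumptions, not code from any of the packages mentioned.)

    #include <mpi.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int nprocs, rank;
        const int block = 1024;   /* doubles per rank pair; illustrative */

        MPI_Init(&argc, &argv);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *sendbuf = malloc((size_t)nprocs * block * sizeof(double));
        double *recvbuf = malloc((size_t)nprocs * block * sizeof(double));
        for (int i = 0; i < nprocs * block; i++)
            sendbuf[i] = rank;    /* stand-in for this rank's data slab */

        /* The distributed-FFT transpose: block i of sendbuf goes to
           rank i, and block i of recvbuf arrives from rank i. The speed
           of this one collective typically dominates the whole FFT. */
        MPI_Alltoall(sendbuf, block, MPI_DOUBLE,
                     recvbuf, block, MPI_DOUBLE, MPI_COMM_WORLD);

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }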