On Linux you needn't initialise the dat registry. Your program prints:
"provider 1: OpenIB-cma". I successfully tested INTEL MPI and mvapich2
with uDAPL.
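For reference, on an OFED install the registry is normally populated from
/etc/dat.conf; a typical entry for that provider looks roughly like the
line below (the exact library name and device string vary from system to
system):

  OpenIB-cma u1.2 nonthreadsafe default libdaplcma.so.1 dapl.1.2 "ib0 0" ""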
Andreas
Donald Kerr wrote:
Andreas,
I am going to guess that, at a minimum, the interfaces are up and you can
ping them. On Solaris there is
On Monday 23 April 2007, Bert Wesarg wrote:
> Hello all,
>
> Please give a short description of the machine and send the
> cpu-topology.tar.bz2 to the list.
A short description is in the filename; the kernel is 2.6.18-8.1.1.el5
/Peter
cpu-topology-dualOpteron2216HE-rhel5_64.tar.bz2
On 4/26/07, Bruce Foster wrote:
The README instructions for PGI compilation have a typo:
Current context:
- The Portland Group compilers require the "-Msignextend" compiler
flag to extend the sign bit when converting from a shorter to longer
integer. This is is different than other compilers
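For anyone unfamiliar with the flag, here is a minimal C illustration (not
from the README) of what sign extension on a widening conversion means:

  #include <stdio.h>

  int main(void)
  {
      short s = -1;     /* bit pattern 0xFFFF in 16 bits */
      int   i = s;      /* conversion to a longer integer type */

      /* with sign extension i stays -1 (0xFFFFFFFF); without it the
         widened value would be 65535 (0x0000FFFF) */
      printf("short = %hd, int = %d\n", s, i);
      return 0;
  }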
I am pleased to announce that Open MPI now supports checkpoint/
restart process fault tolerance. This new feature is supported on the
current development trunk as of r14519 and is currently scheduled for
release in the Open MPI 1.3 series.
The current implementation
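Roughly, the user-visible workflow with the BLCR-backed implementation
looks like the sketch below; option names may still change before the 1.3
release, and the paths and process IDs are only placeholders:

  # build with checkpoint/restart support
  ./configure --with-ft=cr --with-blcr=/usr/local/blcr ...
  # launch the job with C/R enabled
  mpirun -np 4 -am ft-enable-cr ./my_app
  # from another shell: checkpoint the running job, then restart it later
  ompi-checkpoint <PID-of-mpirun>
  ompi-restart <snapshot-handle>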
I have not tried Open MPI's uDAPL support on Linux, nor do I have access
to a Linux box, so I am having a difficult time finding a way to help you
debug this issue.
-DON
Andreas Kuntze wrote:
On Linux you needn't initialise the dat registry. Your program prints:
"provider 1: OpenIB-cma". I succes
Which uDAPL implementation are you using, over what sort of network?
I'm guessing OpenIB/InfiniBand, but want to make sure.
One other thing I noticed: you say native IB works, yet looking at your
ompi_info/config.log, neither OpenIB nor MVAPI support was enabled.
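If it helps, running "ompi_info | grep btl" on the machine in question
will list the byte-transfer-layer components that were actually built; an
InfiniBand-capable build should show an openib (or, on the older stack,
mvapi) line alongside self/sm/tcp, e.g. something like

  MCA btl: openib (MCA v1.0, API v1.0.1, Component v1.2.1)

(the exact version numbers in the output will differ).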
Andrew
Andreas Kuntze wrote:
Hello,
I'm having a weird problem while using MPI_Comm_accept (C bindings) or
MPI::Comm::Accept (C++ bindings).
My "server" runs fine up to the call to this function, but if there is
no client connecting it sits there eating all CPU (100%), although if a
client does connect the loop works
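Stripped down, the server side looks roughly like this (the real code has
a loop around the accept and does work with the client communicator):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      char     port_name[MPI_MAX_PORT_NAME];
      MPI_Comm client;

      MPI_Init(&argc, &argv);

      /* publish a port and wait for a client */
      MPI_Open_port(MPI_INFO_NULL, port_name);
      printf("server listening on %s\n", port_name);

      /* blocks until a client calls MPI_Comm_connect; this is the call
         that sits at 100% CPU while nobody is connecting */
      MPI_Comm_accept(port_name, MPI_INFO_NULL, 0, MPI_COMM_SELF, &client);

      MPI_Comm_disconnect(&client);
      MPI_Close_port(port_name);
      MPI_Finalize();
      return 0;
  }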
Hi
I have been testing Open MPI 1.2, and now 1.2.1, on several BProc-
based clusters, and I have found some problems/issues. All my
clusters have standard Ethernet interconnects, either 100Base-T or
Gigabit, on standard switches.
The clusters are all running Clustermatic 5 (BProc 4.x), and range
You can eliminate the "[n17:30019] odls_bproc: openpty failed, using
pipes instead" message by configuring OMPI with the
--disable-pty-support flag, as there is a bug in BProc that causes that
to happen.
-david
--
David Gunter
HPC-4: HPC Environments: Parallel Tools Team
Los Alamos National Laboratory
There is a known issue on BProc 4 w.r.t. pty support. Open MPI by
default will try to use ptys for I/O forwarding but will revert to
pipes if ptys are not available.
You can "safely" ignore the pty warnings, or you may want to rerun
configure and add:
--disable-pty-support
I say "safely"
Dear Users:
I have been trying to use the Intel ifort and icc compilers to
compile an atmospheric model called the Weather Research &
Forecasting model (WRFV2.2) on a Linux cluster (x86_64) using Open
MPI v1.2, which was also compiled with Intel icc. However, I got a
lot of error messages
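For reference, pointing Open MPI's configure at the Intel compilers
generally looks like this (the prefix is only an example), and WRF then
has to be built with the same ifort/icc pair through its own configure
step:

  ./configure CC=icc CXX=icpc F77=ifort FC=ifort --prefix=/opt/openmpi-1.2-intel
  make all install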