I would suggest making sure that the /etc/beowulf/config file has a
"libraries" line for every directory where the required shared libraries
(application and MPI) are located.
Also, make sure that the filesystems containing the executables and
shared libraries are accessible from the compute nodes.
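For example, the relevant entries in /etc/beowulf/config would look something
like this (the Open MPI and application paths below are only placeholders;
substitute the directories from your own install):

    # standard system library directories
    libraries /lib
    libraries /usr/lib
    # Open MPI shared libraries (placeholder install prefix)
    libraries /opt/openmpi/1.2.3/lib
    # application-specific shared libraries (placeholder)
    libraries /opt/myapp/lib

If I remember correctly, the compute nodes pick this list up when they boot,
so you may need to reboot the nodes (or restart the ClusterWare service) after
editing the file.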
[...] will be in 1.2.4.
Ralph
On 7/23/07 7:04 AM, "Kelley, Sean" wrote:
> Hi,
>
> We are experiencing a problem with the process allocation on our Open MPI
> cluster. We are using Scyld 4.1 (BPROC), the OFED 1.2 Topspin Infiniband
> drivers, Open MPI 1.2.3 + patch (to run processes on the head node). The
> hardware consists of a head node and N blades on private ethernet [...]
[Sean] I appreciate the help. We are running processes on the head node because
the head node is the only node which can access external resources (storage
devices).
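One way to express that layout with mpirun is to keep just the I/O-facing rank
on the head node and put the rest on the blades, for example (the hostnames,
executable names, and process counts below are made up, and this assumes
mpirun's --host option cooperates with the bproc launcher):

    mpirun -np 1 --host head ./io_rank : -np 6 --host n0,n1,n2 ./compute_rank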
Ralph
On 6/11/07 1:04 PM, "Kelley, Sean" wrote:
I forgot to add that we are using 'bproc'. Launching processes on the compute
nodes using bproc works well; I'm not sure whether bproc is involved when
processes are launched on the local node.
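A quick way to check where the ranks actually end up is to launch something
trivial and look at the output, for example (the process count is arbitrary):

    mpirun -np 4 hostname

and, on the head node, something like "ps aux | bpstat -P" should prepend the
bproc node number to each process (that flag is from memory, so check the
bpstat man page).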
Sean
From: users-boun...@open-mpi.org on behalf of Kelley, Sean
Hi,
We are running the OFED 1.2rc4 distribution containing openmpi-1.2.2 on a
RedHat EL4U4 system with Scyld Clusterware 4.1. The hardware configuration
consists of a DELL 2950 as the head node and 3 DELL 1950 blades as compute
nodes, using Cisco TopSpin InfiniBand HCAs and switches for the interconnect.
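A typical invocation on that kind of setup, just to make sure the InfiniBand
fabric (the openib BTL) is being used rather than TCP, might look like the
following (the application name and process count are placeholders):

    mpirun -np 4 -mca btl openib,sm,self ./my_app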