Geoff Galitz wrote:

Hello,

On the following system:

OpenMPI 1.1.1
SGE 6.0 (with tight integration)
Scientific Linux 4.3
Dual dual-core Opterons

MPI jobs are oversubscribing the nodes. No matter where jobs are
launched by the scheduler, they always stack up on the first node
(node00).
Hi Geoff

On 1/23/07 4:31 PM, "Geoff Galitz" wrote:

> Hello,
>
> On the following system:
>
> OpenMPI 1.1.1
> SGE 6.0 (with tight integration)
> Scientific Linux 4.3
> Dual dual-core Opterons
>
> MPI jobs are oversubscribing the nodes. No matter where jobs are
> launched by the scheduler, they always stack up on the first node
> (node00).
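
For context: the Open MPI 1.1 series has no built-in gridengine launcher
(that arrived with the 1.2 series), so under an SGE parallel environment
mpirun only knows about the node the job script starts on unless it is
handed a hostfile explicitly. Below is a minimal job-script sketch of
that workaround; the PE name "mpi" and the binary ./my_mpi_app are
hypothetical placeholders, and the $PE_HOSTFILE conversion assumes the
standard SGE 6 hostfile format.

#!/bin/sh
#$ -pe mpi 8    # hypothetical PE name and slot count
#$ -cwd
# Each $PE_HOSTFILE line reads "host nslots queue processor"; an Open MPI
# hostfile wants "host slots=N", so convert before launching:
awk '{ print $1, "slots=" $2 }' "$PE_HOSTFILE" > "$TMPDIR/ompi_hosts"
# Without the explicit hostfile, every rank piles onto the first node:
mpirun -np "$NSLOTS" --hostfile "$TMPDIR/ompi_hosts" ./my_mpi_app

Substituting hostname for ./my_mpi_app is a quick sanity check that the
ranks really land on the hosts SGE granted to the job.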