I installed Open MPI 1.2.4 on Red Hat Enterprise Linux 3. It worked
fine in normal usage. I then tested executions with a very large number
of processes.

$ mpiexec -n 128 --host node0 --mca btl_tcp_if_include eth0 \
    --mca mpool_sm_max_size 2147483647 ./cpi

$ mpiexec -n 256 --host node0,node1 --mca btl_tcp_if_include eth0 \
    --mca mpool_sm_max_size 2147483647 ./cpi

With the mpiexec options specified as above, the execution succeeded
in both cases. With more processes per node, however, it failed:

$ mpiexec -n 256 --host node0 --mca btl_tcp_if_include eth0 \
    --mca mpool_sm_max_size 2147483647 ./cpi

mpiexec noticed that job rank 0 with PID 0 on node node0 exited on
signal 15 (Terminated).
252 additional processes aborted (not shown)

$ mpiexec -n 512 --host node0,node1 --mca btl_tcp_if_include eth0 \
    --mca mpool_sm_max_size 2147483647 ./cpi

mpiexec noticed that job rank 0 with PID 0 on node node0 exited on
signal 15 (Terminated).
505 additional processes aborted (not shown)

Whenever I increased the number of processes per node beyond this, the
executions aborted. Is it possible to run 256 processes per node with Open MPI?
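If the shared-memory BTL is what runs out of resources at this process
count (this is only our guess), one variant we are considering is
disabling it and using TCP only, for example:

$ mpiexec -n 256 --host node0 --mca btl tcp,self \
    --mca btl_tcp_if_include eth0 ./cpi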

We would like to run as many processes per node as possible, even if
performance suffers. With MPICH we can run 256 processes per node, but
we expect Open MPI to perform better. We understand that we will also
need to raise the open-file limit, because the current implementation
uses many sockets; a rough sketch of how we expect to do that follows.
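For example, on Linux we expect to raise the per-process descriptor
limit roughly as follows (the value 65536 is just a guess on our part):

$ ulimit -n          # show the current soft limit
$ ulimit -n 65536    # raise it for this shell, up to the hard limit

and, to make it permanent for all users, add to /etc/security/limits.conf:

*    soft    nofile    65536
*    hard    nofile    65536

We assume the raised limit must be in effect on every node where the
Open MPI daemons and application processes are started.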

SUSUKITA, Ryutaro
Peta-scale System Interconnect Project
Fukuoka Industry, Science & Technology Foundation
