On Feb 14, 2007, at 7:27 PM, Scott Atchley wrote:

On Feb 14, 2007, at 12:33 PM, Alex Tumanov wrote:

Hello,

I recently tried running HPLinpack, compiled with OMPI, over the Myrinet
MX interconnect. Running a simple hello-world program works, but XHPL
fails with an error when it tries to MPI_Send:

# mpirun -np 4 -H l0-0,c0-2 --prefix $MPIHOME --mca btl mx,self

If you are running more than one process per node, you may need to
add sm (the shared memory BTL) to mx,self.
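
For example, with four ranks spread over two hosts the launch from the original report would become something like the following (a sketch only; the benchmark binary is assumed to be ./xhpl, since the binary name is not shown in the truncated command above):

# mpirun -np 4 -H l0-0,c0-2 --prefix $MPIHOME --mca btl mx,sm,self ./xhpl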

The MX BTL version available on the Open MPI trunk (and soon on the 1.2 branch) supports shared memory communications as well as self communications via MX. "ompi_info --param btl mx" is your devoted friend for how to set these two options. If they are set, the need for "--mca btl mx,sm,self" disappears.
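
As a quick sketch, you can inspect those parameters and then keep the shorter BTL list, assuming the shared-memory and self options reported by ompi_info are enabled (the exact parameter names are not reproduced here; take them from the output):

# ompi_info --param btl mx
# mpirun -np 4 -H l0-0,c0-2 --prefix $MPIHOME --mca btl mx,self ./xhpl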

Also, OMPI offers another MX path via the PML (the MX MTL). Performance was
better using the PML, but George may be getting the BTL closer.

HPL sends large messages around. For such messages, the BTL seems to deliver better performance than the default MTL MX version if your network is at most 2G.
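
For a large-message benchmark like HPL it is easy to compare the two paths side by side; a sketch, assuming the MX MTL component was built and that the binary is ./xhpl:

# mpirun -np 4 -H l0-0,c0-2 --prefix $MPIHOME --mca btl mx,sm,self ./xhpl        (MX BTL)
# mpirun -np 4 -H l0-0,c0-2 --prefix $MPIHOME --mca pml cm --mca mtl mx ./xhpl   (MX MTL via the cm PML)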

I have several questions. First of all, am I able to initiate OMPI-over-MX
jobs from the headnode, to be executed on two compute nodes, even
though the headnode does not have MX hardware?

Yes. The networks are detected at runtime, and Open MPI will try to use as many of them as possible, or fall back to TCP.
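
A TCP-only run is also a quick way to check that launching from the headnode works independently of MX (again assuming ./xhpl as the binary name):

# mpirun -np 4 -H l0-0,c0-2 --prefix $MPIHOME --mca btl tcp,sm,self ./xhpl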

  Thanks,
    george.
