In addition to what Gilles said, if you're using TCP for your MPI transport, 
changing the MTU probably won't have a huge impact on HPL.

Open MPI will automatically adapt to the MTU size; there shouldn't be anything 
you need to change.  Indeed, when using TCP, the kernel TCP stack is what 
actually fragments outgoing messages into MTU-sized chunks.  HPL will likely be 
MPI_Sending large messages, Open MPI will simply write large chunks of those 
messages down its sockets, and the kernel will chop them up into 1500-byte or 
9000-byte segments, depending on your MTU.
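
If you want to double-check what the TCP BTL is doing on your system, here is a 
rough sketch (the host names and application name below are placeholders):

# List the TCP BTL's tunables; nothing here needs to change for a larger MTU,
# since segmentation is left to the kernel TCP stack.
ompi_info --param btl tcp --level 9

# Show which BTL(s) are actually selected for a run
# (node1/node2 and your_mpi_app are placeholders).
mpirun -np 2 -host node1,node2 --mca btl_base_verbose 100 ./your_mpi_app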



On Jul 15, 2020, at 10:24 AM, Gilles GOUAILLARDET via users 
<users@lists.open-mpi.org> wrote:


John,

On a small cluster, HPL is not communication-intensive, so you are unlikely to 
see any improvement by tweaking the network.

Instead, I would suggest running MPI benchmarks such as IMB (from Intel) or 
the OSU micro-benchmark suite (from Ohio State University).
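
For example, assuming the benchmark binaries are already built on your nodes 
(node1/node2 and the relative paths are placeholders), a two-node 
point-to-point run would look something like:

# OSU micro-benchmarks: bandwidth and latency between two nodes
mpirun -np 2 -host node1,node2 ./osu_bw
mpirun -np 2 -host node1,node2 ./osu_latency

# Intel MPI Benchmarks equivalent
mpirun -np 2 -host node1,node2 ./IMB-MPI1 PingPong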

Cheers,

Gilles


On July 15, 2020, at 22:16, John Duffy via users 
<users@lists.open-mpi.org> wrote:


Hi

Ubuntu 20.04 aarch64, Open-MPI 4.0.3, HPL 2.3

Having changed the MTU across my small cluster from 1500 to 9000, I’m wondering 
how/if Open-MPI can take advantage of this increased maximum packet size.

ip link show eth0

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode 
DEFAULT group default qlen 1000
    link/ether dc:a6:32:60:7b:cd brd ff:ff:ff:ff:ff:ff
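
As a sanity check (node2 below is a placeholder for another node in the 
cluster), one way to verify that 9000-byte frames actually pass between nodes 
without fragmentation:

# 8972-byte ICMP payload + 8-byte ICMP header + 20-byte IP header = 9000 bytes.
# -M do forbids fragmentation, so this fails if any hop is still at MTU 1500.
ping -M do -s 8972 node2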

Having run a Linpack benchmark before and after the MTU change, I can see it 
has had minimal impact on performance. I was, probably naively, expecting some 
improvement. Are there any Open-MPI parameters, or compiler options, related to 
MTU size that can be tweaked?

Kind regards


--
Jeff Squyres
jsquy...@cisco.com
