John,

On a small cluster, HPL is not communication intensive, so you are unlikely to see much improvement from tweaking the network.
Instead, I'd suggest you run MPI benchmarks such as IMB (from Intel) or the OSU suite (from Ohio State University); a couple of example invocations are included at the end of this message.

Cheers,

Gilles

On July 15, 2020, at 22:16, John Duffy via users <users@lists.open-mpi.org> wrote:

> Hi
>
> Ubuntu 20.04 aarch64, Open-MPI 4.0.3, HPL 2.3
>
> Having changed the MTU across my small cluster from 1500 to 9000, I’m wondering how/if Open-MPI can take advantage of this increased maximum packet size.
>
> ip link show eth0
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
>     link/ether dc:a6:32:60:7b:cd brd ff:ff:ff:ff:ff:ff
>
> Having run a Linpack benchmark before and after the MTU change, it appears to have had minimal impact on performance. I was, probably naively, expecting some improvement.
>
> Are there any Open-MPI parameters, or compiler options, that can be tweaked that are related to MTU size?
>
> Kind regards
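
For reference, something like the following should work for the point-to-point tests; node1 and node2 are placeholder host names, and the paths assume the OSU micro-benchmarks / IMB were built against this Open MPI installation:

    # One rank per node, so messages actually cross the network
    mpirun -np 2 --host node1,node2 ./osu_latency
    mpirun -np 2 --host node1,node2 ./osu_bw

    # Or the Intel MPI Benchmarks equivalent
    mpirun -np 2 --host node1,node2 ./IMB-MPI1 PingPong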
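On the tuning question: the MTU itself is a kernel/NIC setting rather than an Open MPI parameter, and the TCP BTL simply uses the kernel TCP stack, so jumbo frames mostly affect raw TCP throughput. The TCP BTL does expose a few related knobs (socket buffer sizes and message-size limits) that you can list and override per run; the 4 MB values below are purely illustrative, not a recommendation:

    # List the TCP BTL parameters available in your build
    ompi_info --param btl tcp --level 9

    # Example: override the socket send/receive buffer sizes for one run
    mpirun --mca btl_tcp_sndbuf 4194304 --mca btl_tcp_rcvbuf 4194304 \
           -np 2 --host node1,node2 ./osu_bw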