Thank you Jeff and Gilles.
Kind regards
John
In addition to what Gilles said, if you're using TCP for your MPI transport,
changing the MTU probably won't have a huge impact on HPL.
Open MPI will automatically react to the MTU size; there shouldn't be anything
you need to change. Indeed, when using TCP, the kernel TCP stack is the one
that handles packetization, so the larger MTU is picked up without any change
on the Open MPI side.
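If you still want to experiment with the TCP transport, the kernel socket
buffer sizes tend to matter more than the MTU. A sketch, assuming Open MPI's
btl_tcp_sndbuf / btl_tcp_rcvbuf MCA parameters (in the 4.0.x series a value
of 0 means "use the kernel default"); the 4 MiB value below is only an
illustration, not a recommendation:

```
# $HOME/.openmpi/mca-params.conf
# Socket buffer sizes for the TCP BTL, in bytes (0 = kernel default)
btl_tcp_sndbuf = 4194304
btl_tcp_rcvbuf = 4194304
```

You can list the available TCP BTL parameters and their current values with
`ompi_info --param btl tcp --level 9`.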
John,
On a small cluster, HPL is not communication intensive, so you are unlikely to
see any improvement from tweaking the network.
Instead, I'd rather suggest you run MPI benchmarks such as IMB (from Intel) or
the OSU suite (from Ohio State University).
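A sketch of how such a comparison might look; osu_bw, osu_latency, IMB-MPI1,
node1 and node2 are all placeholders for your own binaries and host names:

```shell
# Hypothetical benchmark invocations -- run the same test before and
# after the MTU change and compare the numbers:
#   mpirun -np 2 --host node1,node2 ./osu_bw
#   mpirun -np 2 --host node1,node2 ./osu_latency
#   mpirun -np 2 --host node1,node2 ./IMB-MPI1 PingPong
# osu_bw sweeps message sizes up to 4 MiB by default, which is where a
# larger MTU would show up, if at all:
MSG_MAX=$((4 * 1024 * 1024))
echo "largest osu_bw message: $MSG_MAX bytes"
```

Differences at small message sizes are dominated by latency, so any MTU
effect would appear only toward the large-message end of the sweep.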
Cheers,
Gilles
On July 15, 2020, John wrote:
Hi
Ubuntu 20.04 aarch64, Open-MPI 4.0.3, HPL 2.3
Having changed the MTU across my small cluster from 1500 to 9000, I’m wondering
how/if Open-MPI can take advantage of this increased maximum packet size.
ip link show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode
DEFAULT group default qlen 10
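One way to confirm that jumbo frames actually pass end to end before looking
at MPI at all is a ping with fragmentation disallowed; node2 is a placeholder
host name, and the payload arithmetic assumes IPv4:

```shell
# With the DF (don't-fragment) bit set, ping fails if any hop on the
# path still has a 1500-byte MTU.
# 9000-byte MTU - 20 (IPv4 header) - 8 (ICMP header) = 8972-byte payload:
PAYLOAD=$((9000 - 20 - 8))
echo "ping payload: $PAYLOAD bytes"
# On the cluster itself (Linux iputils syntax):
#   ping -M do -c 3 -s 8972 node2
```

If that ping fails while a default-size ping succeeds, a switch or NIC along
the path has not been switched to the 9000-byte MTU.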