Thank you, Jeff and Gilles.
Kind regards
John
In addition to what Gilles said: if you're using TCP for your MPI transport,
changing the MTU probably won't have a huge impact on HPL.
Open MPI will automatically react to the MTU size; there shouldn't be anything
you need to change. Indeed, when using TCP, the kernel TCP stack is the one
that reacts to the MTU size, not Open MPI.
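
(As an illustration of that point: the MTU is a property of the network
interface that the kernel exposes, and applications simply read it. Below is a
minimal sketch, assuming a Linux host and a hypothetical interface name
"eth0", of querying the MTU with the SIOCGIFMTU ioctl. It is not part of Open
MPI; it just demonstrates that the value lives in the kernel, not in the MPI
library.)

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>

int main(void) {
    const char *ifname = "eth0";  /* hypothetical interface name; adjust for your system */
    struct ifreq ifr;
    int fd = socket(AF_INET, SOCK_DGRAM, 0);  /* any socket serves as an ioctl handle */
    if (fd < 0) { perror("socket"); return 1; }
    memset(&ifr, 0, sizeof(ifr));
    strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
    if (ioctl(fd, SIOCGIFMTU, &ifr) < 0) {    /* ask the kernel for the interface MTU */
        perror("ioctl(SIOCGIFMTU)");
        close(fd);
        return 1;
    }
    printf("%s MTU = %d\n", ifname, ifr.ifr_mtu);
    close(fd);
    return 0;
}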
John,
On a small cluster, HPL is not communication-intensive, so you are unlikely to
see any improvement from tweaking the network.
Instead, I'd suggest you run MPI benchmarks such as IMB (from Intel) or
the OSU suite (from Ohio State University).
Cheers,
Gilles
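
(For reference, here is a minimal sketch of the kind of ping-pong latency test
that IMB and the OSU suite perform; the real suites sweep message sizes and
report far more detail. MSG_SIZE and NITER below are arbitrary choices for
illustration.)

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define NITER    1000   /* number of round trips to average over */
#define MSG_SIZE 1024   /* message size in bytes; vary to sweep sizes */

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }
    char *buf = malloc(MSG_SIZE);
    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < NITER; i++) {
        if (rank == 0) {
            /* rank 0 sends, then waits for the echo */
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            /* rank 1 echoes every message back */
            MPI_Recv(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_SIZE, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();
    if (rank == 0)
        printf("avg round trip: %.2f us\n", (t1 - t0) / NITER * 1e6);
    free(buf);
    MPI_Finalize();
    return 0;
}

Compile with mpicc and run with at least two ranks, e.g.
"mpirun -np 2 ./pingpong"; increasing MSG_SIZE shows where latency stops
dominating and bandwidth takes over.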