I'm using OpenMPI v.4.0.2.
Is your problem similar to mine?
Thanks,
David
On Tue, Apr 14, 2020 at 7:33 AM Patrick Bégou via users <users@lists.open-mpi.org> wrote:
Hi David,
could you specify which version of OpenMPI you are using?
I also have some parallel I/O trouble with one code, but I have not
investigated it yet.
Thanks
Patrick
On 13/04/2020 at 17:11, Dong-In Kang via users wrote:
Thank you for your suggestion.
I am more concerned about the poor performance of the
one-MPI-process-per-socket case.
That model fits my real workload better.
The performance that I see is a lot worse than what the underlying hardware
can support.
The best case (all MPI processes in a single socket) is
Note there could be some NUMA-IO effect, so I suggest you compare
running all MPI tasks on socket 0, then all MPI tasks on socket 1, and
so on, and then compare that to running one MPI task per socket.
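For what it's worth, one way to set up the one-task-per-socket run with
Open MPI's mpirun (the binary name below is just a placeholder) is
something like

  mpirun -np 4 --map-by ppr:1:socket --bind-to socket --report-bindings ./my_io_test

--report-bindings prints where each rank actually lands, so you can
confirm the placement before timing anything. For the runs with all
tasks on one socket, you can restrict the ranks to the cores of a
single socket (for example with --cpu-set, or by launching under
numactl).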
Also, what performance do you measure?
- Is this something in line with the filesystem/network
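In case it helps to pin down what is being measured, here is a minimal
sketch (not code from this thread) of a collective MPI-IO write test:
every rank writes one 64 MiB block at its own offset in a shared file,
and rank 0 reports the aggregate bandwidth based on the slowest rank.
The file name, block size, and process count are placeholders; compile
it with mpicc, run it once per binding scenario, and compare the result
against what the filesystem is known to deliver.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int rank, size;
    const size_t blocksize = 64 * 1024 * 1024;  /* 64 MiB per rank, placeholder */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Fill the buffer with some data before writing. */
    char *buf = malloc(blocksize);
    for (size_t i = 0; i < blocksize; i++)
        buf[i] = (char)(rank + i);

    MPI_File fh;
    MPI_File_open(MPI_COMM_WORLD, "iotest.dat",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();

    /* Each rank writes one contiguous block at a rank-specific offset. */
    MPI_Offset offset = (MPI_Offset)rank * (MPI_Offset)blocksize;
    MPI_File_write_at_all(fh, offset, buf, (int)blocksize, MPI_BYTE,
                          MPI_STATUS_IGNORE);

    /* Keep the close inside the timed region so buffered data is written out. */
    MPI_File_close(&fh);
    double t1 = MPI_Wtime();

    /* Report aggregate bandwidth using the slowest rank's time. */
    double local = t1 - t0, tmax;
    MPI_Reduce(&local, &tmax, 1, MPI_DOUBLE, MPI_MAX, 0, MPI_COMM_WORLD);
    if (rank == 0) {
        double mib = (double)blocksize * size / (1024.0 * 1024.0);
        printf("wrote %.1f MiB in %.3f s -> %.1f MiB/s aggregate\n",
               mib, tmax, mib / tmax);
    }

    free(buf);
    MPI_Finalize();
    return 0;
}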