Sure - but then we aren't talking about containers any more, just vendor vs
OMPI. I'm not getting in the middle of that one!
On Jan 27, 2022, at 6:28 PM, Gilles Gouaillardet via users
<users@lists.open-mpi.org> wrote:
Thanks Ralph,
Now I get what you had in mind.
Strictly speaking, you are making the assumption that Open MPI performance
matches the system MPI performance.
This is generally true for common interconnects and/or those that have
providers for libfabric or UCX, but not so for "exotic" interconnects.
See inline
Ralph
On Jan 27, 2022, at 10:05 AM, Brian Dobbins <bdobb...@gmail.com> wrote:
Hi Ralph,
Thanks again for this wealth of information - we've successfully run the
same container instance across multiple systems without issues, even
surpassing 'native' performance in edge cases, presumably because the
native host MPI is either older or simply tuned differently (e.g., 'eager
limits').
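One quick sanity check when comparing container vs. native runs is to print
which MPI library the binary actually resolves at run time. A minimal sketch
using only standard MPI-3 calls (nothing Open MPI specific is assumed):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        char version[MPI_MAX_LIBRARY_VERSION_STRING];
        int len, rank;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Reports the MPI implementation and version linked at run time,
         * which helps confirm whether the container's MPI or the host's
         * MPI is actually being used. */
        MPI_Get_library_version(version, &len);

        if (rank == 0) {
            printf("MPI library: %s\n", version);
        }

        MPI_Finalize();
        return 0;
    }

Running this inside and outside the container makes it easy to spot version
or implementation mismatches before chasing performance differences.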
Just to complete this - there is always a lingering question regarding shared
memory support. There are two ways to resolve that one:
* run one container per physical node, launching multiple procs in each
container. The procs can then utilize shared memory _inside_ the container
(see the sketch below). This is the c
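For that first option, the node-local peers that can use shared memory are
exactly the ranks returned by an MPI_COMM_TYPE_SHARED split. A minimal
sketch with standard MPI-3 calls (nothing container specific is assumed):

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int world_rank, node_rank, node_size;
        MPI_Comm node_comm;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Group the ranks that can allocate shared memory together;
         * with one container per node, these are the procs launched
         * inside the same container. */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &node_comm);
        MPI_Comm_rank(node_comm, &node_rank);
        MPI_Comm_size(node_comm, &node_size);

        printf("world rank %d is local rank %d of %d on this node\n",
               world_rank, node_rank, node_size);

        MPI_Comm_free(&node_comm);
        MPI_Finalize();
        return 0;
    }

If node_size comes back as 1 for every rank, the procs are not seeing each
other as node-local and the shared-memory path will not be used.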
> Fair enough Ralph! I was implicitly assuming a "build once / run everywhere"
> use case, my bad for not making my assumption clear.
> If the container is built to run on a specific host, there are indeed other
> options to achieve near-native performance.
>
Err...that isn't actually what I meant.
This is part of the challenge of HPC: there are general solutions, but no
specific silver bullet that works in all scenarios. In short: everyone's setup
is different. So we can offer advice, but not necessarily a 100%-guaranteed
solution that will work in your environment.
In general, we advise...
I'm afraid that without any further details, it's hard to help. I don't know
why Gadget2 would complain about its parameters file. From what you've stated,
it could be a problem with the application itself.
Have you talked to the Gadget2 authors?
--
Jeff Squyres
jsquy...@cisco.com
Sorry for the noob question, but: what should I configure for OpenMPI
"to perform on the host cluster"? Any link to a guide would be welcome!
Slightly extended rationale for the question: I'm currently using
"unconfigured" Debian packages and getting some strange behaviour...
Maybe it's just s