Why do the null handles not follow a consistent scheme, at least in
Open-MPI 4.1.2?
ompi_mpi_<handle>_null is used except when handle = {request, message},
which drop the "mpi_" (i.e. ompi_request_null, ompi_message_null).
The above have an associated *_null_addr, except ompi_mpi_datatype_null and
ompi_message_null.
Why?
Jeff
Open MPI v4.1.2, packa
You can use MPI_Abort(MPI_COMM_SELF, 0) to exit a process locally.
This may abort the whole job if errors are fatal, but either way it's not
going to synchronize before processes go poof.
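A minimal sketch of that pattern (the trigger condition is invented for
illustration):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      if (rank == 1) {  /* hypothetical: only this rank wants out */
          fprintf(stderr, "rank %d exiting locally\n", rank);
          MPI_Abort(MPI_COMM_SELF, 0);  /* no synchronization with peers */
      }
      MPI_Finalize();
      return 0;
  }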
Jeff
On Fri 9. Sep 2022 at 21.34 Mccall, Kurt E. (MSFC-EV41) via users <
users@lists.open-mpi.org> wrote:
> Hi,
>
>
Alltoallv has both a large count and a large displacement problem in the API.
You can work around the latter by using neighborhood alltoallw on a duplicate
of your original communicator that is neighborhood-compatible. Neighborhood
alltoallw takes MPI_Aint displacements instead of int.
If you need
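A sketch of that workaround; the wrapper name is mine, and every rank lists
every rank as a neighbor so the neighborhood collective matches plain
alltoall semantics:

  #include <mpi.h>
  #include <stdlib.h>

  /* Hypothetical wrapper: alltoallw with MPI_Aint displacements, via a
     fully connected neighborhood built on top of comm. */
  int my_alltoallw_aint(const void *sbuf, const int scounts[],
                        const MPI_Aint sdispls[], const MPI_Datatype stypes[],
                        void *rbuf, const int rcounts[],
                        const MPI_Aint rdispls[], const MPI_Datatype rtypes[],
                        MPI_Comm comm)
  {
      int size, err;
      MPI_Comm_size(comm, &size);
      int *peers = malloc(size * sizeof(int));
      for (int i = 0; i < size; i++) peers[i] = i;  /* everyone is a neighbor */
      MPI_Comm nbr;
      MPI_Dist_graph_create_adjacent(comm, size, peers, MPI_UNWEIGHTED,
                                     size, peers, MPI_UNWEIGHTED,
                                     MPI_INFO_NULL, 0 /* no reorder */, &nbr);
      err = MPI_Neighbor_alltoallw(sbuf, scounts, sdispls, stypes,
                                   rbuf, rcounts, rdispls, rtypes, nbr);
      MPI_Comm_free(&nbr);
      free(peers);
      return err;
  }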
Calling attribute functions segfaults the application starting in
v5.0.0rc3. This is really bad for users, because the segfault happens in
application code, so it takes a while to figure out what is wrong. I
spent an entire day bisecting your tags before I figured out what was
happening.
https
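For reference, the calls in question are plain attribute caching; a minimal
example (mine, not the reproducer from the report):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int keyval, flag, *val, payload = 42;
      MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN, MPI_COMM_NULL_DELETE_FN,
                             &keyval, NULL);
      MPI_Comm_set_attr(MPI_COMM_WORLD, keyval, &payload);
      MPI_Comm_get_attr(MPI_COMM_WORLD, keyval, &val, &flag);
      if (flag) printf("attribute = %d\n", *val);
      MPI_Comm_free_keyval(&keyval);
      MPI_Finalize();
      return 0;
  }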
RISC-V node. It will generate a config.cache file.
>
> Then you can
>
> grep ^ompi_cv_fortran_ config.cache
>
> to generate the file you can pass to --with-cross when cross-compiling
> on your x86 system.
>
>
> Cheers,
>
>
> Gilles
>
>
> On 9/7/2021 7:35 PM, Jeff Ham
I am attempting to cross-compile Open-MPI for RISC-V on an x86 system. I
get this error, with which I have some familiarity:
checking size of Fortran CHARACTER... configure: error: Can not determine
size of CHARACTER when cross-compiling
I know that I need to specify the size explicitly using a
I am running on a single node and do not need any network support. I am
using the NVIDIA build of Open-MPI 3.1.5. How do I tell it to never use
anything related to IB? It seems that ^openib is not enough.
Thanks,
Jeff
$ OMP_NUM_THREADS=1
/proj/nv/Linux_aarch64/21.5/comm_libs/openmpi/openmpi-3
It's not about Open-MPI, but I know of only one book on the internals of
MPI: "Inside the Message Passing Interface: Creating Fast Communication
Libraries" by Alexander Supalov.
I found it useful for understanding how MPI libraries are implemented. It
is no substitute for spending hours reading so
On Thu, Aug 20, 2020 at 3:22 AM Carlo Nervi via users <
users@lists.open-mpi.org> wrote:
> Dear OMPI community,
> I'm a simple end-user with no particular experience.
> I compile quantum chemical programs and use them in parallel.
>
Which code? Some QC codes behave differently than traditional M
To be more strictly equivalent, you will want to add -D_REENTRANT to the
substitution, but this may not affect hcoll.
https://stackoverflow.com/questions/2127797/significance-of-pthread-flag-when-compiling/2127819#2127819
The proper fix here is a change in the OMPI build system, of course, to
“Supposedly faster” isn’t a particularly good reason to change MPI
implementations, but canceling sends is hard for reasons that have nothing
to do with performance.
Also, I’d not be so eager to question the effectiveness of Open-MPI on
InfiniBand. Check the commit logs for Mellanox employees some
Don’t try to cancel sends.
https://github.com/mpi-forum/mpi-issues/issues/27 has some useful info.
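For contrast, canceling a receive is the well-defined case. A minimal sketch:

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv) {
      MPI_Init(&argc, &argv);
      int buf, cancelled;
      MPI_Request req;
      MPI_Status status;
      /* Post a receive that will never be matched, then withdraw it. */
      MPI_Irecv(&buf, 1, MPI_INT, MPI_ANY_SOURCE, 99, MPI_COMM_WORLD, &req);
      MPI_Cancel(&req);
      MPI_Wait(&req, &status);
      MPI_Test_cancelled(&status, &cancelled);
      printf("cancelled = %d\n", cancelled);
      MPI_Finalize();
      return 0;
  }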
Jeff
On Wed, Oct 2, 2019 at 7:17 AM Christian Von Kutzleben via users <
users@lists.open-mpi.org> wrote:
> Hi,
>
> I’m currently evaluating openmpi (4.0.1) for use in our application.
>
> We are us
On Tue, Aug 6, 2019 at 9:54 AM Emmanuel Thomé via users <
users@lists.open-mpi.org> wrote:
> Hi,
>
> In the attached program, the MPI_Allgather() call fails to communicate
> all data (the amount it communicates wraps around at 4G...). I'm running
> on an omnipath cluster (2018 hardware), openmpi
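For illustration only (the assumption that a byte count is computed in 32
bits somewhere is mine), a 4 GiB wraparound looks like this:

  #include <inttypes.h>
  #include <stdio.h>

  int main(void) {
      int count = 1100000000;  /* a legal MPI count of MPI_DOUBLE elements */
      uint64_t true_bytes = (uint64_t)count * sizeof(double);  /* ~8.8e9 */
      uint32_t wrapped = (uint32_t)true_bytes;  /* silently reduced mod 2^32 */
      printf("true bytes = %" PRIu64 ", wrapped = %" PRIu32 "\n",
             true_bytes, wrapped);
      return 0;
  }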
The snippets suggest you were storing a reference to an object on the
stack. Stack variables go out of scope when the function returns, and using
a reference to one after that is illegal, but the failure is often
nondeterministic. Good compilers will issue a warning about this under
the right conditions (
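A minimal illustration of that bug class (my example, not the code from the
thread):

  #include <stdio.h>

  int *dangling(void) {
      int x = 42;   /* x lives in this function's stack frame */
      return &x;    /* gcc/clang warn: address of local variable returned */
  }

  int main(void) {
      int *p = dangling();
      printf("%d\n", *p);  /* undefined behavior: the frame is gone */
      return 0;
  }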