Greetings,
Does the OMPIO library support GPU-Direct IO? NVIDIA seems to suggest that
it does
<https://developer.nvidia.com/blog/accelerating-io-in-the-modern-data-center-magnum-io-storage-partnerships/>,
but I can't find details or examples.
--
Jim Edwards
CESM Software Engineer
-January/
> 006417.html)
>
>
> there have been some attempts to deviate from the MPI standard
>
> (e.g. implement what the standard "should" be versus what the standard
> says)
>
> and they were all crushed at a very early stage in Open MPI.
>
Hi,
I have builds of openmpi-2.0.2 on two different machines, and I have a test
code which works on one machine but not on the other. I'm
struggling to understand why, and I hope that by posting here someone may
have some insight.
The test uses MPI derived datatypes and MPI_Alltoall.
ase to find the
> answer to this probable FAQ?
>
> Matt
>
> [1] Note, this might be unnecessary, but I got to the point where I wanted
> to see if I *could* do it, rather than *should*.
>
> --
> Matt Thompson
>
> Man Among Men
> Fulcrum of History
>
>
> ___
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2016/01/28287.php
>
--
Jim Edwards
CESM Software Engineer
National Center for Atmospheric Research
Boulder, CO
comply with the current MPI standard.
>
> As to why Open-MPI wastes CPU cycles testing for datatype validity when
> count=0, that is a question for someone else to answer. Implementations
> have no obligation to enforce every letter of the MPI standard.
>
> Jeff
>
> On Wed, Jan
in MPI_Alltoallw (regardless of whether the
> corresponding
> >> count is zero).
> >>
> >> another way to put this is mpich could/should have failed and/or you
> were
> >> lucky it worked.
> >>
> >> George and Jeff,
> >>
> >> do you f
12, 2016 at 7:55 PM, Jim Edwards wrote:
Maybe the example is too simple. Here is another one which
when run on two tasks sends two integers from each task to
task 0. Task 1 receives nothing. This works with mpich and impi
but fails with openmpi.
#include <stdio.h>
#include <mpi.h>
my_mpi_test(int rank, int ntasks)
{
MPI_Datatype stype, rtype;
NULL as an input of MPI_Alltoallw
>> >> - mpich does accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw *if*
>> >> the corresponding count *is* zero
>> >> - mpich does *not* accept MPI_DATATYPE_NULL as an input of MPI_Alltoallw *
fine with other MPI libraries but fails in Open MPI. If I set
the datatype to something else, say MPI_CHAR, it works fine. I think
that this is a bug in Open MPI - would you agree?
--
Jim Edwards
CESM Software Engineer
National Center for Atmospheric Research
Boulder, CO