Eric,
my short answer is no.
the long answer is:
- from MPI_Register_datarep()
/* The io framework is only initialized lazily. If it hasn't
already been initialized, do so now (note that MPI_FILE_OPEN
and MPI_FILE_DELETE are the only two places that it will be
initialized). */
- from mca_io_base_register_datarep()
/* Find the maximum additional number of bytes required by all io
   components for requests and make that the request size */
OPAL_LIST_FOREACH(cli, &ompi_io_base_framework.framework_components,
                  mca_base_component_list_item_t) {
    ...
}
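Paraphrased, that loop means the datarep is offered to every io component, and the call only succeeds if every component accepts it. Here is a rough, self-contained sketch of that semantic (hypothetical names and toy components, not the actual Open MPI source):

#include <stddef.h>
#include <stdio.h>

/* hypothetical stand-ins for the io components; not Open MPI code */
typedef int (*io_register_fn)(const char *datarep);

static int romio_register(const char *datarep) { (void) datarep; return 0;  }  /* supported   */
static int ompio_register(const char *datarep) { (void) datarep; return -1; }  /* unsupported */

static io_register_fn io_components[] = { romio_register, ompio_register };

static int register_datarep_all(const char *datarep)
{
    /* the real code lazily opens the io framework here if MPI_File_open
       or MPI_File_delete has not already done so */
    size_t i;
    for (i = 0; i < sizeof(io_components) / sizeof(io_components[0]); i++) {
        if (io_components[i](datarep) != 0) {
            return -1;   /* one component refuses -> the whole call fails */
        }
    }
    return 0;
}

int main(void)
{
    printf("register returned %d\n", register_datarep_all("int64"));
    return 0;
}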
In your case, since neither MPI_File_open nor MPI_File_delete is invoked,
the ompio component could be disabled.
But that would mean the io component selection also depends on whether
MPI_Register_datarep() has been invoked beforehand. I can foresee users
complaining about I/O performance discrepancies caused by a single line
(e.g. an MPI_Register_datarep invocation) in their code.
Now, if MPI_File_open is invoked first, MPI_Register_datarep will fail or
succeed based on the selected io component (and iirc, that could be
file(system) dependent within the same application).
I am open to suggestions, but so far I do not see a better one (other
than implementing this in OMPIO).
The patch for v1.10 can be downloaded at
https://github.com/ggouaillardet/ompi-release/commit/1589278200d9fb363d61fa20fb39a4c2fa78c942.patch
With it, the application will not crash, but will fail "nicely" in MPI_Register_datarep.
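For completeness, here is a minimal sketch (not Eric's attached program; the datarep name and the extent callback are made up for illustration) of what checking that failure could look like. Since MPI_Register_datarep is an I/O routine, errors should be returned by default (MPI_ERRORS_RETURN on MPI_FILE_NULL) rather than aborting:

#include <mpi.h>
#include <stdio.h>

/* made-up extent callback: report the native extent of the datatype */
static int extent_fn(MPI_Datatype type, MPI_Aint *extent, void *extra_state)
{
    MPI_Aint lb;
    (void) extra_state;
    return MPI_Type_get_extent(type, &lb, extent);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    /* identity read/write conversions; "int64" is an illustrative name */
    int rc = MPI_Register_datarep("int64",
                                  MPI_CONVERSION_FN_NULL,
                                  MPI_CONVERSION_FN_NULL,
                                  extent_fn, NULL);
    if (rc != MPI_SUCCESS) {
        char msg[MPI_MAX_ERROR_STRING];
        int len;
        MPI_Error_string(rc, msg, &len);
        fprintf(stderr, "MPI_Register_datarep failed: %s\n", msg);
        /* e.g. fall back to the built-in "external32" representation */
    }

    MPI_Finalize();
    return rc == MPI_SUCCESS ? 0 : 1;
}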
Cheers,
Gilles
On 3/11/2016 12:11 PM, Éric Chamberland wrote:
Thanks Gilles!
It works... I will continue my tests with that command line...
Until OMPIO supports this, is there a way to put a call into the code
to disable ompio the same way --mca io ^ompio does?
Thanks,
Eric
On 16-03-10 20:13, Gilles Gouaillardet wrote:
Eric,
I will fix the crash (fwiw, it is already fixed in v2.x and master)
Note this program cannot currently run "as is".
By default, there are two components for the io framework: ROMIO and OMPIO.
MPI_Register_datarep tries to register the datarep with every io
component, and succeeds only if the datarep was successfully registered
with all of them.
OMPIO does not currently support this
(and the stub is missing in v1.10, hence the crash).
Your test is successful if you blacklist ompio:
mpirun --mca io ^ompio ./int64
or
OMPI_MCA_io=^ompio ./int64
And you do not even need a patch for that :-)
Cheers,
Gilles
On 3/11/2016 4:47 AM, Éric Chamberland wrote:
Hi,
I have a segfault while trying to use MPI_Register_datarep with
openmpi-1.10.2:
mpic++ -g -o int64 int64.cc
./int64
[melkor:24426] *** Process received signal ***
[melkor:24426] Signal: Segmentation fault (11)
[melkor:24426] Signal code: Address not mapped (1)
[melkor:24426] Failing at address: (nil)
[melkor:24426] [ 0] /lib64/libpthread.so.0(+0xf1f0)[0x7f66cfb731f0]
[melkor:24426] *** End of error message ***
Segmentation fault (core dumped)
I have attached the beginning of a test program that uses this function.
(And btw, a totally different error occurs with mpich:
http://lists.mpich.org/pipermail/discuss/2016-March/004586.html)
Can someone help me?
Thanks,
Eric