Hi Alex,
We are currently in the process of improving thread safety in Open
MPI. First, we need to know which release you were using when you ran
into the problem.
The --enable-progress-threads flag enables an internal feature of Open MPI
that progresses non-blocking communications during computations on devices
that do not support DMA transfers (mostly TCP and shared memory). It has
known issues and is not expected to work correctly before the 1.3 release
(though we hope to fix it in the trunk within the next month or so).
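To make that concrete, this is the kind of pattern progress threads are meant
to help with: without them, a non-blocking transfer over TCP or sm typically
only advances when the application re-enters the MPI library (e.g. in
MPI_Wait or MPI_Test). A minimal sketch of such an overlap attempt, purely
illustrative and not taken from your application:

#include <mpi.h>
#include <iostream>
#include <vector>

int main(int argc, char** argv) {
    // Run with at least 2 processes (e.g. mpiexec -n 2).
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::vector<double> msg(1 << 20, rank);   // communication buffer
    std::vector<double> work(1 << 20, 1.0);   // independent computation data
    MPI_Request req = MPI_REQUEST_NULL;

    if (rank == 0) {
        // Post a non-blocking send to rank 1.
        MPI_Isend(&msg[0], static_cast<int>(msg.size()), MPI_DOUBLE,
                  1, 0, MPI_COMM_WORLD, &req);
    } else if (rank == 1) {
        // Post a non-blocking receive from rank 0.
        MPI_Irecv(&msg[0], static_cast<int>(msg.size()), MPI_DOUBLE,
                  0, 0, MPI_COMM_WORLD, &req);
    }

    // Computation that does not touch the message buffer: without a progress
    // thread, the TCP/sm transfer mostly advances only once we call back into
    // the MPI library (the MPI_Wait below).
    double sum = 0.0;
    for (std::size_t i = 0; i < work.size(); ++i)
        sum += work[i];

    if (req != MPI_REQUEST_NULL)
        MPI_Wait(&req, MPI_STATUS_IGNORE);

    std::cout << "rank " << rank << " local sum " << sum << std::endl;

    MPI_Finalize();
    return 0;
}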
The good news is that you do not need this flag to experiment with funneled
threads. Using --enable-mpi-threads alone should be enough and should
(as far as our tests go) work fine.
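To be clear about what MPI_THREAD_FUNNELED allows: only the thread that
called MPI_Init_thread should make MPI calls, while any other threads do
pure computation. Below is a minimal sketch of that pattern, assuming a
pthreads-based worker; the names and the final reduction are ours for
illustration only, not taken from your code:

#include <mpi.h>
#include <pthread.h>
#include <iostream>

// Worker thread: computation only, no MPI calls (the FUNNELED contract).
void* worker(void* arg) {
    double* partial = static_cast<double*>(arg);
    for (int i = 0; i < 1000000; ++i)
        *partial += 1.0;
    return 0;
}

int main(int argc, char** argv) {
    int provided = 0;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED)
        std::cerr << "warning: funneled thread support not provided" << std::endl;

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    // Spawn a compute-only thread and join it before touching MPI again.
    double partial = 0.0;
    pthread_t tid;
    pthread_create(&tid, 0, worker, &partial);
    pthread_join(tid, 0);

    // Only the main (funneled) thread talks to MPI.
    double total = 0.0;
    MPI_Reduce(&partial, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        std::cout << "total = " << total << std::endl;

    MPI_Finalize();
    return 0;
}

Depending on your installation you may need to add -lpthread when building
this with mpicxx.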
Let us know about any problems you encounter; since we are actively working
on this, we are eager for bug reports.
Thanks,
Aurelien
On 18 Feb 2008, at 10:30, Alexandru-Adrian TANTAR wrote:
Hi everyone,
I have a problem with funneled-thread Open MPI-based applications.
The application I am trying to launch (nothing complicated) blocks during
execution from time to time. And I have to say this is quite a
fun-breaker :D.
To put it simply: I have the following code, which does nothing more
than request a funneled "environment" which, once initialized, is shut
down right away:
#include <mpi.h>
#include <iostream>
#include <cassert>

using namespace std;

int main(int argc, char** argv) {
    int provided = MPI_THREAD_FUNNELED;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    assert(provided == MPI_THREAD_FUNNELED);
    MPI_Finalize();
    return 0;
}
For compilation, I did not specify anything special: mpicxx example.cpp -o example
To test the launch, I used a loop like the following:
for ((i=0;i<100;i++)); do echo $i "<--------- "; mpiexec -n 2 ./example; done
Now, the thing is that this usually does not get past the 30th iteration
at most. And, of course, I also get this when launching manually; it just
takes more time to get there ;).
I would greatly appreciate it if someone could give me a hint on this. Is
there anything special I should look for, a compilation switch I should
turn on, etc.? I get the same behavior on dual-core and quad-core machines,
in different environments... I don't know if this helps, but the line I
used to configure the Open MPI package is the following:
./configure --prefix=/opt/globus/openmpi/ --enable-mpi-cxx
--enable-shared --enable-smp-locks --enable-cxx-exceptions
--enable-mpi-threads --enable-progress-threads --enable-io-romio
Thanks in advance for your time and looking forward to your answer(s)!
Alex
Dr. Aurélien Bouteiller
Sr. Research Associate - Innovative Computing Laboratory
Suite 350, 1122 Volunteer Boulevard
Knoxville, TN 37996
865 974 6321