Dear Nathan and all,

Thank you for the information. I tried it this morning, but it seems to give the same result, so I will try another option. Thank you for the pointer.

I also found a statement in the FAQ regarding PETSc which says you should use the Open MPI wrapper compilers. I use the wxWidgets library, so I will try to compile with the wrapper.
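For reference, the build command I have in mind is roughly the following (the program and source file names here are just placeholders):

  mpicxx `wx-config --cxxflags` myprog.cpp -o myprog `wx-config --libs`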
On 2015/10/27, at 23:56, Nathan Hjelm <hje...@lanl.gov> wrote:

> I have seen hangs when the tcp component is in use. If you are running
> on a single machine, try running with mpirun -mca btl self,vader.
>
> -Nathan
>
> On Mon, Oct 26, 2015 at 09:17:20PM -0600, ABE Hiroshi wrote:
>> Dear All,
>>
>> I have a multithreaded program and, as a next step, I am restructuring it
>> into an MPI program. The code is to be an MPI / multithread hybrid.
>>
>> The code runs its MPI routines as follows:
>>
>> 1. Send data by MPI_Isend with exclusive tag numbers to the other node.
>>    This is done in ONE master thread.
>> 2. Receive the sent data by MPI_Irecv in several threads (usually the
>>    same number as the number of CPU cores) and do their jobs.
>>
>> There is one main thread (main process), one master thread and several
>> worker threads in the code. MPI_Isend is called in the master thread.
>> MPI_Irecv is called in the worker threads.
>>
>> My problem is that MPI_Wait stalls after MPI_Isend; MPI_Wait is called
>> immediately after MPI_Isend. The routines get through several times, but
>> after several sends MPI_Wait stalls.
>>
>> Using the Xcode debugger, I can see that the loop on c->c_signaled at
>> line 70 of opal_condition_wait (opal/threads/condition.h) never exits.
>>
>> I suspect I am doing something wrong, and I would like to know how to
>> track down the problem. I would be obliged if you could point out a
>> solution or the next direction to investigate for debugging.
>>
>> My environment: OS X 10.9.5, Apple LLVM 6.0 (LLVM 3.5svn), Open MPI 1.10.0.
>> The threads are wxThread from the wxWidgets library (3.0.2), which is a
>> wrapper around pthreads.
>>
>> Open MPI is configured with: --enable-mpi-thread-multiple --enable-debug
>> --enable-event-debug
>> Please find the details (config.log and ompi_info -all) attached to this
>> mail.
>>
>> Thank you very much in advance.
>>
>> Sincerely,
>>
>> ABE Hiroshi
>> from Tokorozawa, JAPAN

ABE Hiroshi
from Tokorozawa, JAPAN
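P.S. A stripped-down sketch of the send/receive pattern discussed above (not my actual code; the count, tag and destination are placeholders, and here both sides run as plain MPI ranks rather than inside wxThread threads):

  #include <mpi.h>
  #include <stdio.h>

  int main(int argc, char **argv)
  {
      int provided, rank;
      /* The hybrid code needs MPI_THREAD_MULTIPLE; check what is provided. */
      MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
      if (provided < MPI_THREAD_MULTIPLE)
          fprintf(stderr, "warning: MPI_THREAD_MULTIPLE not provided (got %d)\n",
                  provided);

      MPI_Comm_rank(MPI_COMM_WORLD, &rank);
      double      buf[1024] = {0};
      MPI_Request req;

      if (rank == 0) {
          /* in my program this is done by the master thread, with an
             exclusive tag per worker; tag 100 is just a placeholder here */
          MPI_Isend(buf, 1024, MPI_DOUBLE, 1, 100, MPI_COMM_WORLD, &req);
          MPI_Wait(&req, MPI_STATUS_IGNORE);  /* the call that stalls for me */
      } else if (rank == 1) {
          /* in my program this is done inside a worker thread */
          MPI_Irecv(buf, 1024, MPI_DOUBLE, 0, 100, MPI_COMM_WORLD, &req);
          MPI_Wait(&req, MPI_STATUS_IGNORE);
      }

      MPI_Finalize();
      return 0;
  }

Run with at least two ranks; with Nathan's suggestion, for example:

  mpirun -np 2 -mca btl self,vader ./a.out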