Dear All,

I tried "mpic++” to make wxWidgets library and it doesn’t change anything.

I found that openmpi-1.10.0 on my Mac (OS X 10.9.5 with Apple Clang 6.0) always 
fails in MPI_Finalize, even with a very simple program (see the bottom of this mail). 
The failure looks like this:

[Venus:60708] [ 4] Assertion failed: (OPAL_OBJ_MAGIC_ID == ((opal_object_t *) (&fl->fl_allocations))->obj_magic_id), function opal_free_list_destruct, file ../../opal/class/opal_free_list.c, line 70.
[Venus:60709] *** Process received signal ***
[Venus:60709] Signal: Abort trap: 6 (6)
[Venus:60709] Signal code:  (0)
[Venus:60709] [ 0] 0   libsystem_platform.dylib            0x00007fff8e0445aa _sigtramp + 26
[Venus:60709] [ 1] 0   ???                                 0x0000000000000000 0x0 + 0
[Venus:60709] [ 2] 0   libsystem_c.dylib                   0x00007fff9496db1a abort + 125
[Venus:60709] [ 3] 0   libsystem_c.dylib                   0x00007fff949379bf basename + 0
[Venus:60709] [ 4] 0   libopen-pal.13.dylib                0x000000010c0a9213 opal_free_list_destruct + 515
[Venus:60709] [ 5] 0   mca_osc_rdma.so                     0x000000010c542ad5 component_finalize + 101
[Venus:60709] [ 6] 0   libopen-pal.13.dylib                0x000000010f265213 opal_free_list_destruct + 515
[Venus:60708] [ 5] 0   mca_osc_rdma.so                     0x000000010f6fead5 component_finalize + 101
[Venus:60708] [ 6] 0   libmpi.12.dylib                     0x000000010bdecf8a ompi_osc_base_finalize + 74
[Venus:60709] [ 7] 0   libmpi.12.dylib                     0x000000010efaaf8a ompi_osc_base_finalize + 74
[Venus:60708] [ 7] 0   libmpi.12.dylib                     0x000000010bc7e57a ompi_mpi_finalize + 2746
[Venus:60709] [ 8] 0   libmpi.12.dylib                     0x000000010bcbb56d MPI_Finalize + 125

Regarding the code I mentioned in my original mail, the behaviour is very strange: 
when MPI_Isend is called from a differently named function, it works. I also wrote 
a sample program to try to reproduce the problem, but it works fine, apart from the 
MPI_Finalize problem above.
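
For reference, the call pattern that stalls is essentially the following (a minimal 
sketch with made-up names; the real code sends different buffers and tags for each 
message, and the attached sample program below uses the C++ bindings instead):

#include <mpi.h>

// Hypothetical sketch of the pattern described above: the master thread posts
// a non-blocking send and immediately waits on it.  The names (SendBlock,
// data, dest, tag) are illustrative only.
static void SendBlock(const double *data, int count, int dest, int tag)
{
    MPI_Request req;
    MPI_Isend(data, count, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &req);
    // MPI_Wait is called right after MPI_Isend; in the real code this call
    // stops returning after a few successful messages.
    MPI_Wait(&req, MPI_STATUS_IGNORE);
}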

So I decided to build gcc-5.2 and then build Open MPI with it, which seems to be 
what the FINK project recommends.

On 2015/10/28 at 8:29, ABE Hiroshi <hab...@gmail.com> wrote:

> Dear Nathan and all,
> 
> Thank you for your information. I tried it this morning, but it seems to give 
> the same result. I will try another option; thank you for giving me a place 
> to start. I also found a statement in the FAQ regarding PETSc which says you 
> should use the Open MPI wrapper compiler. I use the wxWidgets library, so I 
> will try to compile it with the wrapper.
> 
> On 2015/10/27 at 23:56, Nathan Hjelm <hje...@lanl.gov> wrote:
> 
>> 
>> I have seen hangs when the tcp component is in use. If you are running
>> on a single machine, try running with mpirun -mca btl self,vader.
>> 
>> -Nathan
>> 
>> On Mon, Oct 26, 2015 at 09:17:20PM -0600, ABE Hiroshi wrote:
>>>  Dear All,
>>> 
>>>  I have a multithreaded program, and as the next step I am restructuring it
>>>  into an MPI program. The code is to be an MPI / multithread hybrid.
>>> 
>>>  The code proceeds MPI-routines as:
>>> 
>>>  1. Send data by MPI_Isend with exclusive tag numbers to the other node.
>>>  This is done in ONE master thread.
>>>  2. Receive the sent data by MPI_Irecv in several threads (usually the same
>>>  as the number of CPU cores) and do their jobs.
>>> 
>>>  There is one main thread (main process) and one master thread and several
>>>  working threads in the code. MPI_Isend is called in the master thread.
>>>  MPI_Irecv is called in the working threads.
>>> 
>>>  My problem is that MPI_Wait stalls after calling MPI_Isend. MPI_Wait is
>>>  called just after MPI_Isend. The routines get through several times, but
>>>  after sending several pieces of data MPI_Wait stalls.
>>> 
>>>  Using the Xcode debugger, I can see that the loop on c->c_signaled at line
>>>  70 of opal_condition_wait (opal/threads/condition.h) never exits.
>>> 
>>>  I guess I am doing something wrong, and I would like to know how to find
>>>  the problem. I would be obliged if you could point me to a solution or to
>>>  the next direction to investigate for debugging.
>>> 
>>>  My environment: OS X 10.9.5, Apple LLVM 6.0 (LLVM 3.5svn), Open MPI 1.10.0.
>>>  The threads are wxThread from the wxWidgets library (3.0.2), which is a
>>>  wrapper around pthreads.
>>> 
>>>  Open MPI is configured with: --enable-mpi-thread-multiple --enable-debug
>>>  --enable-event-debug
>>>  Please find the details (config.log and the output of ompi_info -all)
>>>  attached to this mail.
>>> 
>>>  Thank you very much in advance.
>>> 
>>>  Sincerely,
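
To make the structure described in my original mail (quoted above) concrete, here is 
a rough sketch. It is illustration only: it uses std::thread instead of wxThread, the 
plain C API instead of the C++ bindings, and made-up names (kNumWorkers, master, 
worker); it is not the real code.

#include <mpi.h>

#include <cstdio>
#include <thread>
#include <vector>

static const int kNumWorkers = 4;    // usually the number of CPU cores
static const int kCount      = 128;

// Working thread: each worker receives on its own, exclusive tag.
static void worker(int tag)
{
    double buf[kCount];
    MPI_Request req;
    MPI_Irecv(buf, kCount, MPI_DOUBLE, 0, tag, MPI_COMM_WORLD, &req);
    MPI_Wait(&req, MPI_STATUS_IGNORE);
    // ... the worker then does its job with buf ...
}

// Master thread on rank 0: one MPI_Isend per worker, tag = worker index,
// with MPI_Wait called just after MPI_Isend (the call that stalls for me).
static void master(int dest)
{
    double data[kCount] = {0.0};
    for (int tag = 0; tag < kNumWorkers; ++tag) {
        MPI_Request req;
        MPI_Isend(data, kCount, MPI_DOUBLE, dest, tag, MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char *argv[])
{
    // MPI is called from several threads, so MPI_THREAD_MULTIPLE is requested.
    int provided = MPI_THREAD_SINGLE;
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
    if (provided < MPI_THREAD_MULTIPLE)
        std::fprintf(stderr, "warning: MPI_THREAD_MULTIPLE not provided\n");

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size >= 2) {
        if (rank == 0) {
            std::thread m(master, 1);           // the single master thread
            m.join();
        } else if (rank == 1) {
            std::vector<std::thread> pool;      // the working threads
            for (int tag = 0; tag < kNumWorkers; ++tag)
                pool.emplace_back(worker, tag);
            for (auto &t : pool)
                t.join();
        }
    }

    MPI_Finalize();
    return 0;
}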

ABE Hiroshi
 from Tokorozawa, JAPAN


#include <iostream>
#include <cstdio>    // printf, sprintf
#include <cstring>   // strncat, strlen

#include <mpi.h>

#define bufdim        128

int main(int argc, char *argv[])
{
    char buffer[bufdim];
    char id_str[32];

    //  mpi :
    MPI::Init(argc, argv);
    MPI::Status status;

    int size;
    int rank;
    int tag;

    size = MPI::COMM_WORLD.Get_size();
    rank = MPI::COMM_WORLD.Get_rank();
    tag  = 0;

    if (rank == 0) {
        // rank 0 sends a greeting to every other rank, then collects the replies
        printf("%d: we have %d processors\n", rank, size);
        for (int i = 1; i < size; ++i) {
            sprintf(buffer, "hello  %d! ", i);
            MPI::COMM_WORLD.Send(buffer, bufdim, MPI::CHAR, i, tag);
        }
        for (int i = 1; i < size; ++i) {
            MPI::COMM_WORLD.Recv(buffer, bufdim, MPI::CHAR, i, tag, status);
            printf("%d: %s\n", rank, buffer);
        }
    }
    else {
        // the other ranks receive the greeting, append their id and reply
        MPI::COMM_WORLD.Recv(buffer, bufdim, MPI::CHAR, 0, tag, status);

        sprintf(id_str, "processor %d ", rank);
        strncat(buffer, id_str, bufdim - strlen(buffer) - 1);
        strncat(buffer, "reporting for duty\n", bufdim - strlen(buffer) - 1);

        MPI::COMM_WORLD.Send(buffer, bufdim, MPI::CHAR, 0, tag);
    }
    MPI::Finalize();
    return 0;
}
