Hi,
I just tried with 1.8.2rc4 and it does the same thing:

[mboisson@helios-login1 simplearrayhello]$ ./hello
[helios-login1:11739] *** Process received signal ***
[helios-login1:11739] Signal: Segmentation fault (11)
[helios-login1:11739] Signal code: Address not mapped (1)
[helios-login1:11739] Failing at address: 0x30
[helios-login1:11739] [ 0] /lib64/libpthread.so.0[0x381c00f710]
[helios-login1:11739] [ 1] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0xfa238)[0x7f7166a04238]
[helios-login1:11739] [ 2] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0xfbad4)[0x7f7166a05ad4]
[helios-login1:11739] [ 3] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(ompi_btl_openib_connect_base_select_for_local_port+0xcf)[0x7f71669ffddf]
[helios-login1:11739] [ 4] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0xe4773)[0x7f71669ee773]
[helios-login1:11739] [ 5] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_btl_base_select+0x168)[0x7f71669e46a8]
[helios-login1:11739] [ 6] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_bml_r2_component_init+0x11)[0x7f71669e3fd1]
[helios-login1:11739] [ 7] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_bml_base_init+0x7f)[0x7f71669e275f]
[helios-login1:11739] [ 8] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0x1e602f)[0x7f7166af002f]
[helios-login1:11739] [ 9] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_pml_base_select+0x3b6)[0x7f7166aedc26]
[helios-login1:11739] [10] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(ompi_mpi_init+0x4e3)[0x7f7166988863]
[helios-login1:11739] [11] /software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37/lib/libmpi.so.1(MPI_Init_thread+0x15d)[0x7f71669a86fd]
[helios-login1:11739] [12] ./hello(LrtsInit+0x72)[0x4fcf02]
[helios-login1:11739] [13] ./hello(ConverseInit+0x70)[0x4ff680]
[helios-login1:11739] [14] ./hello(main+0x27)[0x470767]
[helios-login1:11739] [15] /lib64/libc.so.6(__libc_start_main+0xfd)[0x381bc1ed1d]
[helios-login1:11739] [16] ./hello[0x470b71]
[helios-login1:11739] *** End of error message ***
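
For what it's worth, the backtrace again points at the openib BTL connection setup inside MPI_Init_thread, so the next thing I can try is excluding that BTL at run time. This is only a sketch and not tested yet; it assumes the login node simply has no usable InfiniBand port:

     # Exclude the openib BTL via the standard MCA mechanism (untested sketch);
     # mainly to check whether the crash is specific to the openib component.
     export OMPI_MCA_btl=^openib    # env form, since ./hello is launched directly
     ./hello
     # equivalent when launching through mpirun:
     mpirun --mca btl ^openib -np 2 ./hello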



Maxime

On 2014-08-14 10:04, Jeff Squyres (jsquyres) wrote:
Can you try the latest 1.8.2 rc tarball?  (just released yesterday)

     http://www.open-mpi.org/software/ompi/v1.8/
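
For reference, a typical build sequence for the rc tarball would look roughly like the following; the install prefix is copied from the path in the backtrace and the CUDA location is a placeholder, so treat this as a sketch rather than exact instructions:

     tar xf openmpi-1.8.2rc4.tar.bz2     # filename assumed; download from the page above
     cd openmpi-1.8.2rc4
     ./configure --prefix=/software-gpu/mpi/openmpi/1.8.2rc4_gcc4.8_cuda6.0.37 \
                 --with-cuda=/path/to/cuda-6.0.37    # CUDA install path is an assumption
     make -j8 && make install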



On Aug 14, 2014, at 8:39 AM, Maxime Boissonneault 
<maxime.boissonnea...@calculquebec.ca> wrote:

Hi,
I compiled Charm++ 6.6.0rc3 using
./build charm++ mpi-linux-x86_64 smp --with-production
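
For reference, the example below was then built and run roughly like this (the make invocation is from memory and may differ from the exact test Makefile target):

     cd mpi-linux-x86_64-smp/tests/charm++/simplearrayhello
     make       # builds the ./hello test binary (assumed default target)
     ./hello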

When compiling and running the simple example in
mpi-linux-x86_64-smp/tests/charm++/simplearrayhello/,
I get a segmentation fault that traces back to Open MPI:
[mboisson@helios-login1 simplearrayhello]$ ./hello
[helios-login1:01813] *** Process received signal ***
[helios-login1:01813] Signal: Segmentation fault (11)
[helios-login1:01813] Signal code: Address not mapped (1)
[helios-login1:01813] Failing at address: 0x30
[helios-login1:01813] [ 0] /lib64/libpthread.so.0[0x381c00f710]
[helios-login1:01813] [ 1] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0xf78f8)[0x7f2cd1f6b8f8]
[helios-login1:01813] [ 2] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0xf8f64)[0x7f2cd1f6cf64]
[helios-login1:01813] [ 3] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(ompi_btl_openib_connect_base_select_for_local_port+0xcf)[0x7f2cd1f672af]
[helios-login1:01813] [ 4] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0xe1ad7)[0x7f2cd1f55ad7]
[helios-login1:01813] [ 5] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_btl_base_select+0x168)[0x7f2cd1f4bf28]
[helios-login1:01813] [ 6] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_bml_r2_component_init+0x11)[0x7f2cd1f4b851]
[helios-login1:01813] [ 7] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_bml_base_init+0x7f)[0x7f2cd1f4a03f]
[helios-login1:01813] [ 8] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(+0x1e0d17)[0x7f2cd2054d17]
[helios-login1:01813] [ 9] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(mca_pml_base_select+0x3b6)[0x7f2cd20529d6]
[helios-login1:01813] [10] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(ompi_mpi_init+0x4e4)[0x7f2cd1ef0c14]
[helios-login1:01813] [11] /software-gpu/mpi/openmpi/1.8.1_gcc4.8_cuda6.0.37/lib/libmpi.so.1(MPI_Init_thread+0x15d)[0x7f2cd1f1065d]
[helios-login1:01813] [12] ./hello(LrtsInit+0x72)[0x4fcf02]
[helios-login1:01813] [13] ./hello(ConverseInit+0x70)[0x4ff680]
[helios-login1:01813] [14] ./hello(main+0x27)[0x470767]
[helios-login1:01813] [15] /lib64/libc.so.6(__libc_start_main+0xfd)[0x381bc1ed1d]
[helios-login1:01813] [16] ./hello[0x470b71]


Does anyone have a clue how to fix this?

Thanks,

--
---------------------------------
Maxime Boissonneault
Computational analyst - Calcul Québec, Université Laval
Ph.D. in physics




--
---------------------------------
Maxime Boissonneault
Computational analyst - Calcul Québec, Université Laval
Ph.D. in physics
