Hello,

I've compiled Open MPI 1.6.3 with --enable-mpi-thread-multiple --with-tm --with-openib --enable-opal-multi-threads.
When I use, for example, the PingPong benchmark from the Intel MPI Benchmarks, which calls MPI_Init, the openib BTL is used and everything works fine. When the benchmark instead calls MPI_Init_thread with MPI_THREAD_MULTIPLE as the requested threading level, the openib BTL fails to load but gives no further hint as to the reason:

  mpirun -v -n 2 -npernode 1 -gmca btl_base_verbose 200 ./imb-tm-openmpi-ts pingpong
  ...
  [l0519:08267] select: initializing btl component openib
  [l0519:08267] select: init of component openib returned failure
  [l0519:08267] select: module openib unloaded
  ...

The question now is: is support for MPI_THREAD_MULTIPLE currently just missing in the openib module, or are other errors occurring, and if so, how can they be identified?

Attached are the config.log from the Open MPI build, the ompi_info output, and the output of the IMB PingPong benchmarks.

The system used consisted of two nodes with:
- OpenFabrics 1.5.3
- CentOS release 5.8 (Final)
- Linux kernel 2.6.18-308.11.1.el5 x86_64
- OpenSM 3.3.3

[l0519] src > ibv_devinfo
hca_id: mlx4_0
        transport:              InfiniBand (0)
        fw_ver:                 2.7.000
        node_guid:              0030:48ff:fff6:31e4
        sys_image_guid:         0030:48ff:fff6:31e7
        vendor_id:              0x02c9
        vendor_part_id:         26428
        hw_ver:                 0xB0
        board_id:               SM_2122000001000
        phys_port_cnt:          1
                port:   1
                        state:          PORT_ACTIVE (4)
                        max_mtu:        2048 (4)
                        active_mtu:     2048 (4)
                        sm_lid:         48
                        port_lid:       278
                        port_lmc:       0x00

Thanks for the help in advance.

Regards,
Markus

--
Markus Wittmann, HPC Services
Friedrich-Alexander-Universität Erlangen-Nürnberg
Regionales Rechenzentrum Erlangen (RRZE)
Martensstrasse 1, 91058 Erlangen, Germany
Tel.: +49 9131 85-20104
markus.wittm...@fau.de
http://www.rrze.fau.de/hpc/
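P.S.: For reference, a minimal sketch (my own test program, not taken from the IMB sources) of how the initialization in question looks, including a check of the threading level the library actually grants, since MPI_Init_thread may return a lower level than requested:

```c
/* Minimal sketch: request MPI_THREAD_MULTIPLE and report what the
 * MPI library actually provides. Build with: mpicc -o check_threads check_threads.c */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided;

    /* Request full multi-threading support; the library reports the
     * level it can actually deliver in 'provided'. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

    if (provided < MPI_THREAD_MULTIPLE)
        printf("Requested MPI_THREAD_MULTIPLE, got level %d\n", provided);
    else
        printf("MPI_THREAD_MULTIPLE is supported\n");

    MPI_Finalize();
    return 0;
}
```

Running this under the build above would at least show whether the library downgrades the threading level independently of the openib BTL selection.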
imb.txt.bz2
Description: application/bzip
imb-tm.txt.bz2
Description: application/bzip
ompi_info.txt.bz2
Description: application/bzip
config.log.bz2
Description: application/bzip