Hello Curtis,

yes, I have done this with ompi-trunk: apart from --enable-mpi-threads and --enable-progress-threads, you need to compile Open MPI with --enable-mca-no-build=memory-ptmalloc2, and of course the usual option for debugging (--enable-debug) and the options for icc/ifort/icpc:

  CFLAGS='-debug all -inline-debug-info -tcheck'
  CXXFLAGS='-debug all -inline-debug-info -tcheck'
  FFLAGS='-debug all -tcheck'
  LDFLAGS='-tcheck'
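Assembled into a single configure invocation, that would look roughly like the following (the install prefix and the compiler variables CC/CXX/F77/FC are my additions here, adjust them to your installation):

  ./configure --prefix=$HOME/openmpi-tcheck \
      --enable-mpi-threads --enable-progress-threads \
      --enable-debug \
      --enable-mca-no-build=memory-ptmalloc2 \
      CC=icc CXX=icpc F77=ifort FC=ifort \
      CFLAGS='-debug all -inline-debug-info -tcheck' \
      CXXFLAGS='-debug all -inline-debug-info -tcheck' \
      FFLAGS='-debug all -tcheck' \
      LDFLAGS='-tcheck'

After building, something like "ompi_info | grep -i thread" should confirm that MPI threads and progress threads were actually enabled in the installation you are running against.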
Then, as you already noted, run the application with --mca btl tcp,sm,self:

  mpirun --mca btl tcp,sm,self -np 2 \
    tcheck_cl \
      --reinstrument \
      -u all \
      -c \
      -d '/tmp/hpcraink_$$__tc_cl_cache' \
      -f html \
      -o 'tc_mpi_test_suite_$$.html' \
      -p 'file=tc_mpi_test_suite_%H_%I, \
          pad=128, \
          delay=2, \
          stall=2' \
      -- \
      ./mpi_test_suite -j 2 -r FULL -t 'Ring Ibsend' -d MPI_INT

The --reinstrument is not really necessary, nor is setting the padding and the delay for thread startup; shortening the delay for stalls to 2 seconds also does not trigger any deadlocks. This was with icc-9.1 and itt-3.0 23205.

Hope this helps,
Rainer

On Friday 23 March 2007 05:22, Curtis Janssen wrote:
> I'm interested in getting OpenMPI working with a multi-threaded
> application (MPI_THREAD_MULTIPLE is required). I'm trying the trunk
> from a couple weeks ago (1.3a1r14001) compiled for multi-threading and
> threaded progress, and have had success with some small cases. Larger
> cases with the same algorithms fail (they work with MPICH2 1.0.5/TCP and
> other thread-safe MPIs, so I don't think it is an application bug). I
> don't mind doing a little work to track down the problem, so I'm trying
> to use the Intel Thread Checker. I have the thread checker working with
> my application when using Intel's MPI, but with OpenMPI it hangs.
> OpenMPI is compiled for OFED 1.1, but I'm overriding communications with
> "-gmca btl self,tcp" in the hope that OpenMPI won't do anything funky
> that would cause the thread checker problems (like RDMA or writes from
> other processes into shared memory segments). Has anybody used the
> Intel Thread Checker with OpenMPI successfully?
>
> Thanks,
> Curt

--
----------------------------------------------------------------
Dipl.-Inf. Rainer Keller       http://www.hlrs.de/people/keller
High Performance Computing     Tel:   ++49 (0)711-685 6 5858
Center Stuttgart (HLRS)        Fax:   ++49 (0)711-685 6 5832
POSTAL: Nobelstrasse 19        email: kel...@hlrs.de
ACTUAL: Allmandring 30, R.O.030    AIM: rusraink
70550 Stuttgart