Stefan, I don't know if this is related to your issue, but FYI...
> Those are async progress threads - they block unless something requires doing.
>
>> On Apr 15, 2015, at 8:36 AM, Sasso, John (GE Power & Water, Non-GE)
>> <john1.sa...@ge.com> wrote:
>>
>> I stumbled upon something while using 'ps -eFL' to view threads of
>> processes, and Google searches have failed to answer my question. This
>> question holds for OpenMPI 1.6.x and even OpenMPI 1.4.x.
>>
>> For a program which is pure MPI (built and run using OpenMPI) and does
>> not use Pthreads or OpenMP, why does each MPI task appear as having
>> 3 threads:
>>
>> UID    PID    PPID   LWP    C   NLWP  SZ      RSS     PSR  STIME  TTY  TIME      CMD
>> sasso  20512  20493  20512  99  3     187849  582420  14   11:01  ?    00:26:37  /home/sasso/mpi_example.exe
>> sasso  20512  20493  20588  0   3     187849  582420  11   11:01  ?    00:00:00  /home/sasso/mpi_example.exe
>> sasso  20512  20493  20599  0   3     187849  582420  12   11:01  ?    00:00:00  /home/sasso/mpi_example.exe
>>
>> whereas if I compile and run a non-MPI program, 'ps -eFL' shows it
>> running as a single thread?
>>
>> Granted, the CPU utilization (C) for 2 of the 3 threads is zero, but the
>> threads are bound to different processors (11, 12, 14). I am curious as
>> to why this is, not complaining that there is a problem. Thanks!
>>
>> --john

-----Original Message-----
From: users [mailto:users-boun...@open-mpi.org] On Behalf Of Au Eelis
Sent: Thursday, January 07, 2016 7:10 AM
To: us...@open-mpi.org
Subject: [OMPI users] Singleton process spawns additional thread

Hi!

I have a weird problem with executing a singleton OpenMPI program: an
additional thread runs at full load while the master thread performs the
actual calculations. In contrast, executing "mpirun -np 1 [executable]"
performs the same calculation at the same speed, but the additional thread
is idle.

In my understanding, both runs should behave the same way (i.e., one working
thread) for a program which simply moves some data around (mainly some
MPI_BCAST and MPI_GATHER calls).

I observed this behaviour with OpenMPI 1.10.1, both with ifort 16.0.1 and
with gfortran 5.3.0. I have put together a minimal working example, which is
appended to this mail.

Am I missing something?

Best regards,
Stefan

----- MWE:

Compile this with "mpifort main.f90". When executing with "./a.out", there
is a thread wasting cycles while the master thread waits for input. When
executing with "mpirun -np 1 ./a.out", this thread is idle.

program main
    use mpi_f08
    implicit none

    integer :: ierror, rank

    call MPI_Init(ierror)
    call MPI_Comm_Rank(MPI_Comm_World, rank, ierror)

    ! let master thread wait on [RETURN]-key
    if (rank == 0) then
        read(*,*)
    end if

    write(*,*) rank

    call mpi_barrier(mpi_comm_world, ierror)

    call MPI_Finalize(ierror)
end program
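As an aside (this sketch is not from the original posts), one simple thing to
compare between the "./a.out" and "mpirun -np 1 ./a.out" launches is the
thread-support level the library reports at startup. The following is a
minimal mpi_f08 sketch, assuming the same toolchain as the MWE above; the
program name and the choice of MPI_THREAD_SINGLE as the requested level are
illustrative assumptions, not something suggested on the list.

program thread_support_check
    use mpi_f08
    implicit none

    integer :: provided, ierror

    ! Request the lowest level; 'provided' reports what the library grants.
    call MPI_Init_thread(MPI_THREAD_SINGLE, provided, ierror)

    if (provided == MPI_THREAD_SINGLE) then
        write(*,*) 'provided: MPI_THREAD_SINGLE'
    else if (provided == MPI_THREAD_FUNNELED) then
        write(*,*) 'provided: MPI_THREAD_FUNNELED'
    else if (provided == MPI_THREAD_SERIALIZED) then
        write(*,*) 'provided: MPI_THREAD_SERIALIZED'
    else
        write(*,*) 'provided: MPI_THREAD_MULTIPLE'
    end if

    call MPI_Finalize(ierror)
end program thread_support_check

Built the same way as the MWE, it can be run once as a singleton and once
under mpirun to see whether the two launch modes report anything different.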