SOLVED: Thank you, Ralph, for your clarification:
I realized that the problem was due to a wrong command of mine, but I
had not been able to figure out where the error was!
So I can now use hybrid programming on my cluster.
Thank you to all,
Fedele
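For the record, the fix is just an explicit binding policy on the mpirun line, as Ralph suggests below (a sketch of the corrected command; --report-bindings is optional and only prints where each rank lands):

```shell
# Let each rank's OpenMP threads spread across a whole socket
# instead of being pinned to a single core
mpirun -n 2 -npernode 1 --bind-to socket --report-bindings \
       -x OMP_NUM_THREADS=4 ./pi_parallel_do.f.exe
```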

On Fri, 26/06/2015 at 04.42 -0700, Ralph Castain wrote:
> Starting in the 1.7 series, Open MPI automatically binds application
> processes. By default, we bind to core if np <= 2, otherwise we bind
> to socket. So your proc, and all its threads, are being bound to a
> single core.
> 
> 
> What you probably want to do is add either "--bind-to none" or
> "--bind-to socket" to your mpirun cmd line.
> 
> 
> 
> On Fri, Jun 26, 2015 at 2:00 AM, Fedele Stabile
> <fedele.stab...@fis.unical.it> wrote:
>         Hi,
>         I'm trying hybrid programming and I have this strange issue:
>         running the Fortran code listed below, it uses only 200% of
>         the CPU on each node, even though I request 4 threads with
>         the command
>         mpirun -n 2 -npernode 1 -x OMP_NUM_THREADS=4 ./pi_parallel_do.f.exe
>         To explain: four threads are created, but they behave as if
>         only two cores were available.
>         However, if I run the pure OpenMP version it loads 400% of
>         the CPU, so it does work on four cores.
>         This is the code, and below is the output of ompi_info (as
>         requested by Howard Pritchard):
>         $ cat pi_parallel_do.f
>                 PROGRAM Compute_PI
>                    IMPLICIT NONE
>                 include "mpif.h"
>                 integer numprocs, rank, ierr
>                    INTEGER*8           N, i
>                    DOUBLE PRECISION  w, x, sum
>                    DOUBLE PRECISION  pi, mypi
>                    double precision n_mpi, pi_mpi
>                 call MPI_Init(ierr)
>                 call MPI_Comm_size(MPI_COMM_WORLD, numprocs, ierr)
>                 call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
>                    N = 500000000         !! Number of intervals
>                    w = 1.0d0/(1.d0*N)    !! width of each interval
>                    sum = 0.0d0
>                 pi_mpi = 0.0
>         !$OMP    PARALLEL PRIVATE(x, mypi)
>                    mypi = 0.0d0
>         !$OMP    DO
>                    DO i = 0, N-1                !! Parallel Loop
>                      x = w * (i + 0.5d0)
>                      mypi = mypi + w*4.d0/(1.d0 + x * x)
>                    END DO
>         !$OMP    END DO
>         !$OMP CRITICAL
>                    pi_mpi = pi_mpi + mypi
>         !$OMP END CRITICAL
>         !$OMP    END PARALLEL
>                 call mpi_reduce(pi_mpi, pi, 1, MPI_DOUBLE_PRECISION,
>      &                          MPI_SUM, 0, MPI_COMM_WORLD, ierr)
>                    PRINT *, "Pi = ", pi
>                 call MPI_Finalize(ierr)
>                    END PROGRAM
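This fixed-form source would typically be built with the MPI Fortran wrapper and OpenMP enabled (a sketch; the wrapper name and flag assume the GNU toolchain shown in the ompi_info output below):

```shell
# Compile the hybrid MPI+OpenMP program with Open MPI's Fortran wrapper
mpif90 -fopenmp pi_parallel_do.f -o pi_parallel_do.f.exe
```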
>         
>         and output of ompi_info:
>         $ ompi_info
>                          Package: Open MPI root@newton-s Distribution
>                         Open MPI: 1.8.4
>           Open MPI repo revision: v1.8.3-330-g0344f04
>            Open MPI release date: Dec 19, 2014
>                         Open RTE: 1.8.4
>           Open RTE repo revision: v1.8.3-330-g0344f04
>            Open RTE release date: Dec 19, 2014
>                             OPAL: 1.8.4
>               OPAL repo revision: v1.8.3-330-g0344f04
>                OPAL release date: Dec 19, 2014
>                          MPI API: 3.0
>                     Ident string: 1.8.4
>                           Prefix: /data/apps/mpi/openmpi-1.8.4-gnu
>          Configured architecture: x86_64-unknown-linux-gnu
>                   Configure host: newton-s
>                    Configured by: root
>                    Configured on: Mon Apr 13 18:29:51 CEST 2015
>                   Configure host: newton-s
>                         Built by: root
>                         Built on: Mon 13 Apr 2015, 18.42.15, CEST
>                       Built host: newton-s
>                       C bindings: yes
>                     C++ bindings: yes
>                      Fort mpif.h: yes (all)
>                     Fort use mpi: yes (limited: overloading)
>                Fort use mpi size: deprecated-ompi-info-value
>                 Fort use mpi_f08: no
>          Fort mpi_f08 compliance: The mpi_f08 module was not built
>           Fort mpi_f08 subarrays: no
>                    Java bindings: no
>           Wrapper compiler rpath: runpath
>                       C compiler: gcc
>              C compiler absolute: /usr/bin/gcc
>           C compiler family name: GNU
>               C compiler version: 4.4.7
>                     C++ compiler: g++
>            C++ compiler absolute: /usr/bin/g++
>                    Fort compiler: gfortran
>                Fort compiler abs: /usr/bin/gfortran
>                  Fort ignore TKR: no
>            Fort 08 assumed shape: no
>               Fort optional args: no
>                   Fort INTERFACE: yes
>             Fort ISO_FORTRAN_ENV: no
>                Fort STORAGE_SIZE: no
>               Fort BIND(C) (all): no
>               Fort ISO_C_BINDING: yes
>          Fort SUBROUTINE BIND(C): no
>                Fort TYPE,BIND(C): no
>          Fort T,BIND(C,name="a"): no
>                     Fort PRIVATE: no
>                   Fort PROTECTED: no
>                    Fort ABSTRACT: no
>                Fort ASYNCHRONOUS: no
>                   Fort PROCEDURE: no
>                    Fort C_FUNLOC: no
>          Fort f08 using wrappers: no
>                  Fort MPI_SIZEOF: no
>                      C profiling: yes
>                    C++ profiling: yes
>            Fort mpif.h profiling: yes
>           Fort use mpi profiling: yes
>            Fort use mpi_f08 prof: no
>                   C++ exceptions: no
>                   Thread support: posix (MPI_THREAD_MULTIPLE: yes, OPAL support: yes,
>                                   OMPI progress: no, ORTE progress: yes, Event lib: yes)
>                    Sparse Groups: no
>           Internal debug support: no
>           MPI interface warnings: yes
>              MPI parameter check: runtime
>         Memory profiling support: no
>         Memory debugging support: no
>                  libltdl support: yes
>            Heterogeneous support: no
>          mpirun default --prefix: yes
>                  MPI I/O support: yes
>                MPI_WTIME support: gettimeofday
>              Symbol vis. support: yes
>            Host topology support: yes
>                   MPI extensions:
>            FT Checkpoint support: no (checkpoint thread: no)
>            C/R Enabled Debugging: no
>              VampirTrace support: yes
>           MPI_MAX_PROCESSOR_NAME: 256
>             MPI_MAX_ERROR_STRING: 256
>              MPI_MAX_OBJECT_NAME: 64
>                 MPI_MAX_INFO_KEY: 36
>                 MPI_MAX_INFO_VAL: 256
>                MPI_MAX_PORT_NAME: 1024
>           MPI_MAX_DATAREP_STRING: 128
>                    MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.8.4)
>                     MCA compress: bzip (MCA v2.0, API v2.0, Component v1.8.4)
>                     MCA compress: gzip (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA crs: none (MCA v2.0, API v2.0, Component v1.8.4)
>                           MCA db: hash (MCA v2.0, API v1.0, Component v1.8.4)
>                           MCA db: print (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA event: libevent2021 (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA hwloc: hwloc191 (MCA v2.0, API v2.0, Component v1.8.4)
>                           MCA if: posix_ipv4 (MCA v2.0, API v2.0, Component v1.8.4)
>                           MCA if: linux_ipv6 (MCA v2.0, API v2.0, Component v1.8.4)
>                  MCA installdirs: env (MCA v2.0, API v2.0, Component v1.8.4)
>                  MCA installdirs: config (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA memory: linux (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA pstat: linux (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA sec: basic (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA shmem: posix (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA timer: linux (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA dfs: app (MCA v2.0, API v1.0, Component v1.8.4)
>                          MCA dfs: orted (MCA v2.0, API v1.0, Component v1.8.4)
>                          MCA dfs: test (MCA v2.0, API v1.0, Component v1.8.4)
>                       MCA errmgr: default_app (MCA v2.0, API v3.0, Component v1.8.4)
>                       MCA errmgr: default_hnp (MCA v2.0, API v3.0, Component v1.8.4)
>                       MCA errmgr: default_orted (MCA v2.0, API v3.0, Component v1.8.4)
>                       MCA errmgr: default_tool (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA ess: env (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA ess: hnp (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA ess: singleton (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA ess: slurm (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA ess: tm (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA ess: tool (MCA v2.0, API v3.0, Component v1.8.4)
>                        MCA filem: raw (MCA v2.0, API v2.0, Component v1.8.4)
>                      MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA iof: hnp (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA iof: mr_hnp (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA iof: mr_orted (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA iof: orted (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA iof: tool (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA odls: default (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA oob: tcp (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA plm: isolated (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA plm: rsh (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA plm: slurm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA plm: tm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA ras: loadleveler (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA ras: simulator (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA ras: slurm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA ras: tm (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: lama (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: mindist (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: ppr (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA rmaps: staged (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA rml: oob (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA routed: binomial (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA routed: debruijn (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA routed: direct (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA routed: radix (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA state: app (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA state: hnp (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA state: novm (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA state: orted (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA state: staged_hnp (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA state: staged_orted (MCA v2.0, API v1.0, Component v1.8.4)
>                        MCA state: tool (MCA v2.0, API v1.0, Component v1.8.4)
>                    MCA allocator: basic (MCA v2.0, API v2.0, Component v1.8.4)
>                    MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA bcol: basesmuma (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA bcol: ptpcoll (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA bml: r2 (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA btl: openib (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA btl: self (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA btl: sm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA btl: smcuda (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA btl: tcp (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA btl: vader (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: basic (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: inter (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: libnbc (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: ml (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: self (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: sm (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA coll: tuned (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA dpm: orte (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA fbtl: posix (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA fcoll: dynamic (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA fcoll: individual (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA fcoll: static (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA fcoll: two_phase (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA fcoll: ylib (MCA v2.0, API v2.0, Component v1.8.4)
>                           MCA fs: ufs (MCA v2.0, API v2.0, Component v1.8.4)
>                           MCA io: ompio (MCA v2.0, API v2.0, Component v1.8.4)
>                           MCA io: romio (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA mpool: gpusm (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA mpool: grdma (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA mpool: rgpusm (MCA v2.0, API v2.0, Component v1.8.4)
>                        MCA mpool: sm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA mtl: psm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA osc: rdma (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA osc: sm (MCA v2.0, API v3.0, Component v1.8.4)
>                          MCA pml: v (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA pml: bfo (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA pml: cm (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.8.4)
>                       MCA rcache: vma (MCA v2.0, API v2.0, Component v1.8.4)
>                          MCA rte: orte (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA sbgp: basesmsocket (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA sbgp: basesmuma (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA sbgp: p2p (MCA v2.0, API v2.0, Component v1.8.4)
>                     MCA sharedfp: individual (MCA v2.0, API v2.0, Component v1.8.4)
>                     MCA sharedfp: lockedfile (MCA v2.0, API v2.0, Component v1.8.4)
>                     MCA sharedfp: sm (MCA v2.0, API v2.0, Component v1.8.4)
>                         MCA topo: basic (MCA v2.0, API v2.1, Component v1.8.4)
>                    MCA vprotocol: pessimist (MCA v2.0, API v2.0, Component v1.8.4)
>         
>         
>         
>         
>         _______________________________________________
>         users mailing list
>         us...@open-mpi.org
>         Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
>         Link to this post: http://www.open-mpi.org/community/lists/users/2015/06/27201.php
> 
> 


