Hello Fedele,

Would it be possible to build the Open MPI package with gfortran
and run the test again?
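
For reference, a gfortran build could be configured along these lines
(the install prefix here is just a placeholder; keep your other
--with-* options as they are):

    ./configure CC=gcc CXX=g++ FC=gfortran \
        --prefix=/path/to/openmpi-1.8.4-gnu \
        --enable-mpirun-prefix-by-default --enable-mpi-fortran \
        --enable-mpi-thread-multiple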

Do you observe this problem if you build an OpenMP-only version of the
test case (OpenMP, not MPI)?
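
Something like the following stripped-down variant of your program (an
untested sketch, with all the MPI calls removed) should be enough to
check, since forrtl error 40 comes from the Fortran runtime's I/O layer
rather than from MPI.  Compile it with ifort -openmp (or -qopenmp on
newer Intel compilers):

        program hello_omp
!       OpenMP-only reproducer: same write(*,*) inside a parallel region
        integer iam, np
        integer omp_get_num_threads, omp_get_thread_num
        iam = 0
        np = 1
!$omp parallel default(shared) private(iam, np)
        np = omp_get_num_threads()
        iam = omp_get_thread_num()
        write(*,*) "Hello from thread ", iam, " out of ", np
!$omp end parallel
        end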

I can't reproduce this problem using gfortran.  I don't have access to an
Intel compiler at the moment.

Also, please send the output of ompi_info.
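In particular, the Fortran and thread-support lines are of interest; a
quick way to pull them out (please attach the full file as well):

    ompi_info > ompi_info.txt
    grep -i -E 'fort|thread' ompi_info.txt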

Thanks,

Howard


2015-06-25 10:37 GMT-06:00 Fedele Stabile <fedele.stab...@fis.unical.it>:

> Hello to all,
> I'm trying hybrid OpenMP + MPI programming.  When I run the simple code
> listed below, I get this error:
> forrtl: severe (40): recursive I/O operation, unit -1, file unknown
> Image              PC                Routine  Line     Source
> aa                 0000000000403D8E  Unknown  Unknown  Unknown
> aa                 0000000000403680  Unknown  Unknown  Unknown
> libiomp5.so        00002B705F7C5BB3  Unknown  Unknown  Unknown
> libiomp5.so        00002B705F79A617  Unknown  Unknown  Unknown
> libiomp5.so        00002B705F799D3A  Unknown  Unknown  Unknown
> libiomp5.so        00002B705F7C5EAD  Unknown  Unknown  Unknown
> libpthread.so.0    00002B705FA699D1  Unknown  Unknown  Unknown
> libc.so.6          00002B705FD688FD  Unknown  Unknown  Unknown
> -------------------------------------------------------
> Primary job  terminated normally, but 1 process returned
> a non-zero exit code.. Per user-direction, the job has been aborted.
> -------------------------------------------------------
> --------------------------------------------------------------------------
> mpirun detected that one or more processes exited with non-zero status,
> thus causing the job to be terminated.  The first process to do so was:
>
>   Process name: [[61634,1],0]
>   Exit code:    40
>
> I have compiled Open MPI using these configure options:
>
> ./configure --prefix=/data/apps/mpi/openmpi-1.8.4-intel \
>     --enable-mpirun-prefix-by-default --enable-mpi-fortran \
>     --enable-mpi-thread-multiple \
>     --with-tm=/usr/local/torque-5.1.0-1_4048f77c/src \
>     --with-verbs --with-openib --with-cuda=/usr/local/cuda-6.5
>
> This is the listing of the simple code:
>         program hello
>         include "mpif.h"
>
>         integer numprocs, rank, namelen, ierr
>         character*(MPI_MAX_PROCESSOR_NAME) processor_name
>         integer iam, np
>         integer omp_get_num_threads, omp_get_thread_num
>
>         call MPI_Init(ierr)
>         call MPI_Comm_size(MPI_COMM_WORLD, numprocs, ierr)
>         call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
>         call MPI_Get_processor_name(processor_name, namelen, ierr)
>         iam = 0
>         np = 1
> !$omp parallel default(shared) private(iam, np)
>
>                 np = omp_get_num_threads()
>                 iam = omp_get_thread_num()
>                 write(*,*)"Hello from thread ", iam," out of ", np,
>      %          " from process ", rank," out of ", numprocs,
>      %          " on ", processor_name
>
> !$omp end parallel
>         call MPI_Finalize(ierr)
>         stop
>         end
>
> Can you help me solve this problem?
> Thank you,
> Fedele Stabile
