You can use hybrid mode.
The following code works for me with Open MPI 1.8.2:

#include <stdlib.h>
#include <stdio.h>
#include "shmem.h"
#include "mpi.h"

int main(int argc, char *argv[])
{
    MPI_Init(&argc, &argv);
    start_pes(0);

    {
        int version = 0;
        int subversion = 0;
        int num_proc = 0;
        int my_proc = 0;
        int comm_size = 0;
        int comm_rank = 0;

        MPI_Get_version(&version, &subversion);
        fprintf(stdout, "MPI version: %d.%d\n", version, subversion);

        num_proc = _num_pes();
        my_proc = _my_pe();

        fprintf(stdout, "PE#%d of %d\n", my_proc, num_proc);

        MPI_Comm_size(MPI_COMM_WORLD, &comm_size);
        MPI_Comm_rank(MPI_COMM_WORLD, &comm_rank);

        fprintf(stdout, "Comm rank#%d of %d\n", comm_rank, comm_size);
    }

    MPI_Finalize();
    return 0;
}
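
If you also want MPI and OpenSHMEM to actually exchange data in the same
program, here is a minimal sketch of what that could look like (untested as
written; it assumes the same oshcc toolchain and the start_pes()-era
OpenSHMEM API used above). Each PE puts a value into a symmetric variable
on the next PE in a ring, then an MPI reduction sums the received values.

#include <stdio.h>
#include "shmem.h"
#include "mpi.h"

/* Symmetric (global) variable: every PE has its own copy, remotely
 * accessible via shmem put/get. */
static int received = 0;

int main(int argc, char *argv[])
{
    int me, npes, src, sum = 0;

    MPI_Init(&argc, &argv);
    start_pes(0);

    me = _my_pe();
    npes = _num_pes();

    /* Put my PE number (+1) into 'received' on the next PE in the ring. */
    src = me + 1;
    shmem_int_put(&received, &src, 1, (me + 1) % npes);
    shmem_barrier_all();

    /* Use an MPI collective over the same set of processes. */
    MPI_Allreduce(&received, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    if (me == 0)
        printf("sum of received values = %d (expected %d)\n",
               sum, npes * (npes + 1) / 2);

    MPI_Finalize();
    return 0;
}

Compile and launch it the same way as the example above (oshcc / oshrun).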



On Thu, Aug 14, 2014 at 11:05 AM, Timur Ismagilov <tismagi...@mail.ru>
wrote:

> Hello!
> I use Open MPI v1.9a132520.
>
> Can I use hybrid MPI+OpenSHMEM?
> Where can I read about it?
>
> I have some problems with a simple program:
>
> #include <stdio.h>
>
> #include "shmem.h"
> #include "mpi.h"
>
> int main(int argc, char* argv[])
> {
> int proc, nproc;
> int rank, size, len;
> char version[MPI_MAX_LIBRARY_VERSION_STRING];
>
> MPI_Init(&argc, &argv);
> start_pes(0);
> MPI_Finalize();
>
> return 0;
> }
>
> I compile with oshcc; with mpicc I get a compile error.
>
> 1. When I run this program with mpirun/oshrun, I get this output:
>
> [1408002416.687274] [node1-130-01:26354:0] proto.c:64 MXM WARN mxm is
> destroyed but still has pending receive requests
> [1408002416.687604] [node1-130-01:26355:0] proto.c:64 MXM WARN mxm is
> destroyed but still has pending receive requests
>
> 2. If in the program I use this code instead:
> start_pes(0);
> MPI_Init(&argc, &argv);
> MPI_Finalize();
>
> I get this error:
>
> --------------------------------------------------------------------------
> Calling MPI_Init or MPI_Init_thread twice is erroneous.
> --------------------------------------------------------------------------
> [node1-130-01:26469] *** An error occurred in MPI_Init
> [node1-130-01:26469] *** reported by process [2397634561,140733193388033]
> [node1-130-01:26469] *** on communicator MPI_COMM_WORLD
> [node1-130-01:26469] *** MPI_ERR_OTHER: known error not in list
> [node1-130-01:26469] *** MPI_ERRORS_ARE_FATAL (processes in this
> communicator will now abort,
> [node1-130-01:26469] *** and potentially your MPI job)
> [node1-130-01:26468] [[36585,1],0] ORTE_ERROR_LOG: Not found in file
> routed_radix.c at line 395
> [node1-130-01:26469] [[36585,1],1] ORTE_ERROR_LOG: Not found in file
> routed_radix.c at line 395
> [compiler-2:02175] 1 more process has sent help message
> help-mpi-errors.txt / mpi_errors_are_fatal
> [compiler-2:02175] Set MCA parameter "orte_base_help_aggregate" to 0 to
> see all help / error messages
>
> _______________________________________________
> users mailing list
> us...@open-mpi.org
> Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
> Link to this post:
> http://www.open-mpi.org/community/lists/users/2014/08/25010.php
>
