With Open MPI this is the command I used:

mpirun -n 6 taskset -c 0,2,4,6,8,10 ./a.out

With the Intel MPI library I set the environment variable

I_MPI_PIN_MAPPING="6:0 0,1 2,2 4,3 6,4 8,5 10"

(this maps the six ranks to CPUs 0, 2, 4, 6, 8, and 10) and ran with:

mpirun -n 6 ./a.out
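For what it's worth, taskset -c 0,2,4,6,8,10 gives every rank the same
six-core mask, so hwloc_bitmap_first() reports 0 for all of them, while the
Intel MPI mapping above pins each rank to its own CPU. Below is a sketch of
letting Open MPI itself do the binding instead, assuming the --bind-to-core
and --report-bindings mpirun options are available in the Open MPI 1.4.3
installation on Gordon:

mpirun -n 6 --bind-to-core --report-bindings ./a.out

This should give each rank its own core, similar to Intel MPI's default
pinning.
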
On Fri, Feb 15, 2013 at 10:30 PM, <users-requ...@open-mpi.org> wrote:

> Message: 1
> Date: Fri, 15 Feb 2013 22:04:11 +0530
> From: Kranthi Kumar <kranthi...@gmail.com>
> Subject: [OMPI users] Core ids not coming properly
> To: us...@open-mpi.org
>
> Hello Sir,
>
> Below is the code I wrote using hwloc to get the bindings of the
> processes. I tested it on the SDSC Gordon supercomputer, which has
> Open MPI 1.4.3, and on TACC Stampede, which uses Intel's MPI library
> (IMPI). With Open MPI I get core id 0 for all processes, while with
> the Intel MPI library I get different core ids. I even tried binding
> the processes on the command line using taskset: Open MPI still
> reports core id 0 for every process, whereas IMPI reports the correct
> bindings. Please look into this.
>
>
> #include <stdio.h>
> #include <stdlib.h>     /* exit() */
> #include <unistd.h>     /* getpid() */
> #include "mpi.h"
> #include <hwloc.h>
>
> int main(int argc, char* argv[])
> {
>     int rank, size;
>     hwloc_topology_t topology;
>
>     MPI_Init(&argc, &argv);
>     MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>     MPI_Comm_size(MPI_COMM_WORLD, &size);
>
>     hwloc_topology_init(&topology);
>     hwloc_topology_load(topology);
>
>     hwloc_bitmap_t set = hwloc_bitmap_alloc();
>     int err;
>
>     /* Query the CPU binding of this process. */
>     err = hwloc_get_proc_cpubind(topology, getpid(), set,
>                                  HWLOC_CPUBIND_PROCESS);
>     if (err) {
>         printf("Error: cannot get the process binding\n");
>         exit(1);
>     }
>
>     /* hwloc_bitmap_first() returns the lowest PU OS index in the mask. */
>     printf("Hello World, I am %d and pid: %d coreid: %d\n",
>            rank, getpid(), hwloc_bitmap_first(set));
>
>     hwloc_bitmap_free(set);
>     hwloc_topology_destroy(topology);
>     MPI_Finalize();
>     return 0;
> }
> Thank You
> --
> Kranthi
>
> ------------------------------
>
> Message: 2
> Date: Fri, 15 Feb 2013 17:46:25 +0100
> From: Brice Goglin <brice.gog...@inria.fr>
> Subject: Re: [OMPI users] Core ids not coming properly
> To: Open MPI Users <us...@open-mpi.org>
>
> Intel MPI binds processes by default, while Open MPI doesn't. What's your
> mpiexec/mpirun command line?
>
> Brice
>
>
>
> On 15/02/2013 17:34, Kranthi Kumar wrote:
> > Hello Sir,
> >
> > Below is the code I wrote using hwloc to get the bindings of the
> > processes. [...]
>



-- 
Kranthi
