[OMPI users] MPI_Comm_spawn and shared memory

2015-05-14 Thread Radoslaw Martyniszyn
Dear developers of Open MPI,

I've created two applications, a parent and a child. The parent spawns the
children using MPI_Comm_spawn, and I would like them to communicate over
shared memory. However, the applications do not start when I enable the sm
BTL. Could you comment on this issue? If this feature is not supported, are
there any plans to add it? Also, are there any examples showing
MPI_Comm_spawn together with shared memory?

I am using Open MPI 1.6.5 on Ubuntu. Both applications run locally on the
same host.

// Works fine
mpirun --mca btl self,tcp ./parent

// Application terminates
mpirun --mca btl self,sm ./parent

"At least one pair of MPI processes are unable to reach each other for
MPI communications.  This means that no Open MPI device has indicated
that it can be used to communicate between these processes.  This is
an error; Open MPI requires that all MPI processes be able to reach
each other.  This error can sometimes be the result of forgetting to
specify the "self" BTL."

Below are code snippets:

parent.cc:
#include <mpi.h>
#include <unistd.h>
#include <iostream>
#include <string>

int main(int argc, char** argv) {
  MPI_Init(NULL, NULL);

  std::string lProgram = "./child";
  MPI_Comm lIntercomm;
  int lRv = MPI_Comm_spawn(const_cast<char*>(lProgram.c_str()),
                           MPI_ARGV_NULL, 3,
                           MPI_INFO_NULL, 0, MPI_COMM_WORLD, &lIntercomm,
                           MPI_ERRCODES_IGNORE);

  if (MPI_SUCCESS == lRv) {
    std::cout << "SPAWN SUCCESS" << std::endl;
    sleep(10);
  } else {
    std::cout << "SPAWN ERROR " << lRv << std::endl;
  }

  MPI_Finalize();
}

child.cc:
#include <mpi.h>
#include <unistd.h>
#include <iostream>

int main(int argc, char** argv) {
  // Initialize the MPI environment
  MPI_Init(NULL, NULL);

  std::cout << "CHILD" << std::endl;
  sleep(10);

  MPI_Finalize();
}

makefile (note: the recipe lines are indented with tabs, not spaces):
EXECS=child parent
MPICC?=mpic++

all: ${EXECS}

child: child.cc
${MPICC} -o child child.cc

parent: parent.cc
${MPICC} -o parent parent.cc

clean:
rm -f ${EXECS}


Greetings to all of you,
Radek Martyniszyn
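
Not part of the original post: as a sketch of what parent-to-child communication over the intercommunicator could look like (the payload values and variable names here are my own illustrative assumptions, not anything from the thread), the parent might send one integer to each spawned child instead of just sleeping:

```cpp
// Sketch only: extends the parent above so it actually talks to the
// spawned children over the intercommunicator returned by MPI_Comm_spawn.
// With an intercommunicator, MPI_Send addresses ranks in the REMOTE
// group, i.e. ranks 0..2 of the child job.
#include <mpi.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);

  char lProgram[] = "./child";
  MPI_Comm lIntercomm;
  int lRv = MPI_Comm_spawn(lProgram, MPI_ARGV_NULL, 3, MPI_INFO_NULL,
                           0, MPI_COMM_WORLD, &lIntercomm,
                           MPI_ERRCODES_IGNORE);

  if (MPI_SUCCESS == lRv) {
    int lNChildren = 0;
    MPI_Comm_remote_size(lIntercomm, &lNChildren);  // size of the child group
    for (int i = 0; i < lNChildren; ++i) {
      int lPayload = 42 + i;  // arbitrary example value
      MPI_Send(&lPayload, 1, MPI_INT, i, 0, lIntercomm);
    }
  }

  MPI_Finalize();
}
```

Each child would obtain its side of the intercommunicator with MPI_Comm_get_parent and post a matching MPI_Recv from rank 0 of the parent group. Which BTL carries these messages is exactly the question of this thread.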

Re: [OMPI users] MPI_Comm_spawn and shared memory

2015-05-14 Thread Gilles Gouaillardet

This is a known limitation of the sm btl.

FWIW, the vader btl (available in Open MPI 1.8) has the same limitation,
though I have heard there is work in progress to remove it.
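
[Editor's note, not part of Gilles's reply: given that limitation, one workaround sometimes suggested — an assumption here, not something stated in this thread — is to list tcp as a fallback alongside sm, so processes started in the same job can still use shared memory while parent-child traffic falls back to TCP instead of failing outright:]

```shell
# Hypothetical workaround (untested here): sm for intra-job pairs,
# tcp as a fallback for traffic between the parent job and the
# children spawned via MPI_Comm_spawn.
mpirun --mca btl self,sm,tcp ./parent
```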


Cheers,

Gilles


___
users mailing list
us...@open-mpi.org
Subscription: http://www.open-mpi.org/mailman/listinfo.cgi/users
Link to this post: 
http://www.open-mpi.org/community/lists/users/2015/05/26865.php




Re: [OMPI users] MPI_Comm_spawn and shared memory

2015-05-14 Thread Radoslaw Martyniszyn
Hi Gilles,
Thanks for your answer.
BR,
Radek



Re: [OMPI users] OpenMPI on Windows without Cygwin

2015-05-14 Thread J Martin Rushton
You might want to have a quick look at MobaXterm
( http://mobaxterm.mobatek.net ).  It's quicker to deploy and start up than
bare Cygwin (at least in my users' experience), but it is based on Cygwin.
I can't see /dev, though, so you will need to test whether the
functionality is there.


On 13/05/15 21:19, Walt Brainerd wrote:

No, I hadn't received any response.
That is too bad.
Knowing that earlier would have saved some hours.

Some day I'll look again at extracting some set of stuff
from Cygwin that will make it work. Maybe even that
is not possible. But Cygwin is huge. OTOH, maybe anybody
who is contemplating using Coarrays would be somebody
who has Cygwin anyway.

On Wed, May 13, 2015 at 8:55 AM, Damien wrote:

Walt,

I don't remember seeing a response to this.  OpenMPI isn't supported
on native Windows anymore.  The last version for Windows was the 1.6
series.

Damien


On 2015-05-11 3:07 PM, Walt Brainerd wrote:

Is it possible to build OpenMPI for Windows
not running Cygwin?

I know it uses /dev/shm, so there would have to
be something equivalent to that not in Cygwin.

TIA.

--
Walt Brainerd









--
Walt Brainerd

