Re: [OMPI users] Building vs packaging

2016-05-20 Thread Dave Love
dani  writes:

> I don't know about .deb packages, but at least in the rpms there is a
> post install scriptlet that re-runs ldconfig to ensure the new libs
> are in the ldconfig cache.

MPI packages following the Fedora guidelines don't do that (and rpmlint
complains bitterly as a consequence).  They rely on LD_LIBRARY_PATH via
environment modules, for better or worse:

  $ mock --shell 'rpm -q openmpi; rpm -q --scripts openmpi' 2>/dev/null
  openmpi-1.8.1-1.el6.x86_64
  $ 

[Using mock for a vanilla environment.]


Re: [OMPI users] OpenMPI 1.6.5 on CentOS 7.1, silence ib-locked-pages?

2016-05-20 Thread Dave Love
Ryan Novosielski  writes:

> I’m pretty sure this is no longer relevant (having read Roland’s
> messages about it from a couple of years ago now). Can you please
> confirm that for me, and then let me know if there is any way that I
> can silence this old copy of OpenMPI that I need to use with some
> software that depends on it for some reason? It is causing my users to
> report it as an issue pretty regularly.

Does following the FAQ not have any effect?  I don't see it would do
much harm anyway.

[For what it's worth, the warning still occurs here on a very large
memory system with the recommended settings.]


[OMPI users] problem with exceptions in Java interface

2016-05-20 Thread Siegmar Gross

Hi,

I tried MPI.ERRORS_RETURN in a small Java program with Open MPI
1.10.2 and master. I get the expected behaviour if I use a wrong
value for the root process in "bcast": the call raises an
MPIException that I can catch. Unfortunately, if I try to broadcast
more data than the buffer holds, the job aborts with a Java
ArrayIndexOutOfBoundsException instead of raising an MPIException
that my handler could catch. Is this intended, or is it a problem
in the Java interface of Open MPI? I would be grateful if somebody
could answer my question.

loki java 194 mpijavac Exception_1_Main.java
loki java 195 mpijavac Exception_2_Main.java

loki java 196 mpiexec -np 1 java Exception_1_Main
Set error handler for MPI.COMM_WORLD to MPI.ERRORS_RETURN.
Call "bcast" with wrong "root" process.
Caught an exception.
MPI_ERR_ROOT: invalid root


loki java 197 mpiexec -np 1 java Exception_2_Main
Set error handler for MPI.COMM_WORLD to MPI.ERRORS_RETURN.
Call "bcast" with index out-of bounds.
Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException
at mpi.Comm.bcast(Native Method)
at mpi.Comm.bcast(Comm.java:1231)
at Exception_2_Main.main(Exception_2_Main.java:44)
---
Primary job  terminated normally, but 1 process returned
a non-zero exit code.. Per user-direction, the job has been aborted.
---
--
mpiexec detected that one or more processes exited with non-zero status, thus 
causing
the job to be terminated. The first process to do so was:

  Process name: [[38300,1],0]
  Exit code:1
--
loki java 198


Kind regards and thank you very much for any help in advance

Siegmar
import mpi.*;

public class Exception_1_Main
{
  public static void main (String args[]) throws MPIException
  {
    int mytid,                          /* my task id                   */
        intValue[] = new int[1];        /* broadcast one intValue       */

    MPI.Init(args);
    mytid = MPI.COMM_WORLD.getRank ();
    if (mytid == 0)
    {
      intValue[0] = 10;                 /* arbitrary value              */
    }
    System.out.printf ("Set error handler for MPI.COMM_WORLD to " +
                       "MPI.ERRORS_RETURN.\n");
    MPI.COMM_WORLD.setErrhandler (MPI.ERRORS_RETURN);
    try {
      /* use wrong "root process" to produce an error                   */
      System.out.printf ("Call \"bcast\" with wrong \"root\" process.\n");
      MPI.COMM_WORLD.bcast (intValue, 1, MPI.INT, 10);
    }
    catch (MPIException ex)
    {
      System.err.printf ("Caught an exception.\n");
      System.err.printf ("%s\n", ex.getMessage ());
      MPI.Finalize ();
      System.exit (0);
    }
    MPI.Finalize ();
  }
}
import mpi.*;

public class Exception_2_Main
{
  public static void main (String args[]) throws MPIException
  {
    int mytid,                          /* my task id                   */
        intValue[] = new int[1];        /* broadcast one intValue       */

    MPI.Init(args);
    mytid = MPI.COMM_WORLD.getRank ();
    if (mytid == 0)
    {
      intValue[0] = 10;                 /* arbitrary value              */
    }
    System.out.printf ("Set error handler for MPI.COMM_WORLD to " +
                       "MPI.ERRORS_RETURN.\n");
    MPI.COMM_WORLD.setErrhandler (MPI.ERRORS_RETURN);
    try {
      /* use index out-of bounds to produce an error                    */
      System.out.printf ("Call \"bcast\" with index out-of bounds.\n");
      MPI.COMM_WORLD.bcast (intValue, 2, MPI.INT, 0);
    }
    catch (MPIException ex)
    {
      System.err.printf ("Caught an exception.\n");
      System.err.printf ("%s\n", ex.getMessage ());
      MPI.Finalize ();
      System.exit (0);
    }
    MPI.Finalize ();
  }
}
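
A minimal sketch of how one might guard against both failure modes seen
above (hypothetical class name Exception_Guard_Main; this assumes the bounds
check in the Java bindings keeps surfacing as an unchecked java.lang runtime
exception rather than as an MPIException, and it makes no claim that the
library is still usable after such an error):

import mpi.*;

public class Exception_Guard_Main
{
  public static void main (String args[]) throws MPIException
  {
    int intValue[] = new int[1];

    MPI.Init(args);
    MPI.COMM_WORLD.setErrhandler (MPI.ERRORS_RETURN);
    try {
      /* deliberately pass a count larger than the buffer holds         */
      MPI.COMM_WORLD.bcast (intValue, 2, MPI.INT, 0);
    }
    catch (MPIException ex)
    {
      /* errors the MPI library reports arrive here under ERRORS_RETURN */
      System.err.printf ("MPI error: %s\n", ex.getMessage ());
    }
    catch (RuntimeException ex)
    {
      /* the bindings' bounds check currently surfaces as an unchecked
         exception, e.g. java.lang.ArrayIndexOutOfBoundsException       */
      System.err.printf ("Java runtime error: %s\n", ex);
    }
    MPI.Finalize ();
  }
}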


[OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-20 Thread MM
Hello,

Say I don't have access to an actual cluster; I'm ultimately considering
cloud compute solutions for my MPI program, but that cost may be highly
prohibitive at the moment.
As a middle ground, if I am interested in compute only, no storage, what
hardware solutions are out there for deploying my MPI program?
By "no storage" I mean that my control Linux box, which runs the frontend
of the program but is also part of the MPI communicator, always gathers all
results and stores them locally.
At the moment, I have a second box connected over Ethernet.

I am looking at something like the Intel Compute Stick (is it possible to
buy a few, do they run Linux, the architecture seems to be the same x86-64,
and can they be networked over TCP and run Open MPI's TCP transport)?

Is it more cost-effective to look at additional regular Linux commodity boxes?
If a diskless box is possible, can the executables of my MPI program be sent
over the wire before running them?

If we exclude GPU and other non-MPI solutions, with cost being a primary
factor, what is the progression path from two boxes to a cloud-based
solution (Amazon and the like)?

Regards,
MM


Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-20 Thread Gus Correa


1. You can run MPI programs on a single computer (multi-core,
multi-processor). So, in principle, you don't need a cluster, not even
two machines.  If you want a proof of concept across Ethernet, two old
desktops/laptops connected back to back (or through a cheap SOHO switch)
will do.
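
For instance, a minimal proof-of-concept sketch using the Java bindings from
the earlier thread (hypothetical class name HelloRanks; the equivalent C or
Fortran program works just as well), which you can compile with mpijavac and
launch entirely on one machine with "mpiexec -np 4 java HelloRanks":

import mpi.*;

public class HelloRanks
{
  public static void main (String args[]) throws MPIException
  {
    MPI.Init(args);
    /* every rank reports itself; with -np 4 on one box you get 4 lines */
    int rank = MPI.COMM_WORLD.getRank ();
    int size = MPI.COMM_WORLD.getSize ();
    System.out.printf ("Hello from rank %d of %d\n", rank, size);
    MPI.Finalize ();
  }
}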

2. Not trying to dismiss your question, but its scope goes beyond
MPI (and Open MPI) and is more about HPC and clusters in general.

However, if you ask this question on the Beowulf mailing list,
you will get lots of advice, as the focus there is precisely
on HPC and clusters (of all sizes and for all budgets).

http://www.beowulf.org/mailman/listinfo/beowulf

I hope this helps,
Gus Correa



Re: [OMPI users] [slightly off topic] hardware solutions with monetary cost in mind

2016-05-20 Thread Damien
If you look around on eBay, you can find old 16-core Opteron servers for
a few hundred dollars.  It's not screaming performance, but 16 cores is
enough to get you started on scaling and parallelism in MPI.  It's a
cheap cluster in a box.


Damien
