[OMPI users] report for openmpi-1.9a1r30290

2014-01-15 Thread Siegmar Gross
Hi,

today I installed openmpi-1.9a1r30290 on "Solaris 10 x86_64",
"Solaris 10 Sparc", and "openSUSE Linux 12.1 x86_64" with Sun C
5.12 and gcc-4.8.0 in 32- and 64-bit. I could successfully build
everything and simple tests on one machine work fine. Even
"ompi_info --all" works fine now on Solaris Sparc. Thank you
very much to everybody who solved the problems and especially
to Jeff, who found the reason for the SIGBUS error, so that
it could be solved.

"make check" reports a problem for the 32-bit version for both
"cc" and "gcc".

linpc1 openmpi-1.9a1r30290-Linux.x86_64.32_cc 115 tail -14 \
  log.make-check.Linux.x86_64.32_cc

SUPPORT: OMPI Test failed: opal_path_nfs() (1 of 22 failed)
FAIL: opal_path_nfs

1 of 2 tests failed
Please report to http://www.open-mpi.org/community/help/

make[3]: *** [check-TESTS] Error 1
make[3]: Leaving directory
  `.../openmpi-1.9a1r30290-Linux.x86_64.32_cc/test/util'
...


This problem was solved earlier for the 64-bit version.

linpc1 openmpi-1.9a1r30290-Linux.x86_64.64_cc 117 tail -14 \
  log.make-check.Linux.x86_64.64_cc
SUPPORT: OMPI Test Passed: opal_path_nfs(): (22 tests)
PASS: opal_path_nfs
==
All 2 tests passed
==
make[3]: Leaving directory
  `.../openmpi-1.9a1r30290-Linux.x86_64.64_cc/test/util'
...


Kind regards

Siegmar



[OMPI users] [ICCS/Alchemy] Last Call for Papers: Architecture, Languages, Compilation and Hardware support for Emerging ManYcore systems

2014-01-15 Thread CUDENNEC Loic


Please accept our apologies if you receive multiple copies of this CFP.

This is the last call for papers for the ICCS/Alchemy workshop on manycore 
processors.
The submission deadline is now set to January 20 (firm deadline).



**
* ALCHEMY Workshop
* Architecture, Languages, Compilation and Hardware support for Emerging ManYcore systems
*
* Held in conjunction with the International Conference on Computational Science (ICCS 2014)
* Cairns, Australia
* June 10-12, 2014
*
* http://sites.google.com/site/alchemyworkshop
**


Call for papers
*

Massively parallel processors are made of hundreds to thousands of cores,
integrated memories, and a dedicated network on a single chip. They provide
high parallel performance while drastically reducing power consumption.
Manycore architectures are therefore expected to enter both HPC (cloud
servers, simulation, big data...) and embedded computing (autonomous
vehicles, signal processing, cellular networks...). In the first session of
this workshop, held together with ICCS 2013, we presented several academic
and industrial works that contribute to the efficient programmability of
manycores. This year, we also focus on preliminary user feedback, to see
whether the manycore processors available today meet users' expectations.


Topics (not limited to)
*

* Programming languages and paradigms targeting massively parallel architectures
* Advanced compilers for programming languages targeting massively parallel architectures
* Advanced architecture support for massive parallelism management
* Advanced architecture support for enhanced communication for CMP/manycores
* Shared memory, data consistency models and protocols
* New OS, or dedicated OS, for massively parallel applications
* Runtime generation for parallel programming on manycores
* User feedback on existing manycore architectures
  (experiments with Adapteva Epiphany, ARM big.LITTLE, Intel Phi, Kalray MPPA,
  ST STHorm, Tilera Gx, TSAR, etc.)



Important dates (subject to modifications)

Full paper submission: January 20, 2014 (firm)
Notification of acceptance: February 20, 2014
Camera-ready papers: March 5, 2014
Author registration (ICCS): February 15 - March 10, 2014
Participant early registration (ICCS): February 15 - April 25, 2014
ALCHEMY session: June 10 - 12, 2014

Check out the ICCS important dates.


Submission
**

Papers should be formatted according to the ICCS rules.
Please note that papers must not exceed ten pages in length when typeset
using the Procedia format.

After the conference, selected papers will be invited for a special issue of 
the Journal of Computational Science.



Program Committee
**

Frédéric BONIOL, ONERA, France
Aparna CHANDRAMOWLISHWARAN, MIT, USA
Loïc CUDENNEC, CEA, LIST, France
Stephan DIESTELHORST, ARM Cambridge, UK
Aleksandar DRAGOJEVIC, Microsoft Research Cambridge, UK
José FLICH CARDO, Universidad Politécnica de Valencia, Spain
Guy GOGNIAT, Université de Bretagne Sud, France
Bernard GOOSSENS, Université de Perpignan, France
Vincent GRAMOLI, NICTA / University of Sydney, Australia
Jorn W. JANNECK, Lund University, Sweden
Michihiro KOIBUCHI, National Institute of Informatics, Japan
Stéphane LOUISE, CEA, LIST, France
Vania MARANGOZOVA-MARTIN, Université Joseph-Fourier Grenoble, France
Marco MATTAVELLI, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Onur MUTLU, Carnegie Mellon University, USA
Eric PETIT, Université de Versailles Saint Quentin-en-Yvelines, France
Erwan PIRIOU, CEA, LIST, France
Antoniu POP, University of Manchester, UK
Erwan RAFFIN, CAPS entreprise, France
Mickaël RAULET, IETR / INSA de Rennes, France
Etienne RIVIERE, University of Neuchâtel, Switzerland
Thomas ROPARS, École Polytechnique Fédérale de Lausanne (EPFL), Switzerland
Osamu TATEBE, AIST / University of Tsukuba, Japan
Philippe THIERRY, Intel Corporation, France



Organizers
*
Loïc CUDENNEC, CEA, LIST, France
Stéphane LOUISE, CEA, LIST, France

http://www.cea.fr/english_portal


--
Loïc CUDENNEC
http://www.cudennec.fr/
CEA, LIST, Nano-Innov / Saclay
91191 Gif-sur-Yvette cedex
+33 1 69 08 00 58



[OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ronald Cohen
I have been struggling to get a usable build of openmpi on Mac OS X
Mavericks (10.9.1).  I can get openmpi to configure and build without
error, but I run into problems after that which depend on the openmpi
version.

With 1.6.5, make check fails the opal_datatype_test, ddt_test, and ddt_raw
tests.  The various atomic_* tests pass.  See checklogs_1.6.5, attached
as a .gz file.

Following suggestions from openmpi discussions I tried openmpi version
1.7.4rc1.  In this case make check indicates that all tests passed.  But
when I proceeded to build a parallel code (parallel HDF5) it failed.
After an email exchange with the HDF5 support people, they suggested that
I compile and run the attached bit of simple code, Sample_mpio.c (which
they supplied).  It does not use HDF5 at all, but simply attempts a
parallel write to a file followed by a parallel read.  That test failed
when requesting more than 1 processor -- which they say indicates a
failure of the openmpi installation.  The error message was:

MPI_INIT: argc 1
MPI_INIT: argc 1
Testing simple C MPIO program with 2 processes accessing file ./mpitest.data
(Filename can be specified via program argument)
Proc 0: hostname=Ron-Cohen-MBP.local
Proc 1: hostname=Ron-Cohen-MBP.local
MPI_BARRIER[0]: comm MPI_COMM_WORLD
MPI_BARRIER[1]: comm MPI_COMM_WORLD
Proc 0: MPI_File_open with MPI_MODE_EXCL failed (MPI_ERR_FILE: invalid file)
MPI_ABORT[0]: comm MPI_COMM_WORLD errorcode 1
MPI_BCAST[1]: buffer 7fff5a483048 count 1 datatype MPI_INT root 0 comm
MPI_COMM_WORLD
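
For reference, here is a minimal sketch of the kind of exclusive-create
open that produces a message like the one above.  This is not
Sample_mpio.c itself; the file name, flags, and error handling are
illustrative only.  MPI_MODE_EXCL asks MPI_File_open to fail if the
target file already exists, so a leftover ./mpitest.data from an earlier
run is one thing worth ruling out.

/* Minimal sketch (not the code from Sample_mpio.c): exclusive-create
 * open of a shared file on rank 0, printing the MPI error string on
 * failure.  For illustration only. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    MPI_File fh;
    char errstr[MPI_MAX_ERROR_STRING];
    int rank, err, errlen;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) {
        /* remove any stale file first, then try the exclusive create */
        remove("./mpitest.data");
        err = MPI_File_open(MPI_COMM_SELF, "./mpitest.data",
                            MPI_MODE_CREATE | MPI_MODE_EXCL | MPI_MODE_RDWR,
                            MPI_INFO_NULL, &fh);
        if (err != MPI_SUCCESS) {
            MPI_Error_string(err, errstr, &errlen);
            printf("MPI_File_open with MPI_MODE_EXCL failed: %s\n", errstr);
        } else {
            MPI_File_close(&fh);
        }
    }
    MPI_Finalize();
    return 0;
}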

I then went back to my openmpi directories and tried running some of the
individual tests in the test and examples directories.  In particular, in
test/class I found one test that does not seem to be run as part of make
check and which failed, even with one processor; this is opal_bitmap.  I am
not sure whether this is because 1.7.4rc1 is incomplete, or there is
something wrong with the installation, or maybe a 32 vs 64 bit thing?   The
error message is

mpirun detected that one or more processes exited with non-zero status,
thus causing the job to be terminated. The first process to do so was:

  Process name: [[48805,1],0]
  Exit code:255

Any suggestions?

More generally, has anyone out there gotten an openmpi build on Mavericks
to work with sufficient success that they can get the attached
Sample_mpio.c (or better yet, parallel HDF5) to build?

Details: Running Mac OS X 10.9.1 on a mid-2009 MacBook Pro with 4 GB of
memory; tried openmpi 1.6.5 and 1.7.4rc1.  Built openmpi against the stock
gcc that comes with Xcode 5.0.2, and gfortran 4.9.0.

Files attached: config.log.gz, openmpialllog.gz (output of running
ompi_info --all), checklog2.gz (output of make check in the top openmpi
directory).

I am not attaching logs of make and install since those seem to have been
successful, but can generate those if that would be helpful.


ompiinfoalllog.gz
Description: GNU Zip compressed data


checklog2.gz
Description: GNU Zip compressed data


config.log.gz
Description: GNU Zip compressed data


Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ralph Castain
I regularly build on Mavericks and run without problem, though I haven't
tried a parallel IO app. I'll give yours a try later, when I get back to my
Mac.





Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ralph Castain
BTW: could you send me your sample test code?




Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ronald Cohen
I neglected in my earlier post to attach the small C code that the hdf5
folks supplied; it is attached here.




/* * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * *
 * Copyright by The HDF Group.   *
 * Copyright by the Board of Trustees of the University of Illinois. *
 * All rights reserved.  *
 *   *
 * This file is part of HDF5.  The full HDF5 copyright notice, including *
 * terms governing use, modification, and redistribution, is contained in*
 * the files COPYING and Copyright.html.  COPYING can be found at the root   *
 * of the source code distribution tree; Copyright.html can be found at the  *
 * root level of an installed copy of the electronic HDF5 document set and   *
 * is linked from the top-level documents page.  It can also be found at *
 * http://hdfgroup.org/HDF5/doc/Copyright.html.  If you do not have  *
 * access to either file, you may request a copy from h...@hdfgroup.org. *
 * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * * */
/* Simple MPI-IO program testing if a parallel file can be created.
 * Default filename can be specified via first program argument.
 * Each process writes something, then reads all data back.
 */

#include <stdio.h>
#include <mpi.h>
#ifndef MPI_FILE_NULL   /*MPIO may be defined in mpi.h already   */
#   include <mpio.h>
#endif

#define DIMSIZE	10		/* dimension size, avoid powers of 2. */
#define PRINTID printf("Proc %d: ", mpi_rank)

main(int ac, char **av)
{
char hostname[128];
int  mpi_size, mpi_rank;
MPI_File fh;
char *filename = "./mpitest.data";
char mpi_err_str[MPI_MAX_ERROR_STRING];
int  mpi_err_strlen;
int  mpi_err;
char writedata[DIMSIZE], readdata[DIMSIZE];
char expect_val;
int  i, irank; 
int  nerrors =

Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ronald Cohen
Ralph,

I just sent out another post with the c file attached.

If you can get that to work, and even if you can't can you tell me what
configure options you use, and what version of open-mpi?   Thanks.

Ron




Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ralph Castain
You can find my configure options in the OMPI distribution at
contrib/platform/intel/bend/mac. You are welcome to use them - just
configure --with-platform=intel/bend/mac

I work on the developer's trunk, of course, but also run the head of the
1.7.4 branch (essentially the nightly tarball) on a fairly regular basis.

As for the opal_bitmap test: it wouldn't surprise me if that one was stale.
I can check on it later tonight, but I'd suspect that the test is bad as we
use that class in the code base and haven't seen an issue.





Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ralph Castain
Oh, a word of caution on those config params - you might need to check to
ensure I don't disable romio in them. I don't normally build it as I don't
use it. Since that is what you are trying to use, just change the "no" to
"yes" (or delete that line altogether) and it will build.




Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ronald Cohen
Aha.   I guess I didn't know what the io-romio option does.   If you look
at my config.log you will see that my configure line included
--disable-io-romio.   I guess I should change --disable to --enable.

You seem to imply that the nightly build is stable enough that I should
probably switch to that rather than 1.7.4rc1.   Am I reading between the
lines correctly?




Re: [OMPI users] Can't get a fully functional openmpi build on Mac OSX Mavericks

2014-01-15 Thread Ronald Cohen
Update: I reconfigured with enable_io_romio=yes, and this time -- mostly --
the test using Sample_mpio.c passes.   Oddly, the very first time I tried it,
I got errors:

% mpirun -np 2 sampleio
Proc 1: hostname=Ron-Cohen-MBP.local
Testing simple C MPIO program with 2 processes accessing file ./mpitest.data
(Filename can be specified via program argument)
Proc 0: hostname=Ron-Cohen-MBP.local
Proc 1: read data[0:1] got 0, expect 1
Proc 1: read data[0:2] got 0, expect 2
Proc 1: read data[0:3] got 0, expect 3
Proc 1: read data[0:4] got 0, expect 4
Proc 1: read data[0:5] got 0, expect 5
Proc 1: read data[0:6] got 0, expect 6
Proc 1: read data[0:7] got 0, expect 7
Proc 1: read data[0:8] got 0, expect 8
Proc 1: read data[0:9] got 0, expect 9
Proc 1: read data[1:0] got 0, expect 10
Proc 1: read data[1:1] got 0, expect 11
Proc 1: read data[1:2] got 0, expect 12
Proc 1: read data[1:3] got 0, expect 13
Proc 1: read data[1:4] got 0, expect 14
Proc 1: read data[1:5] got 0, expect 15
Proc 1: read data[1:6] got 0, expect 16
Proc 1: read data[1:7] got 0, expect 17
Proc 1: read data[1:8] got 0, expect 18
Proc 1: read data[1:9] got 0, expect 19
--
MPI_ABORT was invoked on rank 1 in communicator MPI_COMM_WORLD
with errorcode 1.

But when I reran the same mpirun command, the test was successful.   And
after deleting the executable, recompiling, and again running the same
mpirun command, the test was also successful.   Can someone explain that?
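
One possibility, offered as a guess rather than a diagnosis: in its
default (non-atomic) mode, MPI-IO only guarantees that one rank sees data
written by another rank to the same open file after a sync-barrier-sync
sequence, or after the file has been closed and reopened.  If the read
phase runs before such a sequence completes, stale zeros could come back,
as in the first run above.  A minimal sketch of the sequence follows; the
function, buffer names, and block size are illustrative and not taken
from Sample_mpio.c.

/* Sketch (not the HDF5 sample's actual code) of the MPI-IO
 * sync-barrier-sync sequence needed before one rank can reliably read
 * data that another rank has just written to the same open file.
 * Assumes fh was opened on MPI_COMM_WORLD with MPI_MODE_RDWR. */
#include <mpi.h>

#define BLOCK 10

void write_then_read(MPI_File fh, int mpi_rank, int mpi_size)
{
    char writedata[BLOCK], readdata[BLOCK];
    MPI_Status st;
    int i;

    for (i = 0; i < BLOCK; i++)
        writedata[i] = (char)(mpi_rank * BLOCK + i);

    /* each rank writes its own block */
    MPI_File_write_at(fh, (MPI_Offset)mpi_rank * BLOCK,
                      writedata, BLOCK, MPI_BYTE, &st);

    /* flush local writes, let all ranks catch up, refresh our view */
    MPI_File_sync(fh);
    MPI_Barrier(MPI_COMM_WORLD);
    MPI_File_sync(fh);

    /* only now is it safe to read a block written by another rank */
    MPI_File_read_at(fh, (MPI_Offset)((mpi_rank + 1) % mpi_size) * BLOCK,
                     readdata, BLOCK, MPI_BYTE, &st);
}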



