[OMPI users] CfP Workshop on XEN in HPC Cluster and Grid Computing Environments (XHPC)

2006-06-12 Thread Michael Alexander



===
CALL FOR PAPERS (XHPC'06)

Workshop on XEN in High-Performance Cluster and Grid Computing
Environments as part of:

The Fourth International Symposium on Parallel and Distributed
Processing and Applications (ISPA'2006). Sorrento, Italy
===

Date: 1-4 December 2006

ISPA'2006:  http://www.ispa-conference.org/2006/
Workshop URL: http://xhpc.ai.wu-wien.ac.at/ws/

(due date: August 4, 2006)


Scope:
The Xen virtual machine monitor is reaching widespread adoption in a
variety of operating systems as well as in scientific, educational,
and operational usage areas. With its low overhead, Xen allows large
numbers of virtual machines to run concurrently, providing each with
encapsulation, isolation, and network-wide CPU migratability. Xen
offers OS environments a network-wide abstraction layer over
individual machine resources, thereby opening up new options for
cluster and grid high-performance computing (HPC) architectures and
HPC services. With Xen finding applications in HPC environments, this
workshop aims to bring together researchers and practitioners active
on Xen in high-performance cluster and grid computing environments.

The workshop will be one day in length, composed of 20-minute paper
presentations, each followed by a 10-minute discussion. Presentations
may be accompanied by interactive demonstrations. The workshop will
end with a 30-minute panel discussion among the presenters.


TOPICS

Topics include, but are not limited to, the following:

  - Xen in cluster and grid environments
  - Workload characterizations for Xen-based clusters
  - Xen cluster and grid architectures
  - Cluster reliability, fault-tolerance, and security
  - Compute job entry and scheduling
  - Compute workload load levelling
  - Cluster and grid filesystems for Xen
  - Research and education use cases
  - VM cluster distribution algorithms
  - MPI and PVM on virtual machines
  - System sizing
  - High-speed interconnects in Xen
  - Xen extensions and utilities for cluster and grid computing
  - Network architectures for Xen clusters
  - Xen on large SMP machines
  - Measuring performance
  - Performance tuning of Xen domains
  - Xen performance tuning on various load types
  - Xen cluster/grid tools
  - Management of Xen clusters


PAPER SUBMISSION

Papers submitted to the workshop will be reviewed by at least three
members of the program committee and external reviewers. Submissions
should include an abstract, keywords, and the e-mail address of the
corresponding author, must not exceed 15 pages including tables and
figures, and should preferably be prepared in LaTeX or FrameMaker,
although submissions in the LNCS Word format will be accepted as well.
Electronic submission through the submission website is strongly
encouraged. Hardcopies will be accepted only if electronic submission
is not possible. Submission of a paper should be regarded as a
commitment that, should the paper be accepted, at least one of the
authors will register and attend the conference to present the work.
An award for the best student paper will be given.

http://isda2006.ujn.edu.cn/isda/author/submit.php

Papers should be formatted according to the Springer LNCS style:
http://www.springer.de/comp/lncs/authors.html

It is expected that the workshop proceedings will be published in
Springer's LNCS series or by the IEEE Computer Society.


IMPORTANT DATES

Paper submission due: August 4, 2006
Acceptance notification: September 1, 2006
Camera-ready due: September 20, 2006
Conference: December 1-4, 2006


CHAIR

Michael Alexander (chair), WU Vienna, Austria
Geyong Min (co-chair), University of Bradford, UK
Gudula Ruenger (co-chair), Chemnitz University of Technology, Germany


PROGRAM COMMITTEE

Franck Cappello, CNRS-Université Paris-Sud, France
Claudia Eckert, Fraunhofer-Institute, Germany
Rob Gardner, HP Labs, USA
Marcus Hardt, Forschungszentrum Karlsruhe, Germany
Sverre Jarp, CERN, Switzerland
Thomas Lange, University of Cologne, Germany
Ronald Luijten, IBM Research Laboratory, Zurich, Switzerland
Klaus Ita, WU Vienna, Austria
Franco Travostino, Nortel CTO Office, USA
Andreas Unterkircher, CERN, Switzerland


GENERAL INFORMATION

This workshop will be held as part of ISPA 2006 in Sorrento, Italy -
http://www.sorrentoinfo.com/sorrento/sorrento_italy.asp.











Re: [OMPI users] F90 interfaces again

2006-06-12 Thread Michael Kluskens

On Jun 9, 2006, at 12:33 PM, Brian W. Barrett wrote:


On Thu, 8 Jun 2006, Michael Kluskens wrote:


call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier)
  1
Error: Generic subroutine 'mpi_waitall' at (1) is not consistent with
a specific subroutine interface

The issue: the 3rd argument of MPI_WAITALL normally expects an integer
array of statuses, but the MPI_STATUSES_IGNORE constant is also
permitted here by the MPI standard.
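For reference, here is a minimal, self-contained sketch (not from the
original report; apart from sp_request, the program and variable names
are illustrative) showing both call forms: the explicit status array
that the generated f90 interface always accepts, and the
MPI_STATUSES_IGNORE form that the standard also permits:

program waitall_example
  use mpi
  implicit none
  integer :: ier, i, rank
  integer :: sp_request(3)
  integer :: statuses(MPI_STATUS_SIZE, 3)
  integer :: sendbuf(3), recvbuf(3)

  call MPI_INIT(ier)
  call MPI_COMM_RANK(MPI_COMM_WORLD, rank, ier)

  ! Post three nonblocking receives from ourselves, then match them with
  ! sends, so there are three requests to wait on.
  do i = 1, 3
     call MPI_IRECV(recvbuf(i), 1, MPI_INTEGER, rank, i, MPI_COMM_WORLD, &
                    sp_request(i), ier)
  end do
  sendbuf = rank
  do i = 1, 3
     call MPI_SEND(sendbuf(i), 1, MPI_INTEGER, rank, i, MPI_COMM_WORLD, ier)
  end do

  ! Form 1: explicit status array -- matches the generated f90 interface.
  call MPI_WAITALL(3, sp_request, statuses, ier)

  ! Form 2: MPI_STATUSES_IGNORE -- permitted by the MPI standard; this is
  ! the form that g95 rejected against the older generated interfaces.
  ! call MPI_WAITALL(3, sp_request, MPI_STATUSES_IGNORE, ier)

  call MPI_FINALIZE(ier)
end program waitall_example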

This is with Open MPI 1.2a1r10186. I can check the details of the
scripts and generated files next week against whatever is the latest
version, but odds are this has not been spotted.


Michael -

Which compiler are you using?  I'm trying to replicate, and  
gfortran isn't getting mad at me (which is weird, because I would  
have expected it to get very angry at me).


I'm using g95 and I configure with:

./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-size=large --enable-static --with-f90-max-array-dim=3


Downloaded "openmpi-1.2a1r10297" today.

Looking at the scripts, I believe this problem was fixed between 10186
and 10297.

I can't test until I check for, and if needed implement, my fixes to
the other interfaces I mentioned previously.


Michael




Re: [OMPI users] F90 interfaces again

2006-06-12 Thread Jeff Squyres (jsquyres)
I think that we've fixed everything with respect to f90 except the "large" 
interface.  Let us know if we either missed something or you find something new.

Thanks!

-----Original Message-----
From: Michael Kluskens [mailto:mklusk...@ieee.org]
Sent: Mon Jun 12 09:47:52 2006
To: Open MPI Users
Subject: Re: [OMPI users] F90 interfaces again

On Jun 9, 2006, at 12:33 PM, Brian W. Barrett wrote:

> On Thu, 8 Jun 2006, Michael Kluskens wrote:
>
>> call MPI_WAITALL(3,sp_request,MPI_STATUSES_IGNORE,ier)
>>   1
>> Error: Generic subroutine 'mpi_waitall' at (1) is not consistent with
>> a specific subroutine interface
>>
>> The issue: the 3rd argument of MPI_WAITALL normally expects an integer
>> array of statuses, but the MPI_STATUSES_IGNORE constant is also
>> permitted here by the MPI standard.
>>
>> This is with Open MPI 1.2a1r10186. I can check the details of the
>> scripts and generated files next week against whatever is the latest
>> version, but odds are this has not been spotted.
>
> Michael -
>
> Which compiler are you using?  I'm trying to replicate, and  
> gfortran isn't getting mad at me (which is weird, because I would  
> have expected it to get very angry at me).

I'm using g95 and I configure with:

./configure F77=g95 FC=g95 LDFLAGS=-lSystemStubs --with-mpi-f90-size=large --enable-static --with-f90-max-array-dim=3

Downloaded "openmpi-1.2a1r10297" today.

Looking at the scripts, I believe this problem was fixed between 10186
and 10297.

I can't test until I check for, and if needed implement, my fixes to
the other interfaces I mentioned previously.

Michael




[OMPI users] Why does it suddenly not run?

2006-06-12 Thread Jens Klostermann
This morning I was running 

mpirun -v --mca btl mvapi,self -np 12 --hostfile ompimachinefile oodles . les_test1100k -parallel >> ./les_test1100k/log12 &

with openmpi-1.2a1r10111 and everything worked and still works as
expected.

Now I tried to start a second, identical job and got the following
error message:

[stokes:29489] [0,0,0] ORTE_ERROR_LOG: Error in file
runtime/orte_init_stage1.c at line 302
--
It looks like orte_init failed for some reason; your parallel process is
likely to abort.  There are many reasons that a parallel process can
fail during orte_init; some of which are due to configuration or
environment problems.  This failure appears to be an internal failure;
here's some additional information (which may only be relevant to an
Open MPI developer):

  orte_session_dir failed
  --> Returned value -1 instead of ORTE_SUCCESS

--
[stokes:29489] [0,0,0] ORTE_ERROR_LOG: Error in file
runtime/orte_system_init.c at line 42
[stokes:29489] [0,0,0] ORTE_ERROR_LOG: Error in file runtime/orte_init.c
at line 49
--
Open RTE was unable to initialize properly.  The error occured while
attempting to orte_init().  Returned value -1 instead of ORTE_SUCCESS.

Does anybody have an idea what the error might be or how to track it down?

Regards Jens





Re: [OMPI users] Why does openMPI abort processes?

2006-06-12 Thread Brian Barrett
On Sun, 2006-06-11 at 04:26 -0700, imran shaik wrote:
> Hi,
> I sometimes get this error message:
> "2 additional processes aborted, possibly by openMPI"
>
> Sometimes 2 processes, sometimes even more. Is it due to overload or
> a program error?
>
> Why does openMPI actually abort a few processes?
>  
> Can anyone explain?

Generally, this is because multiple processes in your job aborted
(exited with a signal or before MPI_FINALIZE) and mpirun only prints the
first abort message.  You can modify how many abort status messages you
want to receive with the -aborted X option to mpirun, where X is the
number of process abort messages you want to see.  The message generally
includes some information on what happened to your process.
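For example (a minimal sketch; "./my_app" and the process count stand
in for your own program and job size), the following would show abort
information for up to 8 processes instead of only the first:

  mpirun -np 8 -aborted 8 ./my_app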


Hope this helps,

Brian