[OMPI users] 5th ALCHEMY workshop on Many-core programming issues (part of ICCS 2017) CfP

2017-01-06 Thread CUDENNEC Loic

Call for Papers

5th ALCHEMY Workshop, as part of ICCS 2017

   Architecture, Languages, Compilation and
Hardware support for Emerging ManYcore systems


Important Dates:
   Submission deadline: 31st January, 2017
   Notification to authors: 10th March, 2017
   Venue: 12-14th June, 2017, Zurich, Switzerland


  As future systems aim toward increasing parallelism and heterogeneity to
tackle the so-called power wall while sustaining a roadmap of increased
performance, several challenges arise in programming such systems. The goal
of the ALCHEMY workshop is to highlight some of these challenges and explore
ways to tackle them, allowing programmers to focus on the important parts of
application design while compilers and runtime optimizations do most of the
work toward good performance.

The ALCHEMY workshop is the many-core track of ICCS. It is also a good
venue for exchange between the traditional HPC research domain and the
emerging HPES (High Performance Embedded Systems) domain, since the
programming issues are largely the same: a relatively high cost of
communication, and the difficulty of programming hundreds of cores,
often under performance and power usage constraints.

Original, high-quality submissions are encouraged on all topics related to
many-core programming issues, including (but not limited to):
  * Programming models and languages for many-cores
  * Compilers for programming languages
  * Runtime generation for parallel programming on manycores
  * Architecture support for massive parallelism management
  * Enhanced communications for CMP/manycores
  * Shared memory, data consistency models and protocols
  * New operating systems, or dedicated OS
  * Security, crypto systems for manycores
  * User feedback on existing manycore architectures
(experiments with Adapteva Epiphany, Intel Phi, Kalray MPPA, ST
 STHorm, Tilera Gx, TSAR, etc.)
  * Many-core integration within HPC systems (micro-servers)

Authors should submit their papers through EasyChair, using a maximum of
10 pages (single column, Procedia style) for full papers, or alternatively
2 to 4 pages for short papers (poster and presentation, or presentation
only).

Submission link:
https://easychair.org/conferences/submission_new.cgi?a=13543739
(paper templates are available on the submission page, top right)

ALCHEMY track site: https://sites.google.com/site/alchemyworkshop/

== Program Committee (to be extended) ==
* Antoniu Pop, Univ. of Manchester (UK)
* Jason Riedy, Georgia Tech (USA)
* Camille Coti, Université de Paris-Nord (France)
* Erwan Piriou, CEA, LIST (France)
* Vianney Lapotre, Université de Bretagne-Sud (France)
* Eric Petit, Intel (France)
* Sven Karol, TU Dresden (Germany)
* Diana Göhringer, Universitaet Bochum (Germany)
* Emil Matus, TU Dresden (Germany)

== Program Chairs ==
* Martha Johanna Sepulveda Flores, TU München (Germany)
* Jeronimo Castrillon, TU Dresden (Germany)
* Vania Marangozova-Martin, IMAG Grenoble (France)

== General Chairs ==
* Loic Cudennec, CEA LIST (France)
* Stephane Louise, CEA LIST (France)
___
users mailing list
users@lists.open-mpi.org
https://rfd.newmexicoconsortium.org/mailman/listinfo/users


Re: [OMPI users] mca_based_component warnings when running simple example code

2017-01-06 Thread Gilles Gouaillardet
Hi,

it looks like you installed Open MPI 2.0.1 in the same location as the
previous Open MPI 1.10, but you did not uninstall v1.10.
The faulty modules have very likely been removed in 2.0.1, hence the error.
You can simply remove the Open MPI plugins directory and reinstall Open MPI:

rm -rf /usr/local/lib/openmpi
make install

Cheers,

Gilles
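
The two commands above can be sketched as a small script with a dry-run
guard, so the destructive `rm -rf` is not run by accident. The `DRYRUN`
flag and the assumption that you run it from the Open MPI 2.0.1 build
tree are illustrative additions, not from the original reply:

```shell
# Sketch of the cleanup described above; run from the Open MPI 2.0.1
# build directory (assumption). Set DRYRUN=0 to actually execute.
set -eu

PLUGDIR="/usr/local/lib/openmpi"   # plugin dir from the error messages
DRYRUN="${DRYRUN:-1}"              # default to a harmless dry run

if [ "$DRYRUN" = "1" ]; then
    echo "would remove: $PLUGDIR"
    echo "would run: make install && ompi_info --version"
else
    rm -rf "$PLUGDIR"     # wipe stale 1.10-era plugins
    make install          # reinstall the 2.0.1 plugins
    ompi_info --version   # confirm the new install is picked up
fi
```

Keeping each Open MPI version in its own `--prefix` avoids this class of
problem entirely, since old plugins never linger in the new install's path.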

On Fri, Jan 6, 2017 at 6:30 PM, Solal Amouyal  wrote:
> FYI: It's my first time posting here and I'm quite a beginner in MPI.
>
> I recently updated my gfortran (from 4.7 to 6.1.0) and Open MPI (from 1.10 to
> 2.0.1) installations. Since then, I've been getting warnings at the beginning
> of the simulation (one set of warnings per processor).
>
> I reduced my code to the most minimal MPI program, and the warnings persist.
>
> program main
> use mpi_f08
> implicit none
> integer :: ierror
>
> call mpi_init(ierror)
> call mpi_finalize(ierror)
> end program main
>
> I compile my code with mpif90 main.f90, and run it either directly - ./a.out
> - or with mpirun: mpirun -np 1 ./a.out. The output is the same:
>
> [username:79762] mca_base_component_repository_open: unable to open
> mca_grpcomm_bad: dlopen(/usr/local/lib/openmpi/mca_grpcomm_bad.so, 9):
> Symbol not found: _orte_grpcomm_base_modex
>   Referenced from: /usr/local/lib/openmpi/mca_grpcomm_bad.so
>   Expected in: flat namespace
>  in /usr/local/lib/openmpi/mca_grpcomm_bad.so (ignored)
> [username:79761] mca_base_component_repository_open: unable to open
> mca_grpcomm_bad: dlopen(/usr/local/lib/openmpi/mca_grpcomm_bad.so, 9):
> Symbol not found: _orte_grpcomm_base_modex
>   Referenced from: /usr/local/lib/openmpi/mca_grpcomm_bad.so
>   Expected in: flat namespace
>  in /usr/local/lib/openmpi/mca_grpcomm_bad.so (ignored)
> [username:79761] mca_base_component_repository_open: unable to open
> mca_pml_bfo: dlopen(/usr/local/lib/openmpi/mca_pml_bfo.so, 9): Symbol not
> found: _ompi_free_list_item_t_class
>   Referenced from: /usr/local/lib/openmpi/mca_pml_bfo.so
>   Expected in: flat namespace
>  in /usr/local/lib/openmpi/mca_pml_bfo.so (ignored)
> [username:79761] mca_base_component_repository_open: coll
> "/usr/local/lib/openmpi/mca_coll_hierarch" uses an MCA interface that is not
> recognized (component MCA v2.0.0 != supported MCA v2.1.0) -- ignored
> [username:79761] mca_base_component_repository_open: unable to open
> mca_coll_ml: dlopen(/usr/local/lib/openmpi/mca_coll_ml.so, 9): Symbol not
> found: _mca_bcol_base_components_in_use
>   Referenced from: /usr/local/lib/openmpi/mca_coll_ml.so
>   Expected in: flat namespace
>  in /usr/local/lib/openmpi/mca_coll_ml.so (ignored)
>
> I ran and attached the output of ompi_info --all.
>
> I can see in one of the warnings that there's an MCA version mismatch between
> v2.0.0 and v2.1.0. I don't know if it might be related, but I made sure to
> upgrade my Open MPI after gfortran. I am using OS X 10.11.4.
>
> Thank you,
>
>
> Solal
>