Re: [OMPI users] Unable to build Open MPI with external PMIx library support

2019-01-08 Thread Eduardo Rothe via users
Hi.

I'm sorry to reply so late. I was working in a special environment created by 
cowbuilder and I could not get the config.log file. In the end, I was having 
several problems, but none of them were related to Open MPI.

Thank you for your help.

Bye

Eduardo
 

On Tuesday, 18 December 2018, 02:06:33 CET, Gilles Gouaillardet 
 wrote:  
 
 Eduardo,


By config.log, we mean the config.log file automatically generated by your 
configure command

(i.e. not the output printed by the configure command).

This is a huge file, so please compress it.
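
For example, something along these lines should work, assuming configure was
run from the top of the Open MPI source tree (the path below is only a
placeholder) and gzip is available:

   cd /path/to/openmpi-4.0.0    # wherever configure was run
   ls -lh config.log            # confirm the file exists and check its size
   gzip -9 -k config.log        # writes config.log.gz next to the original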


Cheers,


Gilles


This file should start with:

This file contains any messages produced by compilers while
running configure, to aid debugging if configure makes a mistake.

It was created by Open MPI configure 4.0.0, which was
generated by GNU Autoconf 2.69.  Invocation command line was




On 12/17/2018 7:29 PM, Eduardo Rothe via users wrote:
> Hi Howard,
>
> Thank you for your reply. I have just re-executed the whole process and 
> here is the config.log (in attachment to this message)!
>
> Just to restate: when I use internal PMIx I get the following error 
> while running mpirun (using Open MPI 4.0.0):
>
> --
> We were unable to find any usable plugins for the BFROPS framework. 
> This PMIx
> framework requires at least one plugin in order to operate. This can 
> be caused
> by any of the following:
>
> * we were unable to build any of the plugins due to some combination
>   of configure directives and available system support
>
> * no plugin was selected due to some combination of MCA parameter
>   directives versus built plugins (i.e., you excluded all the plugins
>   that were built and/or could execute)
>
> * the PMIX_INSTALL_PREFIX environment variable, or the MCA parameter
>   "mca_base_component_path", is set and doesn't point to any location
>   that includes at least one usable plugin for this framework.
>
> Please check your installation and environment.
> --
>
> Regards,
> Eduardo
>
>
> On Saturday, 15 December 2018, 18:35:44 CET, Howard Pritchard 
>  wrote:
>
>
> Hi Eduardo
>
> Could you post the config.log for the build with internal PMIx so we 
> can figure that out first?
>
> Howard
>
> Eduardo Rothe via users  > wrote on Fri, 14 Dec 2018 at 09:41:
>
>    Open MPI: 4.0.0
>    PMIx: 3.0.2
>    OS: Debian 9
>
>    I'm building a Debian package for Open MPI, and I either get the
>    following error messages while configuring:
>
>      undefined reference to symbol 'dlopen@@GLIBC_2.2.5'
>      undefined reference to symbol 'lt_dlopen'
>
>    when using the configure option:
>
>          ./configure --with-pmix=/usr/lib/x86_64-linux-gnu/pmix
>
>    or otherwise, if I use the following configure options:
>
>      ./configure --with-pmix=external
>    --with-pmix-libdir=/usr/lib/x86_64-linux-gnu/pmix
>
>    the compile succeeds, but when running mpirun I get the
>    following message:
>
>    --
>    We were unable to find any usable plugins for the BFROPS
>    framework. This PMIx
>    framework requires at least one plugin in order to operate. This
>    can be caused
>    by any of the following:
>
>    * we were unable to build any of the plugins due to some combination
>      of configure directives and available system support
>
>    * no plugin was selected due to some combination of MCA parameter
>      directives versus built plugins (i.e., you excluded all the plugins
>      that were built and/or could execute)
>
>    * the PMIX_INSTALL_PREFIX environment variable, or the MCA parameter
>      "mca_base_component_path", is set and doesn't point to any location
>      that includes at least one usable plugin for this framework.
>
>    Please check your installation and environment.
>    --
>
>    What I find most strange is that I get the same error message
>    (unable to find
>    any usable plugins for the BFROPS framework) even if I don't
>    configure
>    external PMIx support!
>
>    Can someone please give me a hint about what's going on?
>
>    Cheers!
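>
>    A couple of quick checks that might help narrow this down (a sketch only,
>    assuming the Debian PMIx install under /usr/lib/x86_64-linux-gnu/pmix and
>    that the pmix_info tool is installed):
>
>      # Is PMIX_INSTALL_PREFIX (or a component-path override) set in the environment?
>      env | grep -i -E 'PMIX_INSTALL_PREFIX|component_path'
>      # Does the external PMIx installation actually contain any BFROPS plugins?
>      find /usr/lib/x86_64-linux-gnu/pmix -iname '*bfrop*' 2>/dev/null
>      # If pmix_info is available, list the components PMIx was built with:
>      pmix_info | grep -i bfrop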
>    ___
>    users mailing list
>    users@lists.open-mpi.org 
>    https://lists.open-mpi.org/mailman/listinfo/users
>
>
> ___
> users mailing list
> users@lists.open-mpi.org
> https://lists.open-mpi.org/mailman/listinfo/users
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

[OMPI users] Suggestion to add one thing to look/check for when running OpenMPI program

2019-01-08 Thread Ewen Chan
To Whom It May Concern:

Hello. I'm new here and I got here via OpenFOAM.

In the FAQ about running OpenMPI programs, specifically the entry where someone 
might be able to run their OpenMPI program on a local node but not on a remote 
node, I would like to add the following for people to check:

Make sure that you can ssh between all of the nodes, in every direction (from 
each node to every other node).

This often requires setting up passwordless ssh access, so if you have two 
nodes, make sure that each node can ssh into the other without any prompts or 
issues.

I just had it happen where I forgot to perform this check, so ssh was prompting 
to add the remote host's ECDSA key to ~/.ssh/known_hosts, and that prompt was 
preventing OpenFOAM from running on my multi-node cluster.
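
A minimal sketch of what that check might look like for a hypothetical pair of
hosts named node1 and node2 (run the equivalent on every node, toward every
other node):

   ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519   # create a key if you don't have one
   ssh-copy-id node2                                   # enable passwordless login to node2
   ssh-keyscan node2 >> ~/.ssh/known_hosts             # accept node2's host key ahead of time
   ssh -o BatchMode=yes node2 hostname                 # must succeed without any prompt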

Thank you.

Sincerely,

Ewen Chan
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

[OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-08 Thread Udayanga Wickramasinghe
Hi,
I am running into an issue in open-mpi where it crashes abruptly
during MPI_WIN_ATTACH.

[nid00307:25463] *** An error occurred in MPI_Win_attach

[nid00307:25463] *** reported by process [140736284524545,140728898420736]

[nid00307:25463] *** on win rdma window 3

[nid00307:25463] *** MPI_ERR_RMA_ATTACH: Could not attach RMA segment

[nid00307:25463] *** MPI_ERRORS_ARE_FATAL (processes in this win will now
abort,

[nid00307:25463] ***and potentially your MPI job)


Looking more into this issue, it seems like Open MPI restricts the maximum
number of attached segments to 32. (The Open MPI 3.0 spec doesn't say a lot
about this scenario -- "The argument win must be a window that was created with
MPI_WIN_CREATE_DYNAMIC. Multiple (but nonoverlapping) memory regions may be
attached to the same window".)

To work around this, I have temporarily modified the variable
mca_osc_rdma_component.max_attach. Is there any way to configure this in
Open MPI?
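
For reference, a quick way to see whether a given build exposes this limit as
an MCA parameter (a sketch only: whether such a parameter exists depends on the
Open MPI version, and the parameter name and application below are
placeholders):

   # List the osc/rdma parameters the build exposes and look for an attach limit:
   ompi_info --param osc rdma --level 9 | grep -i attach
   # If something like osc_rdma_max_attach is listed, it could be raised at run time:
   mpirun --mca osc_rdma_max_attach 64 ./my_rma_app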

Thanks
Udayanga
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users

Re: [OMPI users] Increasing OpenMPI RMA win attach region count.

2019-01-08 Thread Udayanga Wickramasinghe
Sorry, that should be corrected to the MPI 3.0 spec [1].

[1] https://www.mpi-forum.org/docs/mpi-3.0/mpi30-report.pdf, page 443

Best Regards,
Udayanga

On Tue, Jan 8, 2019 at 11:36 PM Udayanga Wickramasinghe 
wrote:

> Hi,
> I am running into an issue in open-mpi where it crashes abruptly
> during MPI_WIN_ATTACH.
>
> [nid00307:25463] *** An error occurred in MPI_Win_attach
>
> [nid00307:25463] *** reported by process [140736284524545,140728898420736]
>
> [nid00307:25463] *** on win rdma window 3
>
> [nid00307:25463] *** MPI_ERR_RMA_ATTACH: Could not attach RMA segment
>
> [nid00307:25463] *** MPI_ERRORS_ARE_FATAL (processes in this win will now
> abort,
>
> [nid00307:25463] ***and potentially your MPI job)
>
>
> Looking more into this issue, it seems like Open MPI restricts the maximum
> number of attached segments to 32. (The Open MPI 3.0 spec doesn't say a lot
> about this scenario -- "The argument win must be a window that was created
> with MPI_WIN_CREATE_DYNAMIC. Multiple (but nonoverlapping) memory regions may
> be attached to the same window".)
>
> To work around this, I have temporarily modified the variable
> mca_osc_rdma_component.max_attach. Is there any way to configure this in
> Open MPI?
>
> Thanks
> Udayanga
>
___
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users