You can set this MCA var on a site-wide basis in a file:

    https://www.open-mpi.org/faq/?category=tuning#setting-mca-params
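
For example, a minimal sketch of an entry in $HOME/.openmpi/mca-params.conf 
(or <prefix>/etc/openmpi-mca-params.conf for a site-wide default); the value 
64 is illustrative:

    # Raise the per-window limit on dynamic attachments (default is 32)
    osc_rdma_max_attach = 64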



> On Jan 9, 2019, at 1:18 PM, Udayanga Wickramasinghe <uswic...@iu.edu> wrote:
> 
> Thanks. Yes, I am aware of that however, I currently have a requirement to 
> increase the default. 
> 
> Best,
> Udayanga
> 
> On Wed, Jan 9, 2019 at 9:10 AM Nathan Hjelm via users 
> <users@lists.open-mpi.org> wrote:
> If you need to support more attachments, you can set the value of that 
> variable in either of two ways:
> 
> Environment:
> 
> OMPI_MCA_osc_rdma_max_attach
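> 
> For example, in the shell before launching (the value 64 is illustrative):
> 
>     export OMPI_MCA_osc_rdma_max_attach=64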
> 
> 
> mpirun command line:
> 
> --mca osc_rdma_max_attach
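> 
> For example (64 and ./my_app are placeholders):
> 
>     mpirun --mca osc_rdma_max_attach 64 ./my_app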
> 
> 
> Keep in mind that each attachment may use an underlying hardware resource 
> that may be easy to exhaust (hence the low default limit). It is recommended 
> to keep the total number as small as possible.
> 
> -Nathan
> 
> > On Jan 8, 2019, at 9:36 PM, Udayanga Wickramasinghe <uswic...@iu.edu> wrote:
> > 
> > Hi,
> > I am running into an issue where Open MPI crashes abruptly during 
> > MPI_Win_attach. 
> > [nid00307:25463] *** An error occurred in MPI_Win_attach
> > [nid00307:25463] *** reported by process [140736284524545,140728898420736]
> > [nid00307:25463] *** on win rdma window 3
> > [nid00307:25463] *** MPI_ERR_RMA_ATTACH: Could not attach RMA segment
> > [nid00307:25463] *** MPI_ERRORS_ARE_FATAL (processes in this win will now 
> > abort,
> > [nid00307:25463] ***    and potentially your MPI job)
> > 
> > Looking more into this issue, it seems like Open MPI restricts the 
> > maximum number of attached segments per window to 32. (The MPI 3.0 
> > spec doesn't say much about this scenario -- "The argument win must be 
> > a window that was created with MPI_WIN_CREATE_DYNAMIC. Multiple (but 
> > nonoverlapping) memory regions may be attached to the same window")
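> > 
> > For context, a minimal sketch of the pattern that hits the limit 
> > (buffer size and loop count are illustrative, and cleanup is omitted):
> > 
> >     #include <mpi.h>
> >     #include <stdlib.h>
> > 
> >     int main(int argc, char **argv) {
> >         MPI_Init(&argc, &argv);
> >         MPI_Win win;
> >         /* dynamic window: regions are attached after creation */
> >         MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);
> >         for (int i = 0; i < 33; i++) {
> >             void *buf = malloc(4096);
> >             /* the 33rd attach exceeds the default limit of 32 and
> >                aborts under MPI_ERRORS_ARE_FATAL */
> >             MPI_Win_attach(win, buf, 4096);
> >         }
> >         MPI_Win_free(&win);
> >         MPI_Finalize();
> >         return 0;
> >     }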
> > 
> > To work around this, I have temporarily modified the variable 
> > mca_osc_rdma_component.max_attach. Is there any way to configure this 
> > in Open MPI?
> > 
> > Thanks
> > Udayanga
> 


-- 
Jeff Squyres
jsquy...@cisco.com

_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
