Why is this allocated statically? I don't understand the difficulty of a
dynamically allocated and thus unrestricted implementation. Is there some
performance advantage to a bounded static allocation? Or is it that you
use O(n) lookups and need to keep n small to avoid exposing that cost to
users?

I have usage models with thousands of attached segments, hence I need to
understand how bad this will be with Open MPI (yes, I can amortize the
overhead, but it's a pain).
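
For concreteness, the pattern I have in mind looks roughly like the
sketch below (sizes and names are illustrative, not my actual code):

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    MPI_Win win;
    MPI_Win_create_dynamic(MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* Attach many non-overlapping segments to a single dynamic window.
     * With NSEG far above osc/rdma's default limit of 32, this loop
     * fails partway through with MPI_ERR_RMA_ATTACH. */
    enum { NSEG = 1000, SEG_BYTES = 4096 };
    char **seg = malloc(NSEG * sizeof *seg);
    for (int i = 0; i < NSEG; i++) {
        seg[i] = malloc(SEG_BYTES);
        MPI_Win_attach(win, seg[i], SEG_BYTES);
    }

    /* ... RMA epochs and communication would go here ... */

    for (int i = 0; i < NSEG; i++) {
        MPI_Win_detach(win, seg[i]);
        free(seg[i]);
    }
    free(seg);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}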

Thanks,

Jeff

On Wed, Jan 9, 2019 at 8:12 AM Nathan Hjelm via users <users@lists.open-mpi.org> wrote:

> If you need to support more attachments, you can set the value of that
> variable in either of two ways:
>
> Environment variable:
>
> OMPI_MCA_osc_rdma_max_attach=<value>
>
> mpirun command line:
>
> --mca osc_rdma_max_attach <value>
>
>
> Keep in mind that each attachment may use an underlying hardware resource
> that may be easy to exhaust (hence the low default limit). It is
> recommended to keep the total number as small as possible.
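>
> For example, to raise the limit to 128 (an illustrative value; the
> application name below is a placeholder):
>
> export OMPI_MCA_osc_rdma_max_attach=128
>
> or
>
> mpirun --mca osc_rdma_max_attach 128 -n 2 ./your_app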
>
> -Nathan
>
> > On Jan 8, 2019, at 9:36 PM, Udayanga Wickramasinghe <uswic...@iu.edu> wrote:
> >
> > Hi,
> > I am running into an issue in Open MPI where it crashes abruptly
> > during MPI_WIN_ATTACH.
> > [nid00307:25463] *** An error occurred in MPI_Win_attach
> > [nid00307:25463] *** reported by process [140736284524545,140728898420736]
> > [nid00307:25463] *** on win rdma window 3
> > [nid00307:25463] *** MPI_ERR_RMA_ATTACH: Could not attach RMA segment
> > [nid00307:25463] *** MPI_ERRORS_ARE_FATAL (processes in this win will now abort,
> > [nid00307:25463] ***    and potentially your MPI job)
> >
> > Looking more into this issue, it seems like Open MPI restricts the
> > maximum number of attached segments to 32. (The MPI 3.0 spec doesn't
> > say a lot about this scenario -- "The argument win must be a window
> > that was created with MPI_WIN_CREATE_DYNAMIC. Multiple (but
> > nonoverlapping) memory regions may be attached to the same window")
> >
> > To work around this, I have temporarily modified the variable
> > mca_osc_rdma_component.max_attach. Is there any way to configure this
> > in Open MPI?
> >
> > Thanks
> > Udayanga

-- 
Jeff Hammond
jeff.scie...@gmail.com
http://jeffhammond.github.io/
_______________________________________________
users mailing list
users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/users
