FYI: discussions of Open MPI internals should be on the de...@open-mpi.org list, not the us...@open-mpi.org list. I mention this because not all OMPI developers are on the users list.
On Sep 9, 2013, at 3:59 AM, Max Staufer <max.stau...@gmx.net> wrote:

> I am still working on a small example that shows the problem;
> our problematic call is part of a fairly extensive framework, so it's not easy
> to post that part, but see below.
>
> As you can see, the subroutine is recursive and will be calling itself again
> depending on the outcome posted here.
> The MPI_ALLREDUCE of dum(3) is the part that causes the ompi_free_list to
> grow.
>
> Is there an MCA parameter to limit the growth of the ompi_free_list?
>
> Max
>
> -----------
> RECURSIVE SUBROUTINE setup(l,n,listrank)
> !
> !
>   USE dagmgpar_mem
>   IMPLICIT NONE
>   INTEGER :: l,n
>   INTEGER, OPTIONAL :: listrank(n+1:*)
>   INTEGER :: nc,ierr,i,j,k,nz
>   LOGICAL :: slcoarse
>   INTEGER, POINTER, DIMENSION(:) :: jap
>   REAL(kind(0.0d0)), POINTER, DIMENSION(:) :: ap
>   LOGICAL, SAVE :: slowcoarse
>   REAL(kind(0.0d0)) :: fw,eta,dum(3),dumsend(3)
> #ifdef WITHOUTINPLACE
>   REAL(kind(0.0d0)) :: dumbuffer(3)
> #endif
>   CHARACTER(len=13) :: prtint
>   REAL (kind(0.0d0)) :: fff(1)
> !
>   nn(l)=n
>   nlc(1)=n
>   IF (n > 0) THEN
>      nlc(2)=dt(l)%ia(n+1)-dt(l)%ia(1)
>   ELSE
>      nlc(2)=0
>   END IF
>   ngl=nlc
>   IF (l==2) slowcoarse=.FALSE.
>   slcoarse = 2*nlcp(1) < 3*nlc(1) .AND. 2*nlcp(2) < 3*nlc(2)
>   IF( l == nstep+1 .OR. l == maxlev                         &
>        .OR. ( ngl(1) <= maxcoarset)                         &
>        .OR. ( nglp(1) < 2*ngl(1) .AND. nglp(2) < 2*ngl(2)   &
>               .AND. ngl(1) <= maxcoarseslowt )              &
>        .OR. ( slowcoarse .AND. slcoarse )                   &
>        .OR. nglp(1) == ngl(1) ) THEN
>      nlev=l
>      dumsend(3)=-1.0d0
>   ELSE
>      dumsend(3)=dble(NPROC)
>   END IF
>   dumsend(1:2)=dble(nlc)
> #ifdef WITHOUTINPLACE
>   dumbuffer = dum
>   CALL MPI_ALLREDUCE(dumbuffer,dum,3,MPI_DOUBLE_PRECISION,  &
>                      MPI_SUM,ICOMM,ierr)
> #else
>   CALL MPI_ALLREDUCE(dumsend,dum,3,MPI_DOUBLE_PRECISION,    &
>                      MPI_SUM,ICOMM,ierr)
> #endif
>   ngl=dum(1:2)
>   IF (dum(3) .LE. 0.0d0) nlev=l
>   slowcoarse=slcoarse
>
>   ...

>> Yes, the number of elements each freelist is allowed to allocate can be
>> bounded. However, we need to know which freelist we should act upon.
>>
>> What exactly do you mean by "MPI_ALLREDUCE is called in a recursive way"?
>> You mean inside a loop, right?
>>
>> George.
>>
>>
>> On Sep 8, 2013, at 21:36, Max Staufer <max.stau...@gmx.net> wrote:
>>
>>> I will post a small example for testing.
>>>
>>> It is interesting to note, though, that this happens only
>>> when MPI_ALLREDUCE is called in a recursive kind of way.
>>>
>>> Is there a possibility to limit the OMPI_free_list growth via an --mca
>>> parameter?
>>>
>>> Date: Sun, 08 Sep 2013 14:51:44 +0200
>>> From: Max Staufer <max.stau...@gmx.net>
>>> To: us...@open-mpi.org
>>> Subject: [OMPI users] OMPI_LIST_GROW keeps allocating memory
>>>
>>> Hi All,
>>>
>>> Using Open MPI 1.4.5, or 1.6.5 for that matter, I came across an
>>> interesting thing: when an MPI function is called from within a
>>> recursively called subroutine (Fortran interface), the MPI_ALLREDUCE
>>> function allocates memory in the OMPI_LIST_GROW functions.
>>>
>>> It does this indefinitely. In our case OMPI allocated 100 GB.
>>>
>>> Is there a method to limit this behaviour?
>>>
>>> thanks
>>>
>>> Max

-- 
Jeff Squyres
jsquy...@cisco.com
For corporate legal information go to: http://www.cisco.com/web/about/doing_business/legal/cri/
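
For reference, a minimal, self-contained sketch of the kind of reproducer being discussed might look like the following. It mirrors the control flow of the quoted setup() routine (one MPI_ALLREDUCE per recursion level, with a negative third entry of the buffer signalling the final level), but the program and routine names, the fixed recursion depth, and the stopping convention are illustrative assumptions rather than anything taken from the actual framework; whether this stripped-down version triggers the same ompi_free_list growth is exactly what such a small example would establish.

! Minimal reproducer sketch: a recursive subroutine that performs one
! MPI_ALLREDUCE per recursion level, in the spirit of the setup() routine
! quoted above.  Names and the recursion depth are illustrative only.
PROGRAM allreduce_recursion_test
  IMPLICIT NONE
  INCLUDE 'mpif.h'
  INTEGER :: ierr, nproc, rank

  CALL MPI_INIT(ierr)
  CALL MPI_COMM_SIZE(MPI_COMM_WORLD, nproc, ierr)
  CALL MPI_COMM_RANK(MPI_COMM_WORLD, rank, ierr)

  CALL setup_level(1)

  IF (rank == 0) WRITE(*,*) 'finished recursion'
  CALL MPI_FINALIZE(ierr)

CONTAINS

  RECURSIVE SUBROUTINE setup_level(l)
    INTEGER, INTENT(IN) :: l
    ! Arbitrary depth: raising it increases both the number of
    ! MPI_ALLREDUCE calls and the Fortran call-stack usage.
    INTEGER, PARAMETER  :: maxlev = 10000
    INTEGER             :: ierr
    REAL(kind(0.0d0))   :: dum(3), dumsend(3)

    dumsend(1) = DBLE(l)
    dumsend(2) = DBLE(l)
    ! As in the quoted code, a negative third entry means "stop recursing".
    IF (l >= maxlev) THEN
       dumsend(3) = -1.0d0
    ELSE
       dumsend(3) = DBLE(nproc)
    END IF

    CALL MPI_ALLREDUCE(dumsend, dum, 3, MPI_DOUBLE_PRECISION, &
                       MPI_SUM, MPI_COMM_WORLD, ierr)

    ! Recurse while every rank still votes to continue.
    IF (dum(3) > 0.0d0) CALL setup_level(l+1)
  END SUBROUTINE setup_level

END PROGRAM allreduce_recursion_test

Built against the same Open MPI installation (e.g. mpif90 reproducer.f90 and mpirun -np 4 ./a.out), watching the resident set size of the ranks while it runs should show whether the free lists keep growing in this reduced setting. As for bounding the lists from the command line: which limit applies depends on which free list is actually growing, which is George's point above; "ompi_info --param all all | grep free_list" lists the per-component free-list bounds a given build exposes.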