Dear Wolfgang,

I just noticed a minor mistake in the graph: two of the arrows are slightly misplaced. While this does not change the main message, I am sending you the corrected figure anyway.
Lucas

On 7 December 2017 at 16:28, Lucas Campos <rmk...@gmail.com> wrote:
> Dear Wolfgang,
>
> I am sorry for not being clear.
>
> > Lucas -- I don't think I entirely understand the question. Where is the
> > time allocating memory spent? When you build the matrix, or when MUMPS
> > works on it to compute the LU decomposition?
>
> I mean that the LU decomposition solver takes a huge amount of RAM, and it
> seems to me that allocating that space once and reusing it would be
> better. Attached you can find a simple graph* showing how the free memory
> evolves over time. I ran an instance of my program with around 164k cells
> on 7 threads. As you can see, the solving step consumes a lot of RAM and
> then deallocates it after the solver finishes. What I wonder is whether it
> is useful and possible to do this allocation/freeing just once, at the
> start of the program.
>
> > You have no control over what MUMPS does -- it's a black box from our
> > perspective. Or do you know whether PETSc allows setting parameters that
> > are then passed to MUMPS?
>
> I don't know whether PETSc allows changing MUMPS' configuration. In fact,
> I have only ever used PETSc via deal.II. What I understood from the
> documentation was that UMFPACK would allow me to use a single allocation,
> but currently I am not 100% sure how to make it play nice with the PETSc
> interface, and I want to check whether there is a simpler/more direct way
> before diving into it.
>
> *: This graph is less than scientific. I simply ran free every 5 seconds
> while my program ran. But I think it gets the point across.
>
> Bests,
> Lucas
>
> On 7 December 2017 at 15:40, Wolfgang Bangerth <bange...@colostate.edu>
> wrote:
>
>> On 12/07/2017 03:12 AM, Lucas Campos wrote:
>>
>>> Currently I am using a direct LU solver via PETSc/MUMPS to solve my
>>> matrix. However, I have noticed that I spend a lot of time in
>>> allocation at every step. Is it possible (or useful) to preallocate the
>>> internal structures necessary to solve the matrix? According to [1], it
>>> is possible if I use UMFPACK, but it seems I would need to change a bit
>>> more code to still work with MPI, so it would be simpler to do it while
>>> using PETSc/MUMPS.
>>
>> Lucas -- I don't think I entirely understand the question. Where is the
>> time allocating memory spent? When you build the matrix, or when MUMPS
>> works on it to compute the LU decomposition?
>>
>> You have no control over what MUMPS does -- it's a black box from our
>> perspective. Or do you know whether PETSc allows setting parameters that
>> are then passed to MUMPS?
>>
>> Best
>>  W.
>>
>> --
>> ------------------------------------------------------------------------
>> Wolfgang Bangerth          email:                 bange...@colostate.edu
>>                            www: http://www.math.colostate.edu/~bangerth/
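
For reference on the last question: PETSc does forward solver-specific settings to MUMPS through its options database. The -mat_mumps_icntl_<n> options map to MUMPS's ICNTL(n) control parameters and can be given on the command line or set programmatically before the solver is created. Below is a minimal sketch using deal.II's PETScWrappers::SparseDirectMUMPS; the two-argument PetscOptionsSetValue assumes PETSc 3.7 or newer, and the function name, matrix, vectors, and communicator are placeholders rather than code from this thread.

  #include <deal.II/lac/petsc_solver.h>
  #include <deal.II/lac/petsc_parallel_sparse_matrix.h>
  #include <deal.II/lac/petsc_parallel_vector.h>
  #include <deal.II/lac/solver_control.h>
  #include <petscsys.h>

  using namespace dealii;

  void solve_with_mumps(const PETScWrappers::MPI::SparseMatrix &system_matrix,
                        PETScWrappers::MPI::Vector             &solution,
                        const PETScWrappers::MPI::Vector       &system_rhs,
                        const MPI_Comm                          mpi_communicator)
  {
    // ICNTL(14) is the percentage by which MUMPS enlarges its estimated
    // working space before factorizing. Raising it up front makes it less
    // likely that MUMPS has to grow its allocation mid-factorization.
    // (Two-argument PetscOptionsSetValue assumes PETSc >= 3.7; older
    // versions omit the leading NULL.)
    PetscOptionsSetValue(NULL, "-mat_mumps_icntl_14", "50");

    // deal.II's wrapper around PETSc's MUMPS interface; the option set
    // above is picked up when the underlying solver object is created.
    SolverControl                    solver_control;
    PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
    solver.solve(system_matrix, solution, system_rhs);
  }

The same settings can also be passed on the command line (e.g. -mat_mumps_icntl_14 50) if the program hands argc/argv to PETSc at startup, as deal.II's Utilities::MPI::MPI_InitFinalize does.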
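
As for the UMFPACK route mentioned above: deal.II's SparseDirectUMFPACK separates the factorization from the triangular solves, so the large allocation can indeed be made once and reused for every subsequent right-hand side, as long as the matrix itself does not change. A minimal sketch, assuming a serial (or replicated) SparseMatrix<double>; system_matrix, system_rhs, assemble_rhs, and n_steps are hypothetical placeholders:

  #include <deal.II/lac/sparse_direct.h>
  #include <deal.II/lac/sparse_matrix.h>
  #include <deal.II/lac/vector.h>

  using namespace dealii;

  // Hypothetical helper that updates the right-hand side at each step,
  // defined elsewhere.
  void assemble_rhs(const unsigned int step, Vector<double> &rhs);

  void run(const SparseMatrix<double> &system_matrix,
           Vector<double>             &system_rhs,
           const unsigned int          n_steps)
  {
    // initialize() computes -- and allocates -- the LU factorization once.
    // The memory stays alive as long as the solver object does.
    SparseDirectUMFPACK direct_solver;
    direct_solver.initialize(system_matrix);

    for (unsigned int step = 0; step < n_steps; ++step)
      {
        assemble_rhs(step, system_rhs);

        // solve() reuses the existing factorization; on return, system_rhs
        // has been overwritten with the solution.
        direct_solver.solve(system_rhs);
      }
  }

Two caveats: this is the serial code path, so with MPI the matrix first has to be made available on a single process (the extra code changes mentioned above), and if the matrix changes at every step, the factorization and its allocation have to be redone regardless of which direct solver is used.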