Lucas,
I mean that the LU Decomposition solver takes a huge amount of RAM, and it
seems to me that allocating that once and reusing the space would be better.
Attached you can find a simple graph* showing how the free memory evolves over
time. I ran an instance of my program with around 164k cells, on 7 threads.
As you can see, the solving step consumes a lot of RAM, which is freed again
once the solver finishes. What I wonder is whether it would be useful, and
possible, to do this allocation/freeing just once, at the start of the program.
I don't think it matters, for three reasons. First, you can't know in advance
how much memory a sparse LU decomposition is going to use: it depends on the
fill-in, and therefore on the sparsity pattern of the matrix. Second, MUMPS
does all of this allocation internally -- I don't think you have control over
it. Third, sparse decompositions are so expensive to compute that the cost of
allocating the memory is likely completely negligible by comparison.
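
To illustrate the third point, here is a minimal sketch of where that memory
actually lives. I'm using deal.II's SparseDirectUMFPACK as a stand-in for the
MUMPS wrapper, and the function solve_many_rhs is made up for the example:

// A minimal sketch (not your code): factorize once, then reuse the
// factorization for many right-hand sides. SparseDirectUMFPACK stands in
// for the MUMPS wrapper here; the idea is the same.
#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

#include <vector>

using namespace dealii;

void solve_many_rhs(const SparseMatrix<double>        &system_matrix,
                    const std::vector<Vector<double>> &right_hand_sides,
                    std::vector<Vector<double>>       &solutions)
{
  // All of the factorization memory is allocated in initialize(). How much
  // is needed depends on the fill-in the factorization produces, i.e., on
  // the sparsity pattern -- it cannot be known up front.
  SparseDirectUMFPACK direct_solver;
  direct_solver.initialize(system_matrix);

  // Each solve only applies the already computed factors and is cheap
  // compared to the factorization above.
  for (unsigned int i = 0; i < right_hand_sides.size(); ++i)
    direct_solver.vmult(solutions[i], right_hand_sides[i]);

  // When direct_solver goes out of scope, all of that memory is released
  // again -- that is the drop at the end of your graph.
}

The allocation happens once per factorization anyway; the place where you can
actually save something is reusing the factorization for many right-hand
sides, not reusing the allocation.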
I think there's little to gain from trying to tweak the memory allocation part
of this. The fact that MUMPS takes a lot of memory is also nothing you can
change -- that's just what you get for using a direct solver.
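
If you want to correlate the curve in your graph with individual steps of your
program, you can also log memory use from inside the code. A small sketch
(Linux only, since it reads /proc/self/status; the helper name
print_memory_usage is made up):

#include <deal.II/base/utilities.h>

#include <iostream>
#include <string>

using namespace dealii;

void print_memory_usage(const std::string &label)
{
  // Fills the struct from /proc/self/status; the values are in kB.
  Utilities::System::MemoryStats stats;
  Utilities::System::get_memory_stats(stats);

  std::cout << label
            << ": VmRSS = "  << stats.VmRSS / 1024  << " MB"
            << ", VmPeak = " << stats.VmPeak / 1024 << " MB"
            << std::endl;
}

Call it before and after the solve and you get the same picture your graph
shows, but tied to specific places in your program.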
Nice graph, though. It clearly shows what's going on!
Cheers
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/