Hi Wolfgang,

thanks a ton for taking a look at my silly example!
It seems I have finally reached the point where an update is inevitable! ;)
Maybe rewriting my code for a newer version right away would have saved me
loads of time in the long run, but I am happy that this remains a mystery.

So, I will update and comment here afterwards; also, I will work on the
MWE as you suggested, in case the upgrade does not get rid of the problem.

Regards,
Richard

On Tuesday, August 18, 2020 at 01:51:41 UTC+2, Wolfgang Bangerth wrote:
>
>
> Richard, 
>
> > I am working on incompressible flow problems and stumbled upon an issue
> > when calling PETScWrappers::SparseMatrix::mmult(), but before I describe
> > the problem in more detail, let me comment on the basic building blocks
> > of the MWE:
> >
> > (i) parallel::distributed::Triangulation & either the PETSc or Trilinos
> > linear algebra packages
> > (ii) dim-dimensional vector-valued FE space for the velocity components &
> > scalar-valued FE space for the pressure, simply constructed via:
> >
> >   FESystem<dim> fe(FE_Q<dim>(vel_degree), dim,
> >                    FE_Q<dim>(press_degree), 1);
> > 
> > So, after integrating the weak form -- or just filling the matrices with
> > some entries -- we end up with a block system
> >
> >   A u + B p = f
> >   C u + D p = g.
> >
> > To construct some preconditioners, we have to perform matrix-matrix
> > products: either for the Schur complement
> >
> >   (a) S = D - C inv(diag(A)) B
> >
> > or for some A_gamma
> >
> >   (b) A_gamma = A + gamma * B inv(diag(Mp)) C.
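> >
> > A minimal sketch of how the product in (a) can be driven through the
> > wrappers, assuming the mmult() overload that scales by a vector (as for
> > dealii::SparseMatrix); inv_diag_A, a vector holding 1/A_ii, and the
> > function name are hypothetical, not the names from the attached MWE:
> >
> >   #include <deal.II/lac/petsc_sparse_matrix.h>
> >   #include <deal.II/lac/petsc_vector.h>  // petsc_parallel_vector.h in 9.0
> >
> >   using namespace dealii;
> >
> >   // CB = C * diag(inv_diag_A) * B, via the scaling overload of mmult();
> >   // the approximation S = D - CB then follows, e.g. via MatrixBase::add().
> >   void product_for_schur(const PETScWrappers::MPI::SparseMatrix &B,
> >                          const PETScWrappers::MPI::SparseMatrix &C,
> >                          const PETScWrappers::MPI::Vector &inv_diag_A,
> >                          PETScWrappers::MPI::SparseMatrix &CB)
> >   {
> >     C.mmult(CB, B, inv_diag_A);
> >   }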
> > 
> > Completely ignoring for now why that might or might not be necessary
> > (I know that one can assemble a grad-div term and use a Laplacian on the
> > pressure space to account for the reaction term, but that is not really
> > an option in the stabilized case), we need those matrix products, and
> > here comes the problem:
> > 
> > Using either PETSc or Trilinos I get identical matrix products when
> > calling mmult(), BUT when using PETSc, the RAM slowly but steadily fills
> > up (up to 500 GB on our local cluster).
> > 
> > I came up with the attached MWE, which does nothing else than initialize
> > the system and then construct the matrix product 1000 times in a row.
>
> Nice! Can I ask you to play with this some more? I think you can make that
> code even more minimal:
> * Remove all of the commented-out stuff -- for the purposes of reproducing
>   the problem it shouldn't matter.
> * Move the matrix initialization code out of the main loop. You want to
>   show that it's the mmult that is the problem, but you have a lot of other
>   code in there as well that could in principle be the issue. If you move
>   the initialization of the individual factors out of the loop and only
>   leave whatever is absolutely necessary for the mmult in the loop, then
>   you've reduced the number of places where one needs to look (see the
>   sketch after this list).
> * I bet you could trim down the list of #includes by a good bit :-)
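>
> A purely illustrative sketch of the structure meant in the second point
> (the matrix and function names are placeholders, not the ones in the
> attached code):
>
>   #include <deal.II/lac/petsc_sparse_matrix.h>
>
>   using namespace dealii;
>
>   void stress_mmult(const PETScWrappers::MPI::SparseMatrix &B,
>                     const PETScWrappers::MPI::SparseMatrix &C)
>   {
>     // B and C have been reinit()ed and assembled once, outside this loop
>     PETScWrappers::MPI::SparseMatrix BC;
>     for (unsigned int i = 0; i < 1000; ++i)
>       B.mmult(BC, C); // only the operation under suspicion stays in the loop
>   }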
>
> You seem to be using a pretty old version of deal.II. There are a number
> of header files that no longer exist, and some code that doesn't compile
> for me. For your reference, attached is a version that compiles on the
> current master branch (though with a number of warnings). That said, it
> seems like the memory doesn't explode for me -- which raises the question
> of which versions of deal.II and PETSc you use. For me, this is deal.II
> dev and PETSc 3.7.5.
>
>
> > Am I doing anything wrong, or is this supposed to be used differently?
> > I am using deal.II v9.0.1 installed via candi, so maybe the old version
> > is the reason.
>
> Possible -- no need to chase an old bug that has already been fixed if
> you can simply upgrade.
>
>
> > Bonus question:
> > Is there similarly a way to hand a sparsity pattern over to the mmult()
> > function?
> > (For dealii::SparseMatrix there is, which is why I am asking.)
>
> DynamicSparsityPattern has compute_mmult_pattern, which should give you a 
> sparsity pattern you can then use to initialize the resulting PETSc 
> matrix. 
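>
> An untested sketch of what that could look like (the exact reinit()
> overload taking two IndexSets plus the pattern, and all variable names,
> are assumptions on my part):
>
>   #include <deal.II/base/index_set.h>
>   #include <deal.II/lac/dynamic_sparsity_pattern.h>
>   #include <deal.II/lac/petsc_sparse_matrix.h>
>
>   using namespace dealii;
>
>   void init_product(const DynamicSparsityPattern &B_pattern,
>                     const DynamicSparsityPattern &C_pattern,
>                     const IndexSet &locally_owned_rows,
>                     const IndexSet &locally_owned_cols,
>                     const MPI_Comm mpi_communicator,
>                     PETScWrappers::MPI::SparseMatrix &BC)
>   {
>     // sparsity pattern of the product B*C from the factors' patterns
>     DynamicSparsityPattern BC_pattern;
>     BC_pattern.compute_mmult_pattern(B_pattern, C_pattern);
>
>     // allocate the PETSc matrix with that pattern up front
>     BC.reinit(locally_owned_rows, locally_owned_cols,
>               BC_pattern, mpi_communicator);
>   }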
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
>
