>
> Can you narrow down where the time is lost?

These two lines

added_sp.block(0, 1).copy_from(col);
added_sp.block(1, 0).copy_from(row);

account for about 60% of the bottleneck (considering only the runtime of the 
piece of code in my original post). The remaining 40% comes from this line:

added_Jacobian_matrix.block(1, 0).reinit(
single,
partitioning,
added_sp.block(1, 0),
MPI_COMM_WORLD,
false);

Interestingly, the reinit for the block(0,1) matrix is negligible. I assume 
this is because block(1,0) lives on a single processor, whereas block(0,1) is 
distributed. All the other lines contribute negligibly to the runtime.
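
To make that guess concrete, here is a minimal sketch of what I mean by the 
two index sets (the names single and partitioning match the snippet above; 
dof_handler and pinning the extra unknown to rank 0 are illustrative 
assumptions, not my exact code):

IndexSet partitioning = dof_handler.locally_owned_dofs(); // distributed rows

IndexSet single(1);
if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0)
  single.add_index(0); // the one added unknown is owned by rank 0 only
single.compress();

// block(1,0) is the 1 x n row block: its single dense row is owned by one
// rank, so setting it up gathers data onto that process, while block(0,1)
// is the n x 1 column block whose rows keep the distributed partitioning.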


> The linear solver itself averages about 40 seconds per call. 
>
> To solve the entire system? 


Yes. By one call of the linear solver I mean one complete solve of the entire 
block system using deal.II's GMRES.
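
Schematically, one such call is just the following (system_matrix, solution, 
system_rhs, and preconditioner stand for my block objects; the preconditioner 
details are omitted, so this is a sketch rather than my exact code):

// One "call" of the linear solver = one complete GMRES solve of the whole
// block system (schematic only; object names are placeholders).
SolverControl solver_control(1000, 1e-8 * system_rhs.l2_norm());
SolverGMRES<TrilinosWrappers::MPI::BlockVector> gmres(solver_control);

gmres.solve(system_matrix, solution, system_rhs, preconditioner);
// this single solve averages about 40 seconds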


> What do you actually later put into the (1,0) and (0,1) blocks of the 
> matrix? The row and col vectors you build above, which are just ones?


Later, I assemble two TrilinosWrappers::MPI::Vector objects, just as one 
would assemble a rhs, and fill the (1,0) and (0,1) blocks with the entries of 
these vectors. That step is fast and takes about 0.01 seconds.

The reason I fill the row and col vectors with ones (correct me if I'm 
misunderstanding) is so that the sparsity pattern gets an entry in every 
position. The vectors I later use to fill the (0,1) and (1,0) matrices are 
dense, so this seemed like the safest choice.
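
For completeness, that fill step looks roughly like this (border_vec is a 
placeholder name for one of the assembled vectors; this is an illustrative 
sketch, not my exact code):

// Copy the entries of an assembled TrilinosWrappers::MPI::Vector into the
// dense n x 1 column block, one locally owned entry at a time.
for (const auto i : border_vec.locally_owned_elements())
  added_Jacobian_matrix.block(0, 1).set(i, 0, border_vec[i]);
added_Jacobian_matrix.block(0, 1).compress(VectorOperation::insert);

// The 1 x n row block is filled analogously from the second vector. Since
// every entry of these blocks may be non-zero, row and col are filled with
// ones beforehand so that copy_from() records an entry at every position
// of the corresponding sparsity pattern blocks.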
