On 9/27/22 23:50, Rahul Gopalan Ramachandran wrote:
Thank you for the clarification. MPI_Isend should be the way to fix this.
Alternatively, what is your opinion on copying the matrix to a single processor
(if that is possible) using MPI_Gather? Would the cell iterators then no longer
work? Ignoring scalability, would this also be an option?
At least in principle, of course, we'd like to avoid writing programs
that we know can't scale because each process stores data replicated
everywhere -- like the entire matrix. In practice, if your goal is to
run on 10 or 20 processes, this may still work, though you should
recognize that the system matrix is probably the largest object in
your program (even if you fully distribute it).
If your goal is to replicate the matrix on every process, it might be
easiest to create a data structure that collects all of the local
contributions (local matrix, plus the dof indices array) and send that
around between processes, which then all build their own matrix. That
may be simpler than sending around the matrix itself, because the
latter is a PETSc object whose internal representation you cannot
easily access.
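
A minimal sketch of what I mean, assuming the usual deal.II cell loop
and that your deal.II version has Utilities::MPI::all_gather (which
serializes objects via boost). The names `Contribution`,
`local_contributions`, `mpi_communicator`, and `replicated_matrix` are
mine, not from your code:

  #include <deal.II/base/mpi.h>
  #include <deal.II/lac/sparse_matrix.h>

  #include <utility>
  #include <vector>

  using namespace dealii;

  // One cell's contribution: its dof indices plus the local matrix,
  // stored row by row.
  using Contribution = std::pair<std::vector<types::global_dof_index>,
                                 std::vector<double>>;

  // During assembly, push each cell's data into a buffer instead of
  // (or in addition to) writing into the distributed PETSc matrix:
  //
  //   cell->get_dof_indices(local_dof_indices);
  //   local_contributions.emplace_back(
  //     local_dof_indices,
  //     std::vector<double>(&cell_matrix(0, 0),
  //                         &cell_matrix(0, 0) +
  //                           cell_matrix.m() * cell_matrix.n()));

  // After assembly, everyone sends their buffer to everyone, and each
  // process adds all contributions into its own replicated matrix.
  // The matrix's sparsity pattern must have been built from the whole
  // DoFHandler on every process.
  void replicate_matrix(const MPI_Comm mpi_communicator,
                        const std::vector<Contribution> &local_contributions,
                        SparseMatrix<double> &replicated_matrix)
  {
    const std::vector<std::vector<Contribution>> all_contributions =
      Utilities::MPI::all_gather(mpi_communicator, local_contributions);

    for (const auto &contributions : all_contributions)
      for (const auto &[dof_indices, values] : contributions)
        {
          const unsigned int n = dof_indices.size();
          for (unsigned int i = 0; i < n; ++i)
            for (unsigned int j = 0; j < n; ++j)
              replicated_matrix.add(dof_indices[i],
                                    dof_indices[j],
                                    values[i * n + j]);
        }
  }

The all_gather step is of course not scalable either -- every process
ends up holding every contribution -- but for the 10 or 20 processes
mentioned above, that is exactly the trade-off you would be accepting.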
Best
W.
--
------------------------------------------------------------------------
Wolfgang Bangerth email: bange...@colostate.edu
www: http://www.math.colostate.edu/~bangerth/