For error computations based on cellwise errors, you can use
VectorTools::compute_global_error(), which does the MPI communication
for you:
https://www.dealii.org/developer/doxygen/deal.II/namespaceVectorTools.html#a21eb62d70953182dcc2b15c4e14dd533
See step-55 for an example.
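Roughly, the pattern from step-55 looks like this (a sketch only; the
names triangulation, dof_handler, fe, locally_relevant_solution, and
Solution<dim> are assumed from a step-55-like parallel program):

  // Per-cell errors; only entries for locally owned cells get filled.
  // All names here are assumptions from a step-55-like program.
  Vector<double> cellwise_errors(triangulation.n_active_cells());
  VectorTools::integrate_difference(dof_handler,
                                    locally_relevant_solution,
                                    Solution<dim>(),
                                    cellwise_errors,
                                    QGauss<dim>(fe.degree + 2),
                                    VectorTools::L2_norm);

  // Sums the local contributions over all MPI ranks and takes the
  // root corresponding to the requested norm.
  const double L2_error =
    VectorTools::compute_global_error(triangulation,
                                      cellwise_errors,
                                      VectorTools::L2_norm);

compute_global_error() gets the communicator from the triangulation,
so no explicit MPI calls are needed on your side.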
On 8/17/22 03:10, chong liu wrote:
I modified Step-74 based on the error_estimation part of Step-50. It
works for the attached step-74-mpi, but it does not work for the
attached step-74-mpi-error. The only difference between the two is the
location of the output command, as the attached figure 1 shows.
On 8/13/22 20:36, chong liu wrote:
Thank you for your reply. The link you shared is extremely helpful. I will try
to extend Step-74 based on the ideas in Step-50.
If you make that work, it would actually be quite nice to have that as
a code gallery program! Feel free to submit it as such.
Hi Chong,
MeshWorker does work without much effort in MPI-parallel code and is
made to help with exactly this (who assembles what is non-trivial if
you have hanging nodes and processor boundaries). The only thing you
have to watch out for is supplying the right flags that determine the
cells and faces each process assembles.
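For illustration, a minimal sketch of such a loop, assuming the
cell_worker, boundary_worker, face_worker, copier, scratch_data, and
copy_data objects from step-74. The flag combination shown (own cells,
boundary faces, owned interior faces once, ghost faces once) is one
common choice for distributed DG assembly, not the only one:

  // Which cells and faces this MPI rank assembles. All names besides
  // the MeshWorker flags are assumed from a step-74-like program.
  const MeshWorker::AssembleFlags flags =
    MeshWorker::assemble_own_cells |               // locally owned cells
    MeshWorker::assemble_boundary_faces |          // faces on the domain boundary
    MeshWorker::assemble_own_interior_faces_once | // each owned interior face once
    MeshWorker::assemble_ghost_faces_once;         // owned/ghost faces, on one rank only

  MeshWorker::mesh_loop(dof_handler.begin_active(),
                        dof_handler.end(),
                        cell_worker,
                        copier,
                        scratch_data,
                        copy_data,
                        flags,
                        boundary_worker,
                        face_worker);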