Re: [deal.II] Refine per direction

2022-08-17 Thread Uclus Heis
Dear Wolfgang, Thank you very much, I could solve that. I would like to ask another question if that is ok. When I try to export the mesh, using a parallel::distributed::Triangulation with MPI, I am not able to export the whole mesh. What I get is the pieces of the mesh corresponding to a cert

[deal.II] Re: dealii and hdf5 compile problem

2022-08-17 Thread Praveen C
The detailed.log shows this #DEAL_II_WITH_HDF5 set up with external dependencies #HDF5_VERSION = 1.12.2 #HDF5_DIR = /Users/praveen/Applications/spack/opt/spack/darwin-monterey-m1/apple-clang-13.1.6/hdf5-1.12.2-gxrwbuzg3xom562obmqaqtu5forevio5/cmake #HDF

Re: [deal.II] Refine per direction

2022-08-17 Thread Daniel Arndt
Uclus, Use GridOut::write_vtu or GridOut::write_vtu_with_pvtu_record as demonstrated in step-40 instead. Best, Daniel On Wed, Aug 17, 2022 at 3:06 AM Uclus Heis wrote: > Dear Wolfgang, > > Thank you very much I could solve that. > > I would like to ask another question if it is ok. When I try
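A minimal sketch along the lines of step-40 (not the code from this thread; it assumes a parallel::distributed::Triangulation<dim> "triangulation", an attached DoFHandler<dim> "dof_handler", and an MPI communicator "mpi_communicator" already exist):

    #include <deal.II/lac/vector.h>
    #include <deal.II/numerics/data_out.h>

    DataOut<dim> data_out;
    data_out.attach_dof_handler(dof_handler);

    // Optional: mark each cell with its owning rank so the parallel
    // partition becomes visible in the output.
    Vector<float> subdomain(triangulation.n_active_cells());
    for (unsigned int i = 0; i < subdomain.size(); ++i)
      subdomain(i) = triangulation.locally_owned_subdomain();
    data_out.add_data_vector(subdomain, "subdomain");

    data_out.build_patches();

    // Every rank writes its own .vtu piece; rank 0 also writes the .pvtu
    // record that ties the pieces together.
    data_out.write_vtu_with_pvtu_record("./", "mesh", 0, mpi_communicator, 2);

With the .pvtu record, ParaView or VisIt can open the whole mesh at once instead of the per-process pieces.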

Re: [deal.II] get_generalized_support_points() returns only a vector of size 12 ?

2022-08-17 Thread Wolfgang Bangerth
I am debugging a program using the function 'get_generalized_support_points()' (where has_support_points()=0, while has_generalized_support_points()=1). My FE system is defined as 'FESystem<3> fe(FE_Nedelec<3>(0), 2);', therefore, each active cell has 12*2 dofs. So I would als
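For reference, a small sketch of how the numbers fit together (hedged: it relies on the documented behavior that the generalized support points of an FESystem are the union of those of its base elements, so the two Nedelec copies share the same 12 edge points while the element still has 24 DoFs):

    #include <deal.II/fe/fe_nedelec.h>
    #include <deal.II/fe/fe_system.h>
    #include <deal.II/lac/vector.h>
    #include <iostream>
    #include <vector>

    int main()
    {
      using namespace dealii;

      const FESystem<3> fe(FE_Nedelec<3>(0), 2);

      std::cout << fe.get_generalized_support_points().size() << '\n'; // 12
      std::cout << fe.n_dofs_per_cell() << '\n';                       // 24

      // All 24 DoF values are recovered by handing the element one vector of
      // function values per support point, with one entry per vector
      // component (2 x 3 = 6 here):
      std::vector<Vector<double>> point_values(
        fe.get_generalized_support_points().size(),
        Vector<double>(fe.n_components()));
      std::vector<double> dof_values(fe.n_dofs_per_cell());
      fe.convert_generalized_support_point_values_to_dof_values(point_values,
                                                                dof_values);
    }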

Re: [deal.II] Solving Step-74 by MPI

2022-08-17 Thread Wolfgang Bangerth
On 8/17/22 03:10, chong liu wrote: I modified Step-74 based on the error_estimation part of Step-50. I found that it works for the attached step-74-mpi, while it does not work for the attached step-74-mpi-error. The only difference is the location of the output command, as the attached figure 1 sh

[deal.II] Iterating over mesh cells in a custom order

2022-08-17 Thread Corbin Foucart
Hello everyone, I have a problem in which I'm propagating information downwards in depth by solving the same local finite element problem on each element in an adaptive grid. The only condition is that the cells above the current cell must have already been worked on. I'm looking for a way to

[deal.II] Re: Iterating over mesh cells in a custom order

2022-08-17 Thread Bruno Turcksin
Corbin, It's possible to do it using WorkStream::run (see the documentation). However, you need to create the ordering manually by "coloring" the cells. All the cells in the same color can be worked on
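A hedged sketch of the colored WorkStream::run overload (the grouping by refinement level is only an illustration of "coloring"; ScratchData, CopyData and the two lambdas stand in for whatever the local solve needs, and "triangulation" is assumed to exist in the surrounding program):

    #include <deal.II/base/work_stream.h>
    #include <deal.II/grid/tria.h>
    #include <vector>

    struct ScratchData {};
    struct CopyData {};

    using CellIterator = typename Triangulation<dim>::active_cell_iterator;

    // Illustrative coloring: one color per refinement level, so coarser cells
    // are finished before finer ones start. The real grouping has to encode
    // whatever "the cells above" means in the application.
    std::vector<std::vector<CellIterator>> colored_cells(triangulation.n_levels());
    for (const auto &cell : triangulation.active_cell_iterators())
      colored_cells[cell->level()].push_back(cell);

    // Colors are processed one after another; cells within one color may be
    // worked on in parallel.
    WorkStream::run(
      colored_cells,
      [](const CellIterator &cell, ScratchData &, CopyData &) {
        // solve the local problem on 'cell'
      },
      [](const CopyData &) {
        // copy local results into global data structures
      },
      ScratchData(),
      CopyData());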

Re: [deal.II] Re: Iterating over mesh cells in a custom order

2022-08-17 Thread Wolfgang Bangerth
On 8/17/22 13:04, Bruno Turcksin wrote: It's possible to do it using WorkStream::run (see here

Re: [deal.II] Solving Step-74 by MPI

2022-08-17 Thread Timo Heister
For error computations using cellwise errors you can use VectorTools::compute_global_error(), which does the MPI communication for you: https://www.dealii.org/developer/doxygen/deal.II/namespaceVectorTools.html#a21eb62d70953182dcc2b15c4e14dd533 See step-55 for an example. On Wed, Aug 17, 2022 at 1:4
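A minimal sketch in the spirit of step-55 (assuming "triangulation", "dof_handler", "fe", a ghosted solution vector "locally_relevant_solution", and an exact-solution Function called ExactSolution<dim> exist in the surrounding program):

    #include <deal.II/base/quadrature_lib.h>
    #include <deal.II/lac/vector.h>
    #include <deal.II/numerics/vector_tools.h>

    Vector<float> cellwise_errors(triangulation.n_active_cells());
    VectorTools::integrate_difference(dof_handler,
                                      locally_relevant_solution,
                                      ExactSolution<dim>(),
                                      cellwise_errors,
                                      QGauss<dim>(fe.degree + 2),
                                      VectorTools::L2_norm);

    // compute_global_error() accumulates the per-cell contributions over all
    // MPI ranks, so no hand-written MPI_Allreduce is needed.
    const double L2_error =
      VectorTools::compute_global_error(triangulation,
                                        cellwise_errors,
                                        VectorTools::L2_norm);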