Dear Frederik,

Sorry for taking a little time to get back to you.

What you've mentioned in terms of a DiscontinuousQuadratureDataTransfer is exactly the concept that we laid out in this issue when we first implemented this class (see the last point in the checklist): https://github.com/dealii/dealii/issues/2555

In my opinion, inheriting a value from the closest QP (as if there are patches on a cell where the internal variable values are constant) is a good first attempt to deal with this difficult situation. If you're willing to try to implement it then I'd be happy to provide assistance if you need it.

Best regards,
Jean-Paul
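A minimal sketch of what that nearest-QP inheritance could look like for a single parent/child pair, assuming the same quadrature rule on parent and children and reusing the PointHistory type from the quoted code below (the function name is made up for illustration; in the parallel case the values would additionally have to travel with the cells during repartitioning, which is what the transfer classes handle internally):

#include <deal.II/base/geometry_info.h>
#include <deal.II/base/point.h>
#include <deal.II/base/quadrature.h>

#include <memory>
#include <vector>

using namespace dealii;

// PointHistory<dim> is the user-defined storage type from the quoted code
// below. For every quadrature point of one child cell, find the parent
// quadrature point that is closest in the parent's reference coordinates
// and copy the internal variable from there.
template <int dim>
void inherit_from_nearest_parent_qp (
    const Quadrature<dim>                                    &quadrature,
    const std::vector<std::shared_ptr<PointHistory<dim> > >  &parent_data,
    std::vector<std::shared_ptr<PointHistory<dim> > >        &child_data,
    const unsigned int                                        child_index)
{
  const std::vector<Point<dim> > &ref_points = quadrature.get_points();

  for (unsigned int q = 0; q < quadrature.size(); ++q)
    {
      // Express the child's q-th quadrature point in the parent's
      // reference cell.
      const Point<dim> p_in_parent =
        GeometryInfo<dim>::child_to_cell_coordinates (ref_points[q],
                                                      child_index);

      // Find the nearest parent quadrature point.
      unsigned int nearest      = 0;
      double       min_distance = p_in_parent.distance (ref_points[0]);
      for (unsigned int pq = 1; pq < quadrature.size(); ++pq)
        {
          const double d = p_in_parent.distance (ref_points[pq]);
          if (d < min_distance)
            {
              min_distance = d;
              nearest      = pq;
            }
        }

      // "Inherit" the internal variable from that point.
      child_data[q]->status = parent_data[nearest]->status;
    }
}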
> On 31 Oct 2017, at 12:08, 'Frederik S.' via deal.II User Group
> <dealii@googlegroups.com> wrote:
>
> Hi Jean-Paul,
>
> Thank you for your answer!
> I have a truly discontinuous case, the case you mentioned with internal yield
> variables. Do you think it will be easier to use the method Timo mentioned
> "from scratch", or will it be easier to copy the source files of the
> ContinuousQuadratureDataTransfer class and alter them into a
> DiscontinuousQuadratureDataTransfer with some kind of nearest-neighbor
> interpolation?
> It will probably still be reasonable to use ContinuousQuadratureDataTransfer
> for the continuous data I have. I still don't fully understand which order to
> use for the mass_quadrature rule, though. data_quadrature is the quadrature
> rule I'm using in my model, and projection_fe has to be of an order high
> enough to interpolate this data correctly, am I at least right there?
>
> Best regards,
> Frederik
>
> PS: Thanks, I forgot GeometryInfo at this point.
>
> On Friday, 27 October 2017 at 16:22:14 UTC+2, Jean-Paul Pelteret wrote:
> Hi Frederik,
>
> I'm being wildly presumptuous here (so feel free to say if I've presumed
> incorrectly), but unless you have a special set of data that needs to be
> considered then the ContinuousQuadratureDataTransfer class
> (http://dealii.org/8.5.0/doxygen/deal.II/classparallel_1_1distributed_1_1ContinuousQuadratureDataTransfer.html)
> may still be relevant to your use case. There is no requirement that your
> global FE space be continuous to use this class. What's relevant is this
> snippet from the documentation:
>
> "the data located at data_quadrature of each cell is L2-projected to the
> continuous space defined by a single FiniteElement projection_fe"
>
> What happens is that, for each cell, the specified QP data is projected to
> the finite element space specified by the input projection_fe, and then
> interpolated onto the new cell (or child cells in the case of refinement)
> of the refined triangulation. So I think it might be possible that you've
> misinterpreted what exactly the "Continuous" part of the class name refers
> to: it is assumed that for each individual cell the data is continuous
> within the cell (i.e. can be directly interpolated using the input FE),
> while the data between cells may not be. An example case where data is truly
> discontinuous within a cell (and therefore where this class is not
> applicable) would be the internal variables for yield or damage in
> plasticity. Would you care to elaborate more on what you're using this for?
>
> Best regards,
> Jean-Paul
>
> P.S. You really want to use the GeometryInfo class
> (http://dealii.org/8.5.0/doxygen/deal.II/structGeometryInfo.html#af8c7865e177be882934e10c1b1a8a188)
> to get this value:
> for (unsigned int child=0; child<8; child++){
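For reference, a rough sketch of how that ContinuousQuadratureDataTransfer workflow fits together in deal.II 8.5, under the assumption of a single scalar internal variable per quadrature point (the data type MyQPData, the FE degree and the quadrature orders are placeholders, not anything from this thread):

#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/quadrature_point_data.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/fe/fe_q.h>

#include <vector>

using namespace dealii;

// Anything handed to the transfer class must know how to flatten itself
// into a vector of doubles.
struct MyQPData : public TransferableQuadraturePointData
{
  double value;

  unsigned int number_of_values () const override
  { return 1; }
  void pack_values (std::vector<double> &v) const override
  { v.assign (1, value); }
  void unpack_values (const std::vector<double> &v) override
  { value = v[0]; }
};

template <int dim>
void refine_and_transfer_qp_data (
    parallel::distributed::Triangulation<dim> &triangulation,
    CellDataStorage<typename parallel::distributed::Triangulation<dim>::cell_iterator,
                    MyQPData> &data_storage)
{
  FE_Q<dim>   projection_fe (1);    // per-cell space the QP data is L2-projected onto
  QGauss<dim> mass_quadrature (2);  // used to integrate projection_fe's mass matrix
  QGauss<dim> data_quadrature (2);  // the rule at which the QP data actually lives

  parallel::distributed::ContinuousQuadratureDataTransfer<dim, MyQPData>
    data_transfer (projection_fe, mass_quadrature, data_quadrature);

  // ... set refinement flags here ...

  triangulation.prepare_coarsening_and_refinement ();
  data_transfer.prepare_for_coarsening_and_refinement (triangulation, data_storage);
  triangulation.execute_coarsening_and_refinement ();

  // Allocate storage on the new active cells, then pull the projected
  // data back from the transferred representation.
  for (typename parallel::distributed::Triangulation<dim>::active_cell_iterator
         cell = triangulation.begin_active(); cell != triangulation.end(); ++cell)
    if (cell->is_locally_owned())
      data_storage.initialize (cell, data_quadrature.size());
  data_transfer.interpolate ();
}

If I read the documentation correctly, mass_quadrature only needs to integrate projection_fe's mass matrix exactly (QGauss of order fe_degree+1 in this sketch), while data_quadrature should simply be the rule at which the model stores its data.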
>> On 26 Oct 2017, at 18:50, 'Frederik S.' via deal.II User Group
>> <dea...@googlegroups.com> wrote:
>>
>> Hello!
>>
>> I've got a question on the usage of CellDataStorage for refined cells in a
>> parallelized simulation. Below you'll find a sample of the code I use for
>> refinement.
>>
>> I'm transferring the solution as stated in
>> https://www.dealii.org/8.5.0/doxygen/deal.II/classparallel_1_1distributed_1_1SolutionTransfer.html
>> and I sadly can't use ContinuousQuadratureDataTransfer, because my CellData
>> is discontinuous. Therefore I'm iterating through all cells, looking at each
>> cell's parent and copying the CellData of the parent to the child cells (the
>> blue part below; in reality the copying process is much longer, but then the
>> code would be unnecessarily complex for this example). If I run the code
>> with just one CPU core everything works perfectly fine, so in this example
>> every child cell has its parent's CellData. But for simulations with more
>> than one core I just copy zeros into the child cells' CellData (though I do
>> not get a runtime error).
>>
>> Where did I make a mistake? Or is it just not possible to access a parent
>> cell's CellData on parallel meshes?
>>
>> Thanks in advance!
>>
>> --------------------------------------------------------
>>
>> template <int dim>
>> void TEST<dim>::refine_grid (){
>>   const unsigned int n_q_points = quadrature_formula.size();
>>
>>   std::vector<const PETScWrappers::MPI::Vector *> solution_vectors (2);
>>   solution_vectors[0] = &solution;
>>   solution_vectors[1] = &old_solution;
>>
>>   parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector>
>>     soltrans(dof_handler);
>>
>>   typename parallel::distributed::Triangulation<dim>::active_cell_iterator
>>     cell = triangulation.begin_active(), endc = triangulation.end();
>>   for (; cell!=endc; ++cell){
>>     if (cell->subdomain_id() == this_mpi_process){
>>       cell->set_refine_flag();
>>     }
>>   }
>>
>>   triangulation.prepare_coarsening_and_refinement();
>>   soltrans.prepare_for_coarsening_and_refinement(solution_vectors);
>>   triangulation.execute_coarsening_and_refinement();
>>
>>   setup_system (0);
>>
>>   typename parallel::distributed::Triangulation<dim>::active_cell_iterator
>>     cell2 = triangulation.begin_active(), endc2 = triangulation.end();
>>   for (; cell2!=endc2; ++cell2){
>>     point_history_accessor.initialize(cell2, n_q_points);
>>   }
>>
>>   int highest_level_after_refinement = triangulation.n_global_levels()-1;
>>
>>   PETScWrappers::MPI::Vector interpolated_solution(system_rhs);
>>   PETScWrappers::MPI::Vector interpolated_old_solution(system_rhs);
>>   std::vector<PETScWrappers::MPI::Vector *> tmp (2);
>>   tmp[0] = &(interpolated_solution);
>>   tmp[1] = &(interpolated_old_solution);
>>   soltrans.interpolate(tmp);
>>
>>   cell2 = triangulation.begin_active(), endc2 = triangulation.end();
>>   for (; cell2!=endc2; ++cell2)
>>     if(cell2->level()==highest_level_after_refinement){
>>       typename parallel::distributed::Triangulation<dim>::cell_iterator
>>         cell_p = cell2->parent();
>>       const std::vector<std::shared_ptr<PointHistory<dim> > >
>>         point_history_p = point_history_accessor.get_data(cell_p);
>>
>>       for (unsigned int child=0; child<8; child++){
>>         std::vector<std::shared_ptr<PointHistory<dim> > >
>>           point_history = point_history_accessor.get_data(cell_p->child(child));
>>         for (unsigned int q_point=0; q_point<n_q_points; q_point++){
>>           point_history[q_point]->status = point_history_p[q_point]->status;
>>         }
>>       }
>>     }
>>
>>   constraints.clear ();
>>   constraints.reinit (locally_relevant_dofs);
>>   DoFTools::make_hanging_node_constraints (dof_handler, constraints);
>>   constraints.close ();
>>   constraints.distribute(interpolated_solution);
>>   constraints.distribute(interpolated_old_solution);
>>
>>   solution = interpolated_solution;
>>   old_solution = interpolated_old_solution;
>>   dof_handler.distribute_dofs (fe);
>> }