Hello! I've got a question about using CellDataStorage on refined cells in a parallelized simulation. Below you'll find a sample of the code I use for refinement.
I'm transferring the solution as described in https://www.dealii.org/8.5.0/doxygen/deal.II/classparallel_1_1distributed_1_1SolutionTransfer.html, and unfortunately I can't use ContinuousQuadratureDataTransfer because my CellData is discontinuous. Therefore I iterate through all cells, look at each cell's parent, and copy the parent's CellData to the child cells (the marked part of the code below; in reality the copying process is much longer, but that would make this example unnecessarily complex). If I run the code with just one CPU core, everything works perfectly fine: every child cell ends up with its parent's CellData. But for simulations with more than one core, I only copy zeros into the child cells' CellData (without getting a runtime error). Where did I make a mistake? Or is it simply not possible to access a parent cell's CellData on parallel meshes? Thanks in advance!

--------------------------------------------------------

template <int dim>
void TEST<dim>::refine_grid ()
{
  const unsigned int n_q_points = quadrature_formula.size();

  std::vector<const PETScWrappers::MPI::Vector *> solution_vectors (2);
  solution_vectors[0] = &solution;
  solution_vectors[1] = &old_solution;

  parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector> soltrans (dof_handler);

  typename parallel::distributed::Triangulation<dim>::active_cell_iterator
    cell = triangulation.begin_active(),
    endc = triangulation.end();
  for (; cell != endc; ++cell)
    if (cell->subdomain_id() == this_mpi_process)
      cell->set_refine_flag();

  triangulation.prepare_coarsening_and_refinement();
  soltrans.prepare_for_coarsening_and_refinement(solution_vectors);
  triangulation.execute_coarsening_and_refinement();

  setup_system (0);

  typename parallel::distributed::Triangulation<dim>::active_cell_iterator
    cell2 = triangulation.begin_active(),
    endc2 = triangulation.end();
  for (; cell2 != endc2; ++cell2)
    point_history_accessor.initialize(cell2, n_q_points);

  int highest_level_after_refinement = triangulation.n_global_levels() - 1;

  PETScWrappers::MPI::Vector interpolated_solution(system_rhs);
  PETScWrappers::MPI::Vector interpolated_old_solution(system_rhs);
  std::vector<PETScWrappers::MPI::Vector *> tmp (2);
  tmp[0] = &interpolated_solution;
  tmp[1] = &interpolated_old_solution;
  soltrans.interpolate(tmp);

  // --- the part in question: copy the parent's CellData to its child cells ---
  cell2 = triangulation.begin_active(), endc2 = triangulation.end();
  for (; cell2 != endc2; ++cell2)
    if (cell2->level() == highest_level_after_refinement)
      {
        typename parallel::distributed::Triangulation<dim>::cell_iterator cell_p = cell2->parent();
        const std::vector<std::shared_ptr<PointHistory<dim> > > point_history_p =
          point_history_accessor.get_data(cell_p);

        for (unsigned int child = 0; child < 8; ++child)
          {
            std::vector<std::shared_ptr<PointHistory<dim> > > point_history =
              point_history_accessor.get_data(cell_p->child(child));
            for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
              point_history[q_point]->status = point_history_p[q_point]->status;
          }
      }
  // --- end of the part in question ---

  constraints.clear ();
  constraints.reinit (locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints (dof_handler, constraints);
  constraints.close ();
  constraints.distribute(interpolated_solution);
  constraints.distribute(interpolated_old_solution);

  solution = interpolated_solution;
  old_solution = interpolated_old_solution;

  dof_handler.distribute_dofs (fe);
}

--------------------------------------------------------
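For completeness, here is a small diagnostic I could run right after soltrans.interpolate(tmp), i.e., where the marked copy loop starts, to see what each rank actually finds in the parent's CellData. This is only a sketch: print_parent_status is a made-up name, StorageType stands for whatever CellDataStorage<...> instantiation point_history_accessor has, and status is the PointHistory member from the code above.

#include <deal.II/distributed/tria.h>
#include <iostream>

using namespace dealii;

// Hypothetical diagnostic helper (illustration only). It walks the locally
// owned cells on the finest level, asks the CellDataStorage object for the
// parent's data -- exactly like the copy loop above -- and prints the stored
// 'status' values together with the MPI rank, so one can see on which ranks
// the parent's data is already zero before the copy happens.
template <int dim, typename StorageType>
void print_parent_status (const parallel::distributed::Triangulation<dim> &triangulation,
                          StorageType                                     &point_history_accessor,
                          const unsigned int                               n_q_points,
                          const unsigned int                               this_mpi_process)
{
  const int finest_level = triangulation.n_global_levels() - 1;

  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned() && cell->level() == finest_level)
      {
        // same call as in the copy loop: data attached to the (now inactive) parent
        const auto parent_data = point_history_accessor.get_data(cell->parent());

        for (unsigned int q = 0; q < n_q_points; ++q)
          std::cout << "rank " << this_mpi_process
                    << "  cell " << cell->id()
                    << "  q_point " << q
                    << "  parent status = " << parent_data[q]->status
                    << std::endl;
      }
}

If the printed parent values are already zero on the ranks with more than one process, the problem would lie in the parent data itself rather than in the copy into the children; if they are non-zero there, the copy is what goes wrong.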