Thank you, Timo. Your remarks have been very useful. It turned out that I made a mistake in the way the mesh was prepared: some hanging nodes were not properly dealt with. This also caused the related issue that I shared here some time ago (Nov. 25).

This leads to another question that I take the opportunity to ask. Suppose that a run is too long for the time slot allocated on a large-scale computer. In such a case, one wants to restart the computation from a given time, which requires storing the history of the computation up to that time: all data kept in cell->user_pointer(), the load data, and the mesh. How can one store the latter correctly, including its partition and hanging nodes? I understand that saving the mesh in "ucd" or a similar format may not be the right strategy.
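From the documentation, my guess is that the intended mechanism is parallel::distributed::Triangulation::save()/load() together with parallel::distributed::SolutionTransfer, with cell-attached data (what I now keep in cell->user_pointer()) packed through register_data_attach()/notify_ready_to_unpack() so that save() writes it as well; the hanging-node constraints would then simply be recomputed from the restored mesh. A minimal sketch of what I have in mind, assuming PETSc vectors; the function names, the file name "restart.mesh", and the exact spelling of the transfer calls (prepare_for_serialization() was prepare_serialization() in older releases) are my assumptions:

  #include <deal.II/distributed/tria.h>
  #include <deal.II/distributed/solution_transfer.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/fe/fe.h>
  #include <deal.II/lac/petsc_parallel_vector.h>

  using namespace dealii;

  template <int dim>
  void checkpoint(parallel::distributed::Triangulation<dim> &triangulation,
                  const DoFHandler<dim>                      &dof_handler,
                  const PETScWrappers::MPI::Vector           &ghosted_solution)
  {
    // Attach the (ghosted) solution vector to the triangulation; save()
    // then writes the refinement history, the partition, and all
    // attached data in one set of files.
    parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector>
      transfer(dof_handler);
    transfer.prepare_for_serialization(ghosted_solution);
    triangulation.save("restart.mesh");
  }

  template <int dim>
  void restart(parallel::distributed::Triangulation<dim> &triangulation,
               DoFHandler<dim>                            &dof_handler,
               const FiniteElement<dim>                   &fe,
               PETScWrappers::MPI::Vector                 &distributed_solution)
  {
    // The coarse mesh must be rebuilt exactly as in the original run
    // before calling load(); load() then restores the refinement and
    // repartitions among the ranks that are now running.
    triangulation.load("restart.mesh");
    dof_handler.distribute_dofs(fe);

    // distributed_solution must already be reinit()ed on the new
    // locally owned index set (without ghost entries).
    parallel::distributed::SolutionTransfer<dim, PETScWrappers::MPI::Vector>
      transfer(dof_handler);
    transfer.deserialize(distributed_solution);
  }

Would this be the right direction?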
Thank you very much.

Alberto

*Alberto Salvadori*
Dipartimento di Ingegneria Civile, Architettura, Territorio, Ambiente e di Matematica (DICATAM)
Università di Brescia, via Branze 43, 25123 Brescia, Italy
tel 030 3711239, fax 030 3711312
e-mail: alberto.salvad...@unibs.it
web pages: http://m4lab.unibs.it/faculty.html
           http://dicata.ing.unibs.it/salvadori

On Fri, Jan 19, 2018 at 3:39 PM, Timo Heister <heis...@clemson.edu> wrote:

> > in the code and re-implemented it. In the serial version, all works
> > fine so far. However, when running in parallel, I am seeing an issue
> > in the method PlasticityContactProblem::update_solution_and_constraints.
> >
> > In particular, it turns out that the value of
> >
> >   const unsigned int index_z = dof_indices[q_point];
> >
> > might be out of the range of
>
> If you do a loop over all locally owned and locally relevant cells,
> then all dof values of a ghosted vector should exist. If you see an
> error, something else must be incorrect (like the IndexSets).
>
> > PETScWrappers::MPI::Vector lambda(this->locally_relevant_dofs,
> > this->mpi_communicator);
>
> This looks suspicious. Does this really create a ghosted vector in
> PETSc? I thought this would fail (at least in debug mode).
>
> Finally, it looks like you modified it to only look at locally owned
> cells to build constraints. The problem with this is that processors
> also need to know about constraints on ghost cells, not only locally
> owned cells. You no longer compute them, which means the solution
> might become incorrect around processor boundaries. It probably
> (hopefully?) works without adaptivity, because each locally owned DoF
> is within at least one locally owned cell, but imagine a case where a
> DoF on a ghost cell is constrained and interacts with a hanging node
> the current processor owns. You will not handle this case correctly.
>
> I don't quite remember if there is an easy way to do this, but I
> remember writing a debug function that checks whether a
> ConstraintMatrix is consistent in parallel. This was a while back,
> but I can try to find it.
>
> --
> Timo Heister
> http://www.math.clemson.edu/~heister/
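As far as I can tell, the two-IndexSet constructor is what actually creates a ghosted PETSc vector; the one-IndexSet form quoted above makes a non-ghosted vector. A minimal sketch, assuming the usual step-40-style index sets (variable names are illustrative):

  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/lac/petsc_parallel_vector.h>

  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
  IndexSet       locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

  // Non-ghosted vector: owned entries only; writable, usable in solvers.
  PETScWrappers::MPI::Vector distributed(locally_owned_dofs, mpi_communicator);

  // Ghosted vector: needs *both* index sets; the ghost entries are
  // read-only and are filled by assignment from a non-ghosted vector.
  PETScWrappers::MPI::Vector lambda(locally_owned_dofs,
                                    locally_relevant_dofs,
                                    mpi_communicator);
  lambda = distributed;

Writes go into the non-ghosted vector; the assignment then scatters the values to the ghost entries of the ghosted one.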
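On the last point about constraints on ghost cells: if the constraint object is initialized with the locally relevant index set, DoFTools::make_hanging_node_constraints() should, as far as I understand, visit ghost cells as well, and any hand-built constraints (the contact constraints here) need the same coverage. A sketch of that pattern (ConstraintMatrix became AffineConstraints in later deal.II releases):

  #include <deal.II/lac/constraint_matrix.h>

  ConstraintMatrix constraints;
  constraints.reinit(locally_relevant_dofs);
  DoFTools::make_hanging_node_constraints(dof_handler, constraints);
  // ...contact/Dirichlet constraints go here, computed on ghost cells
  // too, so every process knows all constraints it may encounter...
  constraints.close();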