Hello,

I am developing distributed code along the lines of step-40, with adaptive
mesh refinement as in step-42.
I have been using ghosted Trilinos MPI vectors such as

  LA::MPI::Vector solution_vel_n;


initialized by

solution_vel_n.reinit (locally_owned_dofs_vel,
                       locally_relevant_dofs_vel,
                       MPI_COMM_WORLD);

I was able to assign to them the output of the solve procedures in the
following manner (as in step-40,
https://www.dealii.org/8.5.0/doxygen/deal.II/step_40.html#LaplaceProblemsolve):

    solution_vel_n = completely_distributed_solution;

where

    LA::MPI::Vector completely_distributed_solution (locally_owned_dofs_vel,
                                                     MPI_COMM_WORLD);

    GMRES_solver ();

    constraint_matrix_vel.distribute (completely_distributed_solution);
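
Putting those pieces together, one iteration of my solve essentially looks
like this (just a sketch with my variable names, following the step-40
pattern):

    // non-ghosted vector that the solver writes into
    LA::MPI::Vector completely_distributed_solution (locally_owned_dofs_vel,
                                                     MPI_COMM_WORLD);

    // ... assemble the system and run the GMRES solve, writing the result
    //     into completely_distributed_solution ...

    // apply the constraints to the locally owned entries
    constraint_matrix_vel.distribute (completely_distributed_solution);

    // copy into the ghosted vector; this also updates the ghost values
    solution_vel_n = completely_distributed_solution;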



I am running a fixed point iteration. It converges on meshes without
adaptive refinement; after adaptive refinement, however, the convergence of
the residual stalls.

I have therefore tried to mimic step-42 as closely as possible
(https://www.dealii.org/8.5.0/doxygen/deal.II/step_42.html#PlasticityContactProblemrefine_grid).
The relevant piece of step-42 code is:

if (transfer_solution)
  {
    TrilinosWrappers::MPI::Vector distributed_solution (locally_owned_dofs,
                                                        mpi_communicator);
    solution_transfer.interpolate (distributed_solution);

    constraints_hanging_nodes.distribute (distributed_solution);
    solution = distributed_solution;
    ...

where solution is a TrilinosWrappers::MPI::Vector initialized by

    solution.reinit (locally_relevant_dofs, mpi_communicator);
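
For context, the index sets used there come from the usual step-40/step-42
setup, roughly (a sketch, not my exact code):

    // standard way to obtain the owned / relevant index sets
    IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs ();
    IndexSet locally_relevant_dofs;
    DoFTools::extract_locally_relevant_dofs (dof_handler,
                                             locally_relevant_dofs);

    // ghosted vector, built on the (overlapping) relevant set
    solution.reinit (locally_relevant_dofs, mpi_communicator);

    // non-ghosted vector, built on the owned set only
    TrilinosWrappers::MPI::Vector distributed_solution (locally_owned_dofs,
                                                        mpi_communicator);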

My own refinement-related code:

parallel::distributed::SolutionTransfer<dim, LA::MPI::Vector>
  sol_trans_vel (dof_handler_vel);

sol_trans_vel.prepare_for_coarsening_and_refinement (solution_vel_n);

triang.execute_coarsening_and_refinement ();

// here I re-distribute dofs, resize the vector solution_vel_n, the matrix
// and the rhs, apply bc, and rebuild the constraint matrix
nsSystem.setupSystem (solution_vel_n);

LA::MPI::Vector distributed_solution_vel (locally_owned_dofs_vel,
                                          MPI_COMM_WORLD);

sol_trans_vel.interpolate (distributed_solution_vel);

constraint_matrix_vel.distribute (distributed_solution_vel);

solution_vel_n_helper.reinit (locally_relevant_dofs_vel,
                              MPI_COMM_WORLD);
solution_vel_n_helper = distributed_solution_vel;

solution_vel_n = solution_vel_n_helper;
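
(To spell out what setupSystem does, here is a simplified sketch of my own
routine; names like fe_vel are placeholders and the real code does more:)

    dof_handler_vel.distribute_dofs (fe_vel);

    locally_owned_dofs_vel = dof_handler_vel.locally_owned_dofs ();
    DoFTools::extract_locally_relevant_dofs (dof_handler_vel,
                                             locally_relevant_dofs_vel);

    // ghosted solution vector, resized to the new dof distribution
    solution_vel_n.reinit (locally_owned_dofs_vel,
                           locally_relevant_dofs_vel,
                           MPI_COMM_WORLD);

    // rebuild the constraint matrix (hanging nodes + bc)
    constraint_matrix_vel.clear ();
    constraint_matrix_vel.reinit (locally_relevant_dofs_vel);
    DoFTools::make_hanging_node_constraints (dof_handler_vel,
                                             constraint_matrix_vel);
    // ... boundary conditions ...
    constraint_matrix_vel.close ();

    // ... sparsity pattern, system matrix and rhs are re-initialized ...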

I tried to do the same in my code; however, almost all of my data exchange
happens through ghosted TrilinosWrappers::MPI::Vector objects, from which I
sometimes need to compute an l2_norm.
I tried this (it works in the case without refinement):

    parallel::distributed::Vector<double> vel (locally_owned_dofs_vel,
                                               locally_relevant_dofs_vel,
                                               MPI_COMM_WORLD);
    vel = crate.solution_vel_n;
    this->pcout << "    NORM tentative velocity  :" << vel.l2_norm()
                << std::endl;
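
(One alternative I am considering, just as a sketch with my own variable
names: take the norm from the non-ghosted vector before it is copied into
the ghosted one, since the locally owned entries are the same. I am not sure
this is the intended approach.)

    LA::MPI::Vector distributed_vel (locally_owned_dofs_vel, MPI_COMM_WORLD);
    // ... distributed_vel filled by the solver or by SolutionTransfer ...
    constraint_matrix_vel.distribute (distributed_vel);

    // norms are well defined on the non-ghosted vector
    this->pcout << "    NORM tentative velocity  :"
                << distributed_vel.l2_norm () << std::endl;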

I got this error

--------------------------------------------------------
An error occurred in line <1099> of file
</home/mcapek/candis/candi_8_5/tmp/unpack/deal.II-v8.5.0/include/deal.II/lac/trilinos_vector_base.h>
in function
    dealii::IndexSet dealii::TrilinosWrappers::VectorBase::locally_owned_elements() const
The violated condition was:
    owned_elements.size()==size()
Additional information:
    The locally owned elements have not been properly initialized! This
    happens for example if this object has been initialized with exactly one
    overlapping IndexSet.

Stacktrace:
-----------
#0  /home/mcapek/candis/candi_8_5/deal.II-v8.5.0/lib/libdeal_II.g.so.8.5.0:
    dealii::TrilinosWrappers::VectorBase::locally_owned_elements() const
#1  /home/mcapek/candis/candi_8_5/deal.II-v8.5.0/lib/libdeal_II.g.so.8.5.0:
    dealii::LinearAlgebra::ReadWriteVector<double>::import(dealii::TrilinosWrappers::MPI::Vector const&, dealii::VectorOperation::values, std::shared_ptr<dealii::LinearAlgebra::CommunicationPatternBase const>)
#2  /home/mcapek/candis/candi_8_5/deal.II-v8.5.0/lib/libdeal_II.g.so.8.5.0:
    dealii::LinearAlgebra::distributed::Vector<double>::operator=(dealii::TrilinosWrappers::MPI::Vector const&)
#3  ./main: NSSystem<3>::assemble_and_solve_system(SolutionCrate&, SolutionCrate&, double)
#4  ./main: NSSystem<3>::compute_solution(SolutionCrate&, SolutionCrate&, double, double)
#5  ./main: NSSystem<3>::compute_solution_get_dt(SolutionCrate&, SolutionCrate&, double, double)
#6  ./main: Main<3>::run()
#7  ./main: main
--------------------------------



Could you please tell me how to compute the norm of a ghosted vector?
Maybe I did some incorrect initialization in the refinement procedure.
However, when I read values from the already interpolated vector during
assembly (via get_function_values()), deal.II does not complain.
Could you perhaps recommend an alternative to exchanging data through
ghosted vectors?


Thank you,


MareK Capek
