Re: [deal.II] Artifacts at the refinement level boundary for Step 48

2017-09-08 Thread Stephen DeWitt
Dear Martin, Thank you for your quick reply. I tried what you suggested and set the initial condition by computing a right-hand side; it lowered the difference between the initial condition and the first time step by about a factor of 5. So it helped, but didn't make a qualitative change. W
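
For reference, a minimal sketch of the two ways of setting an initial condition that are being compared here: plain interpolation of nodal values versus an L2 projection, which assembles a right-hand side b_i = (phi_i, f) and solves with the consistent mass matrix. All names are hypothetical, the serial overloads are shown (the parallel/matrix-free overloads differ), and AffineConstraints was still called ConstraintMatrix in the 2017 releases.

#include <deal.II/base/function.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/vector_tools.h>

// Sketch: set an initial condition either by interpolation (nodal values
// only) or by L2 projection through a right-hand side and mass-matrix solve.
template <int dim>
void set_initial_condition(const dealii::DoFHandler<dim>           &dof_handler,
                           const dealii::AffineConstraints<double> &constraints,
                           const dealii::Function<dim>             &initial_value,
                           dealii::Vector<double>                  &solution,
                           const bool                               use_projection)
{
  if (use_projection)
    dealii::VectorTools::project(
      dof_handler,
      constraints,
      dealii::QGauss<dim>(dof_handler.get_fe().degree + 2),
      initial_value,
      solution);
  else
    dealii::VectorTools::interpolate(dof_handler, initial_value, solution);
}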

Re: [deal.II] Artifacts at the refinement level boundary for Step 48

2017-09-08 Thread Martin Kronbichler
Dear Stephen, At hanging nodes, there is definitely going to be a larger error due to the approximation of the diagonal mass matrix. I do not remember the exact details, but to get a diagonal mass matrix you need to assume an interpolation in addition to the approximation leading to the mass l
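
For reference, a sketch of how step-48 builds that diagonal: each entry is the row sum of the consistent mass matrix, obtained by integrating the constant 1 against every test function, and distribute_local_to_global() resolves the hanging-node constraints, which is where the additional approximation enters. The code follows step-48 but is written against the current FEEvaluation interface with hypothetical names; treat it as a sketch, not the tutorial's exact code.

#include <deal.II/lac/la_parallel_vector.h>
#include <deal.II/matrix_free/fe_evaluation.h>
#include <deal.II/matrix_free/matrix_free.h>

// Sketch: lumped (diagonal) mass matrix and its inverse, step-48 style.
template <int dim, int fe_degree>
void compute_inverse_lumped_mass(
  const dealii::MatrixFree<dim, double>              &data,
  dealii::LinearAlgebra::distributed::Vector<double> &inv_mass_matrix)
{
  data.initialize_dof_vector(inv_mass_matrix);

  dealii::FEEvaluation<dim, fe_degree> fe_eval(data);
  for (unsigned int cell = 0; cell < data.n_cell_batches(); ++cell)
    {
      fe_eval.reinit(cell);
      // Integrate 1 against all test functions: this gives the row sums
      // of the mass matrix, i.e. the lumped diagonal.
      for (unsigned int q = 0; q < fe_eval.n_q_points; ++q)
        fe_eval.submit_value(dealii::make_vectorized_array(1.), q);
      fe_eval.integrate(dealii::EvaluationFlags::values);
      // Constrained (hanging-node) rows are resolved here, which is the
      // interpolation/approximation step mentioned above.
      fe_eval.distribute_local_to_global(inv_mass_matrix);
    }
  inv_mass_matrix.compress(dealii::VectorOperation::add);

  // Invert entry by entry; (near-)zero entries stay zero.
  for (unsigned int k = 0; k < inv_mass_matrix.locally_owned_size(); ++k)
    inv_mass_matrix.local_element(k) =
      (inv_mass_matrix.local_element(k) > 1e-15) ?
        1. / inv_mass_matrix.local_element(k) :
        0.;
}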

Re: [deal.II] Output nodal scalar

2017-09-08 Thread Wolfgang Bangerth
On 09/07/2017 08:32 AM, Jie Cheng wrote: Thanks for explaining it. Now I got both StrainPostprocessor and StressPostprocessor working! But it seems a little wasteful to have two separate classes because we are not reusing the gradient of displacement computed in StrainPostprocessor. Although
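
One way to avoid evaluating the displacement gradient twice is a single postprocessor that derives both quantities from the same solution_gradients array. Below is a hedged sketch, assuming a dim-component displacement field and isotropic Lamé parameters lambda and mu; the class name and all parameters are hypothetical and not the classes from the thread.

#include <deal.II/base/symmetric_tensor.h>
#include <deal.II/fe/fe_update_flags.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/data_component_interpretation.h>
#include <deal.II/numerics/data_postprocessor.h>
#include <string>
#include <vector>

using namespace dealii;

// Sketch: compute strain and stress together so the displacement gradient
// is evaluated only once per output point.
template <int dim>
class StrainStressPostprocessor : public DataPostprocessor<dim>
{
public:
  StrainStressPostprocessor(const double lambda, const double mu)
    : lambda(lambda), mu(mu) {}

  virtual void evaluate_vector_field(
    const DataPostprocessorInputs::Vector<dim> &inputs,
    std::vector<Vector<double>> &computed_quantities) const override
  {
    for (unsigned int p = 0; p < inputs.solution_gradients.size(); ++p)
      {
        // Symmetrized displacement gradient.
        SymmetricTensor<2, dim> strain;
        for (unsigned int i = 0; i < dim; ++i)
          for (unsigned int j = i; j < dim; ++j)
            strain[i][j] = 0.5 * (inputs.solution_gradients[p][i][j] +
                                  inputs.solution_gradients[p][j][i]);

        // Linear isotropic elasticity.
        const SymmetricTensor<2, dim> stress =
          lambda * trace(strain) * unit_symmetric_tensor<dim>() +
          2. * mu * strain;

        // First dim*dim components: strain; next dim*dim components: stress.
        for (unsigned int i = 0; i < dim; ++i)
          for (unsigned int j = 0; j < dim; ++j)
            {
              computed_quantities[p](i * dim + j)             = strain[i][j];
              computed_quantities[p](dim * dim + i * dim + j) = stress[i][j];
            }
      }
  }

  virtual std::vector<std::string> get_names() const override
  {
    std::vector<std::string> names;
    for (unsigned int i = 0; i < dim; ++i)
      for (unsigned int j = 0; j < dim; ++j)
        names.push_back("strain_" + std::to_string(i + 1) + std::to_string(j + 1));
    for (unsigned int i = 0; i < dim; ++i)
      for (unsigned int j = 0; j < dim; ++j)
        names.push_back("stress_" + std::to_string(i + 1) + std::to_string(j + 1));
    return names;
  }

  virtual std::vector<DataComponentInterpretation::DataComponentInterpretation>
  get_data_component_interpretation() const override
  {
    return std::vector<DataComponentInterpretation::DataComponentInterpretation>(
      2 * dim * dim, DataComponentInterpretation::component_is_scalar);
  }

  virtual UpdateFlags get_needed_update_flags() const override
  {
    return update_gradients;
  }

private:
  const double lambda, mu;
};

Such a combined object would be handed to DataOut::add_data_vector once, in place of the two separate postprocessors.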

[deal.II] Artifacts at the refinement level boundary for Step 48

2017-09-08 Thread Stephen DeWitt
Hello, I've been working on a phase-field code using the matrix-free approach from Step 48. When I use adaptive meshing, I notice some artifacts at the boundary between levels of refinement, and I'm trying to understand their origin. For example, if I make the following modification to the "loca

Re: [deal.II] Re: Using LinearAlgebraTrilinos::MPI::Vector.l2_norm leads to an MPI_error for multiple nodes

2017-09-08 Thread 'Maxi Miller' via deal.II User Group
Now it fails at the line evaluation_point.add(alpha, newton_update); with the error message An error occurred in line <1927> of file <…> in function void dealii::TrilinosWrappers::MPI::Vector::add(dealii::TrilinosScalar, const dealii::Tr

Re: [deal.II] Re: Using LinearAlgebraTrilinos::MPI::Vector.l2_norm leads to an MPI_error for multiple nodes

2017-09-08 Thread Bruno Turcksin
2017-09-08 8:35 GMT-04:00 'Maxi Miller' via deal.II User Group: > Nevertheless, it did not fix the problem... This won't fix anything. I am just surprised that your code gets as far as the l2_norm(); I think that it should crash earlier. What happens if you comment out these lines: for (unsigned int i

[deal.II] Re: Using LinearAlgebraTrilinos::MPI::Vector.l2_norm leads to an MPI_error for multiple nodes

2017-09-08 Thread 'Maxi Miller' via deal.II User Group
Running in debug mode -> make debug, or using deal.II in Debug configuration? If the former, it did not show anything for that. If the latter, I thought I compiled deal.II both in Release and Debug (ReleaseDebug in CMake). Nevertheless, it did not fix the problem... Thanks! On Friday, September 8

[deal.II] Re: Using LinearAlgebraTrilinos::MPI::Vector.l2_norm leads to an MPI_error for multiple nodes

2017-09-08 Thread Bruno Turcksin
Hi, On Thursday, September 7, 2017 at 5:01:10 PM UTC-4, Maxi Miller wrote: > for (unsigned int i=0; i<…; ++i) > if (boundary_dofs[i] == true) > residual(i) = 0; This looks wrong. It shouldn't be residual(i) = 0; because there is only one residual(0). This code should throw in debug mode; are you
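
As an aside, a hedged sketch of how the quoted loop could be written so that it only touches entries the current MPI rank actually owns. Here boundary_dofs is assumed to be an IndexSet (for example from DoFTools::extract_boundary_dofs) rather than the std::vector<bool> in the quoted code.

#include <deal.II/base/index_set.h>
#include <deal.II/lac/trilinos_vector.h>

// Sketch: zero the residual at boundary DoFs on a distributed Trilinos
// vector, writing only to locally owned entries.
void zero_boundary_residual(const dealii::IndexSet                &boundary_dofs,
                            dealii::TrilinosWrappers::MPI::Vector &residual)
{
  for (const dealii::types::global_dof_index i :
       residual.locally_owned_elements())
    if (boundary_dofs.is_element(i))
      residual(i) = 0;
  residual.compress(dealii::VectorOperation::insert);
}

An AffineConstraints/ConstraintMatrix object that already contains the boundary constraints, together with its set_zero() member, is another common way to do the same thing.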

[deal.II] Re: Using LinearAlgebraTrilinos::MPI::Vector.l2_norm leads to an MPI_error for multiple nodes

2017-09-08 Thread 'Maxi Miller' via deal.II User Group
In addition: it fails when declaring the vector residual as a global vector via LinearAlgebraTrilinos::MPI::Vector residual; IndexSet solution_relevant_partitioning(dof_handler.n_dofs()); DoFTools::extract_locally_relevant_dofs(dof_handler, solution_relevant_partitioning); residual.reinit(soluti
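
For completeness, the usual split in deal.II parallel codes: vectors that are written to (such as the residual) are reinit'ed on the locally owned DoFs only, while a separate ghosted vector on the locally relevant DoFs is kept read-only for evaluating the solution on ghost cells. A hedged sketch with hypothetical names, written against the underlying TrilinosWrappers::MPI::Vector type:

#include <deal.II/base/index_set.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/lac/trilinos_vector.h>

// Sketch: writable vectors live on the locally owned index set; ghosted
// vectors built on the locally relevant index set are read-only.
template <int dim>
void setup_vectors(const dealii::DoFHandler<dim>         &dof_handler,
                   const MPI_Comm                         mpi_communicator,
                   dealii::TrilinosWrappers::MPI::Vector &residual,
                   dealii::TrilinosWrappers::MPI::Vector &ghosted_solution)
{
  const dealii::IndexSet &locally_owned_dofs = dof_handler.locally_owned_dofs();
  dealii::IndexSet        locally_relevant_dofs;
  dealii::DoFTools::extract_locally_relevant_dofs(dof_handler,
                                                  locally_relevant_dofs);

  // Writable, non-ghosted vector (this is what residual(i) = ... needs).
  residual.reinit(locally_owned_dofs, mpi_communicator);

  // Read-only ghosted vector, filled by assignment from a writable one.
  ghosted_solution.reinit(locally_owned_dofs,
                          locally_relevant_dofs,
                          mpi_communicator);
}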