Jonathan,
You may have figured this out already -- you might have forgotten to add
hanging_node_constraints.distribute (localized_solution);
after you solve your system (see step-17 for example).
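In step-17-style code this would roughly look as follows (a sketch only;
'cg', 'system_matrix', 'solution' and 'system_rhs' are placeholders for
whatever your program uses, and 'solution' is assumed to be a
PETScWrappers::MPI::Vector):

  // Solve the distributed system first.
  cg.solve(system_matrix, solution, system_rhs, preconditioner);

  // Copy the distributed solution into a vector that stores all entries
  // on every processor ...
  Vector<double> localized_solution(solution);

  // ... and only then resolve the hanging-node constraints on that copy:
  hanging_node_constraints.distribute(localized_solution);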
Artur
Jonathan,
Sorry, you're right -- the distribute() function is not the culprit here.
I remember having the exact same issue a while ago, but don't quite
remember what caused it. Can you post your program here?
Artur
Jonathan,
My understanding of the situation is this: whenever you output a solution
in parallel, each processor loops over all the cells that it owns and
stores the associated DoFs. On a cell that a processor owns, it is possible
that not all the DoFs will belong to said processor. So to output correctly,
each processor also needs the values of DoFs it does not own, which is why
the solution is first copied into a fully localized (or ghosted) vector.
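Roughly, what I mean is something like this (sketch only; 'solution' is
assumed to be a PETScWrappers::MPI::Vector, 'dof_handler' your DoFHandler,
and 'dim' the space dimension of your program):

  // Every processor copies the distributed solution into a vector that
  // holds all entries locally before handing it to DataOut.
  const Vector<double> localized_solution(solution);

  DataOut<dim> data_out;
  data_out.attach_dof_handler(dof_handler);
  data_out.add_data_vector(localized_solution, "solution");
  data_out.build_patches();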
Hello,
I am working on solving a Helmholtz-type problem, where I am using the PML
method to simulate absorbing boundary conditions. Originally (before adding
PML) I was using the complex GAMG preconditioner from the PETSc package,
which was working fine. But now I am having a difficulty where t
Wolfgang,
I have convergence to the correct solution for very coarse meshes (I have
an analytic free-space solution that I can compare to), and I have also
been able to solve the problem with a direct solver. The issue of
non-convergence crops up when I use a finer mesh, and as a result, have a
Zhenlin,
I remember having trouble getting deal.II to compile against the available
versions of Trilinos. In the end, I just ended up installing both myself.
If you want, I can post my installation steps.
Rgds,
Artur
Zhenlin,
Before installation, I had loaded modules intel/15.0.2 and cxx11, and was
using mvapich2/2.1.
For Trilinos, here is the cmake command:
cmake \
-D Trilinos_ENABLE_CXX11=ON \
-D Trilinos_ENABLE_Sacado=ON \
-D Trilinos_ENABLE_MueLu:BOOL=ON \
-D Trilinos_ENABLE_Stratimikos=ON \
-D CMAKE_BU
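For the deal.II configure step itself, something along these lines should
then work (the paths are placeholders for wherever you installed the
libraries; DEAL_II_WITH_TRILINOS, TRILINOS_DIR, DEAL_II_WITH_PETSC and
PETSC_DIR are the documented deal.II cmake options):

  cmake \
  -D CMAKE_INSTALL_PREFIX=/path/to/dealii-install \
  -D DEAL_II_WITH_MPI=ON \
  -D DEAL_II_WITH_TRILINOS=ON -D TRILINOS_DIR=/path/to/trilinos-install \
  -D DEAL_II_WITH_PETSC=ON -D PETSC_DIR=/path/to/petsc \
  ../dealii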
Zhenlin,
The PETSc configuration I posted was for complex numbers, which is enabled
with --with-scalar-type=complex. If you have not removed this option, then
it makes sense that you get the error: the DataOut classes do not yet
support complex numbers (https://github.com/dealii/dealii/issues
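If you do want to keep complex-valued PETSc, one possible workaround (just
a sketch; I'm assuming 'localized_solution' is a Vector<std::complex<double>>
that already holds all solution values on this processor) is to split the
solution into real and imaginary parts and hand those to DataOut as two
real fields:

  // With --with-scalar-type=complex, PetscScalar is std::complex<double>,
  // so copy the solution into two real vectors for output.
  Vector<double> re(localized_solution.size());
  Vector<double> im(localized_solution.size());
  for (unsigned int i = 0; i < localized_solution.size(); ++i)
    {
      re(i) = localized_solution(i).real();
      im(i) = localized_solution(i).imag();
    }
  data_out.add_data_vector(re, "solution_re");
  data_out.add_data_vector(im, "solution_im");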
Hello,
I am implementing a sequence of two simulations using a global mesh that
contains both a fluid and a structure. My first simulation (elasticity
equation) involves only the structure subdomain, and so I intend to run it
on the structure submesh (by extracting it from the global mesh using
Daniel, Wolfgang,
Thank you for the answers!
Out of curiosity, there is support for saving and loading refinement flags
using *save_refine_flags()* and *load_refine_flags()* in the Triangulation
class. Am I correct to assume that these functions are not designed to work
with a p::d::Triangulation?
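For reference, the (serial) usage I had in mind is along these lines
(sketch only; 'triangulation' is a plain Triangulation object):

  // Record the current refine flags ...
  std::vector<bool> flags;
  triangulation.save_refine_flags(flags);

  // ... and later replay them on a triangulation that is in the same
  // pre-refinement state:
  triangulation.load_refine_flags(flags);
  triangulation.execute_coarsening_and_refinement();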
Dear community,
In the near future, I will need to solve a curl-curl problem ∇×(∇×u)=u. I
will have to use an unstructured grid (gmsh-generated) that is also
adaptively refined. From what I can tell, the appropriate class for such
problems (FE_Nedelec) is incomplete, and so I thought I'd ask as to
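For concreteness, the kind of setup I have in mind would be roughly the
following (a sketch only; the mesh file name, dimension and element degree
are placeholders):

  #include <deal.II/grid/tria.h>
  #include <deal.II/grid/grid_in.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/fe/fe_nedelec.h>
  #include <fstream>

  int main()
  {
    using namespace dealii;

    // Read the gmsh-generated mesh.
    Triangulation<3> triangulation;
    GridIn<3>        grid_in;
    grid_in.attach_triangulation(triangulation);
    std::ifstream mesh_file("mesh.msh");
    grid_in.read_msh(mesh_file);

    // Distribute lowest-order Nedelec (edge element) DoFs on it.
    FE_Nedelec<3> fe(0);
    DoFHandler<3> dof_handler(triangulation);
    dof_handler.distribute_dofs(fe);
  }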
Dear dealii community,
I have a fluid-structure interaction type of problem to solve, where on one
domain I have the linearized compressible Navier-Stokes equations, and on
the other, the equation of motion (elastic deformation). The PDEs that I have
are time-harmonic, so there is no time dependence