On 9/9/19 1:57 AM, Richard Schussnig wrote:
>
> FINALLY, MY QUESTIONS:
>
> Using the Q1Q1 pair, I would in the end (for FSI) need to come up with a space
> made from Q1 elements with a discontinuity at the interface, which shall be
> realized using different material_id()s. How may I do that other than
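For reference, one common way to get Q1 fields that decouple at a material
interface is the FE_Nothing pattern of step-46: build an hp::FECollection whose
two elements each carry FE_Q(1) on one subdomain and FE_Nothing on the other,
and pick the active FE index from material_id(). A minimal sketch (all names
here are illustrative, not taken from the original post):

#include <deal.II/fe/fe_nothing.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <deal.II/grid/tria.h>
#include <deal.II/hp/dof_handler.h>
#include <deal.II/hp/fe_collection.h>

template <int dim>
void setup_interface_discontinuous_q1(dealii::Triangulation<dim> &triangulation)
{
  using namespace dealii;

  // Field 0 lives only on material_id 0, field 1 only on material_id 1;
  // both FESystems have two components, so they fit into one collection.
  hp::FECollection<dim> fe_collection;
  fe_collection.push_back(FESystem<dim>(FE_Q<dim>(1), 1, FE_Nothing<dim>(), 1));
  fe_collection.push_back(FESystem<dim>(FE_Nothing<dim>(), 1, FE_Q<dim>(1), 1));

  hp::DoFHandler<dim> dof_handler(triangulation);

  // Choose the element on each cell from its material id, so the two Q1
  // fields are independent and therefore discontinuous across the interface.
  for (const auto &cell : dof_handler.active_cell_iterators())
    cell->set_active_fe_index(cell->material_id() == 0 ? 0 : 1);

  dof_handler.distribute_dofs(fe_collection);
}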
On 9/9/19 3:10 PM, Konrad wrote:
>
> Thanks a lot, Wolfgang! The error message is actually quite clear. What
> I found out in addition is that it is advantageous and helpful to read
> the documentation ... ;-)
Yes, no doubt :-) We're spending a lot of time writing it ;-)
Cheers
W.
>
> Yes, this is indeed the case, and is what the error message was trying to
> suggest. If you think that the error message could have been clearer, feel
> free to propose a better wording and we'll include it for the next
> release!
>
Thanks a lot, Wolfgang! The error message is actually quite clear.
> In solving a Laplace-Beltrami problem on an advecting surface, one should
> identify the Gauss points on the manifold (dim-1) cells and retrieve, at
> those locations, the relevant information from the solution of the advection
> problem, using the dim-dimensional DoFHandler of the volume. I wonder how to
> connect the manifold cells to the volume cells.
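One possible way to do this evaluation, sketched under the assumption that the
advection solution is a plain Vector<double> on a DoFHandler<dim> (the function
and variable names below are made up), is to wrap the volume field in
Functions::FEFieldFunction and evaluate it at the real-space quadrature points
of the surface cells:

#include <deal.II/base/quadrature.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/lac/vector.h>
#include <deal.II/numerics/fe_field_function.h>

template <int dim>
void sample_volume_field_on_surface(
  const dealii::DoFHandler<dim>          &volume_dof_handler,
  const dealii::Vector<double>           &volume_solution,
  const dealii::DoFHandler<dim - 1, dim> &surface_dof_handler,
  const dealii::Quadrature<dim - 1>      &surface_quadrature)
{
  using namespace dealii;

  // Evaluates the volume finite element field at arbitrary points in real space.
  Functions::FEFieldFunction<dim> volume_field(volume_dof_handler, volume_solution);

  FEValues<dim - 1, dim> fe_values(surface_dof_handler.get_fe(),
                                   surface_quadrature,
                                   update_quadrature_points);

  for (const auto &cell : surface_dof_handler.active_cell_iterators())
    {
      fe_values.reinit(cell);
      for (const Point<dim> &p : fe_values.get_quadrature_points())
        {
          // Advection solution evaluated at this surface quadrature point.
          const double advected_value = volume_field.value(p);
          (void)advected_value; // ... use it in the Laplace-Beltrami assembly ...
        }
    }
}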
> I am trying to instantiate a Vector with an AD type such as Vector<
> Sacado::Fad::DFad > by changing
>
> for (SCALAR : REAL_AND_COMPLEX_SCALARS) to for (SCALAR : ALL_SCALAR_TYPES)
>
> in the instantiation file
>
> dealii/source/lac/la_parallel_vector.inst.in
This might work, but the schem
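For context, the declaration the poster is ultimately after would look roughly
like the following; this presupposes the modified instantiation list described
above (and possibly further changes), so it is only an illustration:

#include <deal.II/lac/la_parallel_vector.h>
#include <Sacado.hpp>

using ADNumber = Sacado::Fad::DFad<double>;

int main()
{
  // A distributed deal.II vector templated on a Sacado forward-AD number;
  // without the extra instantiation this line will not compile/link.
  dealii::LinearAlgebra::distributed::Vector<ADNumber> v(10);
  return 0;
}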
On 9/9/19 2:47 AM, Konrad wrote:
> OK, just to answer the question: If you are running with PETSc you should not
> call
>
> dealii::Utilities::MPI::MPI_InitFinalize
> mpi_initialization(argc, argv, dealii::numbers::invalid_unsigned_int);
>
> since this will invoke threading if you start a number
Dear community,
in case it is of use, I attempted to write a small code that seems to solve the
issue of linking manifold cells to volume cells.
First, two maps are defined:
// maps
// the map surface_to_volumefaces_mapping contains the outcome of the
// method extract_boundary_mesh and conn
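For anyone searching later, a map of this kind can be obtained directly from
the return value of GridGenerator::extract_boundary_mesh; a small sketch
(function and variable names are invented):

#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>

template <int dim>
void build_surface_to_volume_map(dealii::Triangulation<dim> &volume_mesh)
{
  using namespace dealii;

  Triangulation<dim - 1, dim> surface_mesh;

  // extract_boundary_mesh fills the surface mesh and returns a map that
  // links each cell of the surface mesh to the volume face it came from.
  const auto surface_to_volumefaces_mapping =
    GridGenerator::extract_boundary_mesh(volume_mesh, surface_mesh);

  for (const auto &link : surface_to_volumefaces_mapping)
    {
      const auto &surface_cell = link.first;  // cell of the codim-1 mesh
      const auto &volume_face  = link.second; // corresponding volume face
      (void)surface_cell;
      (void)volume_face;
    }
}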
OK, just to answer the question: If you are running with PETSc you should
not call
dealii::Utilities::MPI::MPI_InitFinalize
mpi_initialization(argc, argv, dealii::numbers::invalid_unsigned_int);
since this will invoke threading if you start a number of MPI processes
that is not equal to the number
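In other words (a minimal sketch of the usual main() of the PETSc-based
tutorial programs), pass 1 as the last argument so that each MPI process uses
only a single thread:

#include <deal.II/base/mpi.h>

int main(int argc, char *argv[])
{
  // Limit every MPI process to one thread; this is the safe choice when
  // the linear algebra is handled by PETSc.
  dealii::Utilities::MPI::MPI_InitFinalize mpi_initialization(argc, argv, 1);

  // ... rest of the program ...
  return 0;
}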
Hi everyone!
I am trying to implement the stabilizations presented in a paper by Bochev
et al. [2006],
which you may find here:
https://pdfs.semanticscholar.org/47be/4e317d4dcbbf1b70c781394e49c1dbf7e538.pdf
This one is parameter-free, and they present local projections for both
Q1Q1 and Q1Q0
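If I remember that paper correctly, the equal-order (Q1Q1) variant adds the
parameter-free pressure term

  C(p_h, q_h) = \sum_K \int_K (p_h - \Pi_0 p_h)(q_h - \Pi_0 q_h) dx,

where \Pi_0 is the element-wise L^2 projection onto constants.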
Hi Konrad!
You can use a parallel direct solver in the Schur complement; for
orientation, take a look at step-57 (that should be Navier-Stokes with a
direct solver for the A-block, if I'm not mistaken).
However, my limited C++ knowledge did not allow me to do the factorization
in the constructor of
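The "factorize once in the constructor" idea could look roughly like the
sketch below; class and member names are invented, and the serial
SparseDirectUMFPACK stands in for whatever parallel direct solver is actually
used:

#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

class SchurComplement
{
public:
  SchurComplement(const dealii::SparseMatrix<double> &A,
                  const dealii::SparseMatrix<double> &B)
    : B(B)
  {
    // Factorize A once, up front, so that every vmult() afterwards only
    // needs forward/backward substitutions.
    A_inverse.initialize(A);
  }

  // dst = B A^{-1} B^T src  (action of the Schur complement)
  void vmult(dealii::Vector<double> &dst, const dealii::Vector<double> &src) const
  {
    dealii::Vector<double> tmp1(B.n()), tmp2(B.n());
    B.Tvmult(tmp1, src);         // tmp1 = B^T src
    A_inverse.vmult(tmp2, tmp1); // tmp2 = A^{-1} tmp1
    B.vmult(dst, tmp2);          // dst  = B tmp2
  }

private:
  dealii::SparseDirectUMFPACK         A_inverse;
  const dealii::SparseMatrix<double> &B;
};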