Hi again,
I took a look at the modified MWE you sent -- immediately noticing that you
read through the whole thing,
which is amazing, considering that this is a Google group!
So, thanks again for your time and effort, it is really appreciated!
I have been busy doing the following with our littl
Hi Wolfgang,
thanks a ton for taking a look at my silly example!
- It seems like I am now finally really at a point where an update is
inevitable! ; )
... maybe just rewriting my codes to fit a newer version would have saved
me loads of time in the long run,
but I am happy that remains a mystery
h."
)
ENDIF()
DEAL_II_INITIALIZE_CACHED_VARIABLES()
PROJECT(${TARGET})
DEAL_II_INVOKE_AUTOPILOT()
/*
This code is licensed under the "GNU GPL version 2 or later". See
license.txt or https://www.gnu.org/licenses/gpl-2.0.html
Copyright 2019: Richard Schussnig
*/
// Include
>
> Hi again,
it seems my last comment got lost, but I wanted to post it here for others
researching that issue:
I constructed a MWE and found out that actually solving the system via
GMRES, CG or BiCGstab
using any of the implementations provided by PETSc, Trilinos or the dealii
versions th
Hi again,
Great to hear that you were able to construct a minimal working example &
pinpoint the error location;
that is already of great help, but please do share the MWE you have
constructed!
I can also confirm that the behaviour described in the previous posts
does not(!) occur when runni
Hi Alberto,
I might be having a similar or even the same problem with PETSc! In my case,
the memory accumulated is proportional to the number of iterations done in the
SolverFGMRES solver. Also, when using Trilinos (for switching between PETSc and
Trilinos, see step-40, I believe), this does not(!) happe
Hi Chen,
I successfully imported gmsh-generated meshes using the .msh file format once:
Try to set the physical ids in gmsh to get the right boundary_id and
material_id in deal.II, then export as version 2 ASCII .msh and untick the
box [export all elements]. This was working with deal.II v9.0.1.
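A minimal sketch of the reading part (the file name "mesh.msh" and the helper name are just placeholders):

#include <deal.II/grid/grid_in.h>
#include <deal.II/grid/tria.h>
#include <fstream>

using namespace dealii;

// Read a gmsh "version 2 ASCII" mesh; the physical ids set in gmsh become
// boundary_id / material_id on the resulting triangulation.
template <int dim>
void read_gmsh_mesh(Triangulation<dim> &triangulation)
{
  GridIn<dim> grid_in;
  grid_in.attach_triangulation(triangulation);
  std::ifstream input("mesh.msh");
  grid_in.read_msh(input);
}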
Hi David,
Great to hear that! I am a fan of open-source software development in our
field and appreciate the work of the preCICE project!
The tutorial steps you might want to look at are 8,18,24,44,46 & 62; the
code gallery also offers quite a few interesting codes, among which
https://dealii.or
Dear Wolfgang, dear Bruno,
Thank you very much again for your time & effort!
I think one should leave grad-div stabilization out of the discussion,
since it is only used as a counter-measure. It has nothing to do with
inf-sup stability,
but is rather used to additionally penalize violations of
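(For context, and not quoted from the thread: the grad-div term usually has the schematic form below, with a user- or element-wise chosen parameter gamma; it penalizes the discrete divergence rather than curing inf-sup issues.)

% Schematic grad-div stabilization term added to the weak momentum equation;
% \gamma is a user- or element-wise chosen parameter.
\gamma \left(\nabla \cdot \mathbf{u}_h, \, \nabla \cdot \mathbf{v}_h\right)_{\Omega}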
Hi everyone,
I am back with good & bad news!
- Good news first: I implemented a parallel direct solver (MUMPS via PETSc),
and going from a 2x2 blocked system to
a regular one (actually a still-blocked 1x1 system)
one just needs to adapt the setup phase, not reorder per component & not
use the "g
Dear Bruno,
thanks again for bothering with this!
I will try to do that, but it is a bit involved, since I was setting up
block matrices, which I need to change!
For pure Dirichlet, enclosed-flow problems I did the same as in step-55,
which is basically nothing
(the iterative solver handles the pressur
rescaling of some rows!
I will report back, if that helped!
Kind regards,
Richard
On Wednesday, February 19, 2020 at 03:03:57 UTC+1, Wolfgang Bangerth wrote:
>
> On 2/14/20 5:34 AM, Richard Schussnig wrote:
> >
> > Could the observed behaviour be caused by applying an iterative so
re aligned with the axis of the system.
> Are you solving it using Newton's method?
>
> Best
> Bruno
>
>
> On Friday, 14 February 2020 07:34:09 UTC-5, Richard Schussnig wrote:
>>
>> Hi everyone!
>> I am currently implementing some stationary Stokes sol
Hi everyone!
I am currently implementing some stationary Stokes solvers based on step-55.
Therein, Taylor-Hood elements are being used. One can easily check the
optimal order of convergence by comparing to the Kovasznay or Poiseuille
flow solution,
the first one being already implemented in this ste
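For reference, the element pair I mean is set up as in step-55; a minimal sketch (dimension and degree chosen arbitrarily here):

#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_system.h>
#include <iostream>

using namespace dealii;

int main()
{
  const unsigned int degree = 1;  // polynomial degree of the pressure space

  // Taylor-Hood pair as used in step-55: (Q_{degree+1})^dim velocity and
  // Q_degree pressure; degree = 1 gives the classical Q2-Q1 element in 2d.
  FESystem<2> fe(FE_Q<2>(degree + 1), 2,   // velocity components
                 FE_Q<2>(degree),     1);  // pressure

  std::cout << fe.get_name() << std::endl;
}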
On Tuesday, September 10, 2019 at 13:26:56 UTC+2, Bruno Blais wrote:
>
> I second Wolfgang's comment on the fact that Q1Q1 is not difficult to
> implement. You can also scale it to arbitrary Qn-Qn elements if you are
> interested in higher order.
> We have implemented such an approach in our code
UTC+2, Wolfgang Bangerth wrote:
>
> On 9/9/19 1:57 AM, Richard Schussnig wrote:
> >
> > FINALLY, MY QUESTIONS:
> >
> > Using the Q1Q1, I would in the end (FSI) need to come up with a space made
> > from Q1 elements with a discontinuity at the interfa
Hi everyone!
I am trying to implement the stabilizations presented in a paper by Bochev
et al. [2006],
which you may find here:
https://pdfs.semanticscholar.org/47be/4e317d4dcbbf1b70c781394e49c1dbf7e538.pdf
This one is parameter-free, and they present local projections for both
Q1Q1 and Q1Q0
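For reference (written from memory, so please check the paper for the exact signs and any viscosity scaling), the stabilization adds a local pressure-projection term of roughly this form, where Pi is the element-wise L2 projection of the pressure (onto constants for the Q1Q1 pair):

% Schematic pressure-projection stabilization term;
% \Pi denotes the local L2 projection of the pressure.
C_h(p_h, q_h) = \sum_{K \in \mathcal{T}_h} \int_K \left(p_h - \Pi p_h\right) \left(q_h - \Pi q_h\right) \,\mathrm{d}x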
Hi Konrad!
You can use the parallel direct solver in the Schur complement; for
orientation, take a look at step-57 (should be Navier-Stokes with a direct
solver for the A-block, if I'm not mistaken).
However, my inferior C++ knowledge did not allow me to do the factorization
in the constructor of
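For what it's worth, what I was trying to do looks roughly like the sketch below (using the serial SparseDirectUMFPACK only for brevity; "InverseOfA" is a made-up name in the spirit of the InverseMatrix class of step-22):

#include <deal.II/lac/sparse_direct.h>
#include <deal.II/lac/sparse_matrix.h>
#include <deal.II/lac/vector.h>

using namespace dealii;

// The factorization of the A-block is computed exactly once in the
// constructor, so every later vmult() (e.g. inside a Schur-complement
// operator) is only a forward/backward substitution.
class InverseOfA
{
public:
  InverseOfA(const SparseMatrix<double> &A)
  {
    factorization.initialize(A);  // expensive step, done once
  }

  void vmult(Vector<double> &dst, const Vector<double> &src) const
  {
    factorization.vmult(dst, src);  // apply A^{-1}
  }

private:
  SparseDirectUMFPACK factorization;
};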
Hi Pham!
From your description I do not really get why you are specifically doing
this, so maybe consider the following:
I assume you are flagging cells' material ids on the locally owned part due
to some custom condition - let's say some stress or function
you cannot formulate in the global co
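Just to illustrate what I mean, a minimal sketch (the criterion on the cell center is made up, and "flag_cells" is a hypothetical helper):

#include <deal.II/distributed/tria.h>

using namespace dealii;

// Loop over the locally owned cells of a parallel::distributed::Triangulation
// and set a material id based on some purely local, made-up criterion
// (here simply the cell-center position).
template <int dim>
void flag_cells(parallel::distributed::Triangulation<dim> &triangulation)
{
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->is_locally_owned())
      if (cell->center()[0] > 0.5)  // hypothetical criterion
        cell->set_material_id(1);
}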