On Tuesday, May 23, 2017 at 5:22:44 PM UTC+2, Juan Carlos Araujo Cabarcas wrote:
>
> Dear all,
>
> There are features in SLEPc 3.7 that I am interested in, and will try to
> make a fresh re-installation soon.
>
> I have been using deal.II 8.3 with complex arithmetic from the branch:
> git
Hi Juan Carlos,
Have a look at this issue: https://github.com/dealii/dealii/issues/2033
You will see some "open" issues linked to it, which are known limitations
when using complex arithmetic.
The main one has to do with constraints. Currently we only allow
real-valued constraints (Dirichlet BC
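For reference, this is roughly what the real-valued constraint machinery looks like in deal.II 8.x (ConstraintMatrix plus VectorTools::interpolate_boundary_values); a minimal sketch only, where the helper name, boundary id and boundary value are made-up placeholders:

  #include <deal.II/base/function.h>
  #include <deal.II/dofs/dof_handler.h>
  #include <deal.II/dofs/dof_tools.h>
  #include <deal.II/lac/constraint_matrix.h>
  #include <deal.II/numerics/vector_tools.h>

  using namespace dealii;

  // Hypothetical helper: collect the constraints for a scalar field.
  template <int dim>
  void make_constraints (const DoFHandler<dim> &dof_handler,
                         ConstraintMatrix      &constraints)
  {
    constraints.clear ();
    DoFTools::make_hanging_node_constraints (dof_handler, constraints);

    // The inhomogeneities stored in the ConstraintMatrix are plain
    // doubles, i.e. real-valued -- this is the restriction referred to
    // above once PetscScalar is complex.
    VectorTools::interpolate_boundary_values (dof_handler,
                                              0,                          // boundary id (assumed)
                                              ConstantFunction<dim> (1.), // real-valued BC data
                                              constraints);
    constraints.close ();
  }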
Hi Prof. Wolfgang,
Many thanks!
Originally, I suspected that solving two linear systems simultaneously
with two threads would reduce the run time. But now it seems that this idea
increases the complexity of communication between MPI communicators and of
the coding, and is also not certain to decrease
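For concreteness, the "two threads, two linear systems" idea could look something like the following sketch, using deal.II's Threads::new_task and one communicator per system; solve_thermal and solve_stokes are hypothetical stand-ins for the two existing solve routines:

  #include <deal.II/base/mpi.h>
  #include <deal.II/base/thread_management.h>

  using namespace dealii;

  // Hypothetical solve routines; each one only communicates on the
  // communicator it is handed.
  void solve_thermal (MPI_Comm comm);
  void solve_stokes  (MPI_Comm comm);

  void solve_both_concurrently (MPI_Comm comm_thermal, MPI_Comm comm_stokes)
  {
    // Launch the two solves as independent tasks ...
    Threads::Task<void> task_1 = Threads::new_task (&solve_thermal, comm_thermal);
    Threads::Task<void> task_2 = Threads::new_task (&solve_stokes,  comm_stokes);

    // ... and wait for both before the solutions are used.
    task_1.join ();
    task_2.join ();
  }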
Juan Carlos,
There are features in SLEPc 3.7 that I am interested in, and will try to
make a fresh re-installation soon.
I have been using deal.II 8.3 with complex arithmetic from the branch:
git checkout branch_petscscalar_complex
and I wonder about the state of that branch, or if it
Jack,
“The way to do this is to clone the MPI communicator you use for your
overall
problem once for each linear system.”
That means for my problem I have to copy the Vector and Matrix of one
linear system (either the thermal diffusion or the Stokes flow) to another
Vector and Matrix which are
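A minimal sketch of what "cloning the communicator" might mean here, assuming deal.II's Utilities::MPI::duplicate_communicator helper (plain MPI_Comm_dup would do the same) and made-up variable names:

  #include <deal.II/base/mpi.h>

  using namespace dealii;

  void setup_communicators ()
  {
    // One duplicate of the global communicator per linear system, so that
    // the messages of the two solves cannot interfere with each other.
    MPI_Comm comm_thermal = Utilities::MPI::duplicate_communicator (MPI_COMM_WORLD);
    MPI_Comm comm_stokes  = Utilities::MPI::duplicate_communicator (MPI_COMM_WORLD);

    // The equivalent plain MPI calls would be
    //   MPI_Comm_dup (MPI_COMM_WORLD, &comm_thermal);
    //   MPI_Comm_dup (MPI_COMM_WORLD, &comm_stokes);

    // ... build the thermal-diffusion system on comm_thermal and the
    //     Stokes system on comm_stokes ...

    MPI_Comm_free (&comm_thermal);
    MPI_Comm_free (&comm_stokes);
  }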
Jason,
2017-05-23 11:47 GMT-04:00 :
> Test project /home/software/PRISMS-PF/dealii-8.5.0/build/tests/quick_tests
> Start 1: step.debug
> 1/8 Test #1: step.debug ... ***Failed 43.52 sec
> ../../lib/libdeal_II.g.so.8.5.0: undefined reference to `tbb::interface5::inte
Dear all,
There are features in SLEPc 3.7 that I am interested in, and will try to
make a fresh re-installation soon.
I have been using deal.II 8.3 with complex arithmetic from the branch:
git checkout branch_petscscalar_complex
and I wonder about the state of that branch, or if it has been
Hi Prof. Wolfgang,
Thanks so much!
“The way to do this is to clone the MPI communicator you use for your
overall
problem once for each linear system.”
That means for my problem I have to copy the Vector and Matrix of one
linear system (either the thermal diffusion or the Stokes flow) to another
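One possible reading of the quoted advice (again only a hedged sketch with made-up names): the existing Vector and Matrix objects are not copied at all; each system's PETSc objects simply take that system's duplicated communicator when they are created, for example:

  #include <deal.II/base/index_set.h>
  #include <deal.II/lac/petsc_parallel_vector.h>

  using namespace dealii;

  void create_vectors (const IndexSet &owned_thermal,
                       const IndexSet &owned_stokes,
                       const MPI_Comm  comm_thermal,
                       const MPI_Comm  comm_stokes)
  {
    // Each right-hand-side vector lives on its own duplicated communicator;
    // the thermal data and the Stokes data are never copied into each other.
    PETScWrappers::MPI::Vector rhs_thermal (owned_thermal, comm_thermal);
    PETScWrappers::MPI::Vector rhs_stokes  (owned_stokes,  comm_stokes);

    // ... assemble and solve each system with its own objects ...
  }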