DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);
SparsityTools::distribute_sparsity_pattern(dsp,
                                           dof_handler.n_locally_owned_dofs_per_processor(),
                                           MPI_COMM_WORLD,
                                           info.locally_relevant);
system_matrix.reinit(info.locally_owned, info.locally_owned, dsp,
                     MPI_COMM_WORLD);
I am a bit clueless on where to look for the error, so any suggestions are
welcome.
Best
Hi,
- Debug is enabled (at least for deal.II; I will have to rebuild Trilinos
with debug later)
- I am not sure if I got you correctly, but if I use a regular
Triangulation, then every rank owns all dofs and the initialization of the
distributed vectors eventually fails (as expected).
What I additi
Ok, if I use SolverGMRES<>, it reports the error "Column map of matrix does
not fit with vector map!"; however, TrilinosWrappers::SolverGMRES seems to
work.
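Roughly, the two variants compared look something like this (a sketch only;
the solver control values, matrix, vectors and preconditioner are
placeholders, not the actual code):

  SolverControl control(1000, 1e-10);

  // deal.II's own GMRES, templated on the Trilinos vector type ...
  SolverGMRES<TrilinosWrappers::MPI::Vector> gmres(control);
  gmres.solve(system_matrix, solution, system_rhs, preconditioner);

  // ... versus the GMRES implementation provided by Trilinos itself
  TrilinosWrappers::SolverGMRES trilinos_gmres(control);
  trilinos_gmres.solve(system_matrix, solution, system_rhs, preconditioner);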
Yes, the initialization is done as above and after the renumbering.
Here it is.
The first run is without renumbering, the second one with dof renumbering.
In serial mode (mpirun -np 1) both tests complete (apart from the error
from the Subscriptor class).
With mpirun -np 2 the first one finishes while the second one fails.
Now it returns an error message that did not occur previously.
I think DoFRenumbering::Cuthill_McKee(dof_handler) does the renumbering
only on the locally owned dofs, so these index sets won't change.
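As a minimal sketch of what I mean (assuming the same dof_handler and the
info struct holding the IndexSets from the snippet at the top of the
thread):

  DoFRenumbering::Cuthill_McKee(dof_handler);
  // re-extract the index sets after renumbering; the locally owned set
  // should not change if only locally owned dofs are permuted
  info.locally_owned = dof_handler.locally_owned_dofs();
  DoFTools::extract_locally_relevant_dofs(dof_handler, info.locally_relevant);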
Sounds like we ran into the same problem
https://github.com/dealii/dealii/pull/2536, although I also experienced
this issue in serial.
Actually, MUMPS is included among the Amesos solvers that
TrilinosWrappers::SolverDirect can use (via the solver type "Amesos_Mumps").
You may have to recompile Trilinos with the corresponding flags to enable
it (and probably deal.II as well).
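A minimal sketch of how the solver type would be selected (variable names
are placeholders, and this assumes Trilinos was indeed built with MUMPS
support):

  SolverControl solver_control;
  TrilinosWrappers::SolverDirect::AdditionalData data(
    /*output_solver_details=*/false, "Amesos_Mumps");
  TrilinosWrappers::SolverDirect direct(solver_control, data);
  direct.solve(system_matrix, solution, system_rhs);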
On Thursday, February 9, 2017 at 20:52:16 UTC+1, Bruno Turcksin wrote:
SparseDirectUMFPACK creates a copy of the matrix using iterators, which are
not implemented for BlockMatrixArray, so these would have to be added
(along with some other minor functions used within the factorize method).
Alternatively, one could implement the factorize method for
BlockMatrixArray se
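For comparison, the usual workflow on a plain SparseMatrix, which does
provide the required iterators (names are placeholders):

  SparseDirectUMFPACK umfpack;
  umfpack.initialize(system_matrix);   // copies and factorizes the matrix
  umfpack.vmult(solution, system_rhs); // applies the inverse to the rhs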
I encountered similar funny bugs in the past. It was usually one or more of:
- read/write out of bounds
- race condition or other multithreading effects (e.g. interaction between
TBB/OpenMP/Threads/BLAS/...)
- dangling reference (mainly with clang)
- compiler bug
(somewhat in decreasing order of likelihood)
Dear all!
I'm trying to use the MUMPS direct solver via
PETScWrappers::SparseDirectMUMPS, but I keep running into a minor issue.
When calling the solve function, it always throws an error, i.e., it takes
the wrong path in petsc_solver.cc:680:
void SparseDirectMUMPS::solve()
{
#ifdef PETSC_HAVE_MUMPS
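For context, a minimal sketch of the call sequence I have in mind (variable
names are placeholders; this assumes PETSc was configured with MUMPS):

  SolverControl solver_control;
  PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
  solver.solve(system_matrix, solution, system_rhs);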
Indeed, there was a previous installation on my system configured without
MUMPS, which I had not noticed so far.
It seems that during the installation of deal.II, it compiled against this
version instead of the one installed via candi, despite the correct
include/library paths being shown during the cmake run.
I think the old PETSc headers are in /usr/include/ and the libraries in
/usr/lib64/, as far as I remember (I will check tomorrow).
> That doesn't make sense. Eclipse will just run "make". Or are you
> referring to the syntax highlighting?
>
I use the internal builder from Eclipse, and entered all library & include
paths.
Ok, I'm going to talk to our admin to remove the old PETSc (this seems less
troublesome than configuring it now and not being able to change it easily
later).
Thanks for your help,
Daniel
Dear all!
To verify my MatrixFree implementation, I compared its application with the
classical matrix-vector multiplication (call that matrix *A*).
This is done by computing the matrix *M* of the operator *MF*, i.e. by
applying it to all unit vectors.
However, when I compute the diagonal in the same way
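A minimal sketch of this column-by-column reconstruction (the operator
object mf_operator and its vmult are stand-ins for the actual matrix-free
implementation):

  const unsigned int n = dof_handler.n_dofs();
  FullMatrix<double> M(n, n);
  Vector<double>     e_j(n), column(n);
  for (unsigned int j = 0; j < n; ++j)
    {
      e_j    = 0.;
      e_j(j) = 1.;
      mf_operator.vmult(column, e_j); // apply MF to the unit vector e_j
      for (unsigned int i = 0; i < n; ++i)
        M(i, j) = column(i);          // column j of M is MF * e_j
    }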
But that's exactly my point: the error occurs in the dofs which constrain a
hanging node, not in the hanging nodes (dofs 5 and 7) themselves. I agree
that the constrained dofs 5 and 7 can have arbitrary values.
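As an aside, a minimal sketch of excluding the constrained dofs from such a
comparison ('constraints' and the two result vectors are placeholders):

  // zero the entries of the constrained (hanging-node) dofs in both
  // results, then compare only the remaining, unconstrained entries
  constraints.set_zero(result_matrix_based);
  constraints.set_zero(result_matrix_free);
  result_matrix_based -= result_matrix_free;
  const double max_diff = result_matrix_based.linfty_norm();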
I will check whether the solution is going to be different in any case. I
was just afraid tha
> The paper
>
> K. Kormann: A Time-Space Adaptive Method for the Schrödinger Equation,
> Commun. Comput. Phys. 2016, doi: 10.4208/cicp.101214.021015a
>
> describes this effect in section 5.3. Can you check there if this is what
> you see?
>
> Best,
> Martin
I ran into the same problem. The bundled folder is not installed, since no
bundled libraries are used, but the cmake targets still reference them.
As a workaround, you can just create the bundled folder.
I haven't had the time to dig through the cmake files to fix it, but for
the moment I opened an issue.