[deal.II] global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-08 Thread Daniel Jodlbauer
    traints, false);
    SparsityTools::distribute_sparsity_pattern(dsp,
      dof_handler.n_locally_owned_dofs_per_processor(),
      MPI_COMM_WORLD, info.locally_relevant);
    system_matrix.reinit(info.locally_owned, info.locally_owned,
      dsp, MPI_COMM_WORLD);
I am a bit clueless on where to look for the error, so any suggestions are welcome. Best
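For context, a minimal sketch of the usual deal.II idiom this snippet appears to follow; the make_sparsity_pattern call completing the truncated first line and the contents of the info object are assumptions, not verbatim from the post:

    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/dofs/dof_tools.h>
    #include <deal.II/lac/constraint_matrix.h>
    #include <deal.II/lac/dynamic_sparsity_pattern.h>
    #include <deal.II/lac/sparsity_tools.h>
    #include <deal.II/lac/trilinos_sparse_matrix.h>
    using namespace dealii;

    // Sketch: standard parallel Trilinos matrix setup (assuming dof_handler
    // and constraints are set up as usual for a distributed triangulation).
    template <int dim>
    void setup_matrix(const DoFHandler<dim>          &dof_handler,
                      const ConstraintMatrix         &constraints,
                      TrilinosWrappers::SparseMatrix &system_matrix)
    {
      const IndexSet locally_owned = dof_handler.locally_owned_dofs();
      IndexSet locally_relevant;
      DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant);

      DynamicSparsityPattern dsp(locally_relevant);
      DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints,
                                      /*keep_constrained_dofs=*/false);
      SparsityTools::distribute_sparsity_pattern(
        dsp, dof_handler.n_locally_owned_dofs_per_processor(),
        MPI_COMM_WORLD, locally_relevant);

      system_matrix.reinit(locally_owned, locally_owned, dsp, MPI_COMM_WORLD);
    }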

[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-08 Thread Daniel Jodlbauer
Hi,
- Debug is enabled (at least for deal.II; I will have to rebuild Trilinos with debug later).
- I am not sure if I understood you correctly, but if I use a regular Triangulation, then every rank owns all dofs, and finally the initialization of the distributed vectors fails (as expected; see the sketch below). What I additi
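To illustrate the distinction discussed here, a hedged sketch (the dimension, mesh, and element are illustrative choices, not from the post): with a parallel::distributed::Triangulation each rank owns only a subset of the dofs, whereas with a plain Triangulation every rank owns all of them.

    #include <deal.II/base/mpi.h>
    #include <deal.II/distributed/tria.h>
    #include <deal.II/dofs/dof_handler.h>
    #include <deal.II/fe/fe_q.h>
    #include <deal.II/grid/grid_generator.h>
    using namespace dealii;

    int main(int argc, char **argv)
    {
      Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

      // Each rank owns only its local part of the dofs here; replacing this
      // with a plain Triangulation would replicate all dofs on every rank.
      parallel::distributed::Triangulation<2> tria(MPI_COMM_WORLD);
      GridGenerator::hyper_cube(tria);
      tria.refine_global(4);

      DoFHandler<2> dof_handler(tria);
      dof_handler.distribute_dofs(FE_Q<2>(1));
      const IndexSet owned = dof_handler.locally_owned_dofs();
      (void)owned; // inspect owned.n_elements() per rank
    }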

[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
Ok, if I use SolverGMRES<>, it reports the error "Column map of matrix does not fit with vector map!"; however, TrilinosWrappers::SolverGMRES seems to work.
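For reference, a hedged sketch of the Trilinos-wrapper GMRES that is reported to work here; the SolverControl parameters and the Jacobi preconditioner are illustrative assumptions:

    // Fragment, assuming system_matrix, solution, and rhs are the
    // corresponding TrilinosWrappers objects.
    SolverControl control(1000, 1e-10);
    TrilinosWrappers::SolverGMRES solver(control);
    TrilinosWrappers::PreconditionJacobi preconditioner;
    preconditioner.initialize(system_matrix);
    solver.solve(system_matrix, solution, rhs, preconditioner);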

[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
Yes, the initialization is done as above and after the renumbering.

[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
Here it is. The first run is without renumbering, the second one with dof renumbering. In serial mode (mpirun -np 1) both tests complete (just the error from the Subscriptor class). For mpirun -np 2 the first one finishes, while the second one fails. Now it returns an error message that did not occur previously.

[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
I think DoFRenumbering::Cuthill_McKee(dof_handler) does the renumbering only on the locally owned dofs, therefore these index sets won't change.
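A minimal sketch of the call in question; the point above is that the set of locally owned indices stays the same even though the ordering within each rank changes:

    // Fragment: rank-local Cuthill-McKee renumbering.
    DoFRenumbering::Cuthill_McKee(dof_handler);
    // For a distributed DoFHandler this renumbers within the locally owned
    // range, so dof_handler.locally_owned_dofs() contains the same indices
    // as before the call.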

[deal.II] Re: Trilinos SparseMatrix mmult bug

2016-09-21 Thread Daniel Jodlbauer
Sounds like we ran into the same problem: https://github.com/dealii/dealii/pull/2536, although I also experienced this issue in serial.

Re: [deal.II] Re: Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Daniel Jodlbauer
Actually, MUMPS is included in the Amesos solver used by TrilinosWrappers::SolverDirect("Amesos_Mumps"). You may have to recompile Trilinos with the corresponding flags to enable it (and probably deal.II as well).
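A hedged sketch of what selecting MUMPS through Amesos could look like, assuming Trilinos (and deal.II) were configured with MUMPS support; the variable names and the SolverControl settings are illustrative:

    // Fragment: direct solve through Amesos' MUMPS backend.
    SolverControl control(1, 0); // a direct solver needs no iteration control
    TrilinosWrappers::SolverDirect::AdditionalData data(
      /*output_solver_details=*/false, /*solver_type=*/"Amesos_Mumps");
    TrilinosWrappers::SolverDirect solver(control, data);
    solver.solve(system_matrix, solution, rhs);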

Re: [deal.II] Extract some blocks from a sparse block matrix

2017-03-01 Thread Daniel Jodlbauer
SparseDirectUMFPACK creates a copy of the matrix using iterators, which are not implemented for the BlockMatrixArray, so these would have to be added (and some minor other functions used within the factorize method). Alternatively, one could implement the factorize method for BlockMatrixArray se

Re: [deal.II] writing data to a file changes program output although this file is never used

2023-03-30 Thread Daniel Jodlbauer
I encountered similar funny bugs in the past. It was usually one or more of
- read/write out of bounds
- race condition or other multithread effects (e.g., interaction between TBB/OpenMP/Threads/BLAS/...)
- dangling reference (mainly with clang)
- compiler bug
(somewhat in decreasing order of likelihood)

[deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-08-30 Thread Daniel Jodlbauer
Dear all! I'm trying to use the MUMPS direct solver via PetscWrappers::SparseDirectMUMPS, but I keep running into a minor issue. When calling the solve function, it always throws an error, i.e., it takes the wrong path in petsc_solver.cc:680:

    void SparseDirectMUMPS::solve()
    {
    #ifdef PETSC_HAVE_MUMPS
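For orientation, a hedged reconstruction of the kind of guard the post points to in petsc_solver.cc; the body and the error text below are assumptions, not a verbatim copy of the deal.II sources:

    void SparseDirectMUMPS::solve()
    {
    #ifdef PETSC_HAVE_MUMPS
      // ... set up and run the MUMPS factorization and solve ...
    #else
      // If PETSC_HAVE_MUMPS is not defined, the call aborts here, which
      // matches the "wrong path" behavior described above.
      AssertThrow(false,
                  ExcMessage("Your PETSc installation does not include "
                             "MUMPS, which this solver requires."));
    #endif
    }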

Re: [deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-08-31 Thread Daniel Jodlbauer
Indeed, there was a previous installation on my system configured without MUMPS, which I had not noticed so far. It seems that during the installation of deal.II, it compiled against this version instead of the one installed via candi, despite showing the correct include/library paths during cmake.

Re: [deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-08-31 Thread Daniel Jodlbauer
I think the old petsc headers are in /usr/include/, libs in /usr/lib64/, as far as I remember (will check tomorrow).

> That doesn't make sense. Eclipse will just run "make". Or are you
> referring to the syntax highlighting?

I use the internal builder from Eclipse, and entered all library & inc

Re: [deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-09-01 Thread Daniel Jodlbauer
Ok, I'm going to talk to our admin to remove the old petsc (seems less troublesome than configuring it now and not being able to change that later easily). Thanks for your help, Daniel

[deal.II] Compute diagonal with Matrix-Free on adaptive meshes

2018-05-02 Thread Daniel Jodlbauer
Dear all! To verify my MatrixFree implementation, I compared its application to the classical matrix-vector multiplication (call it matrix *A*). This is done by computing the matrix *M* of the operator *MF* by applying it to all unit vectors. However, when I compute the diagonal in the same way
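A sketch of the verification procedure described here, assuming a serial run; the names mf_operator (the matrix-free operator) and M (the reconstructed matrix) are illustrative, not from the post:

    // Fragment: build the operator's matrix column by column via unit vectors.
    const unsigned int n = dof_handler.n_dofs();
    FullMatrix<double> M(n, n);
    Vector<double>     e(n), column(n);
    for (unsigned int j = 0; j < n; ++j)
      {
        e    = 0.;
        e(j) = 1.;                    // j-th unit vector
        mf_operator.vmult(column, e); // one matrix-free application
        for (unsigned int i = 0; i < n; ++i)
          M(i, j) = column(i);
      }
    // The diagonal read off this way is M(j, j); on adaptive meshes the rows
    // of dofs that constrain hanging nodes are where discrepancies show up.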

Re: [deal.II] Compute diagonal with Matrix-Free on adaptive meshes

2018-05-02 Thread Daniel Jodlbauer
But that's exactly my point: the error occurs in the dofs which constrain a hanging node, not the hanging nodes (dofs 5 and 7) themselves. I agree that the constrained dofs 5 and 7 can have arbitrary values. I will check whether the solution is going to be different in any case. I was just afraid tha

Re: [deal.II] Compute diagonal with Matrix-Free on adaptive meshes

2018-05-03 Thread Daniel Jodlbauer
The paper
>
> K. Kormann: A Time-Space Adaptive Method for the Schrödinger Equation,
> Commun. Comput. Phys. 2016, doi: 10.4208/cicp.101214.021015a
>
> describes this effect in section 5.3. Can you check there if this is what
> you see?
>
> Best,
> Martin

Re: [deal.II] Re: Can't configure with cmake

2024-02-07 Thread 'Daniel Jodlbauer' via deal.II User Group
Ran into the same problem. The bundled folder is not installed, since no bundled libraries are used, but the cmake targets still reference them. As a workaround, you can just create the bundled folder. I haven't had the time to dig through the cmake files to fix it, but for the moment opened an issue.