[deal.II] global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-08 Thread Daniel Jodlbauer
Hi all !

I am trying to renumber the degrees of freedom globally using code like 
this:

   vector<types::global_dof_index> new_number(dof_handler.n_dofs());
   for (unsigned int i = 0; i < dof_handler.n_dofs(); i++)
      new_number[i] = dof_handler.n_dofs() - i - 1; // simple example

   vector<types::global_dof_index> local_new_number;
   for (const types::global_dof_index dof : info.locally_owned)
      local_new_number.push_back(new_number[dof]);

   dof_handler.renumber_dofs(local_new_number);

   info.locally_owned = dof_handler.locally_owned_dofs();
   DoFTools::extract_locally_relevant_dofs(dof_handler, info.locally_relevant);

with a DoFHandler built upon a parallel::shared::Triangulation.

However, this seems to break the solution of TrilinosWrappers::SolverDirect:

   LA::MPI::Vector tmp_newton, tmp_rhs;

   tmp_newton.reinit(info.locally_owned, MPI_COMM_WORLD);
   tmp_rhs.reinit(info.locally_owned, MPI_COMM_WORLD);

   tmp_newton = newton_update;
   tmp_rhs = system_rhs;

   solver.solve(system_matrix, tmp_newton, tmp_rhs);

   cout << fmt::format("[{:d}] mat = {:e}", rank, system_matrix.l1_norm()) << endl;
   cout << fmt::format("[{:d}] rhs = {:e}", rank, tmp_rhs.l2_norm()) << endl;
   cout << fmt::format("[{:d}] sol = {:e}", rank, tmp_newton.l2_norm()) << endl;

which returns 0 for the solution of the linear system (the other two 
values are the same as without the renumbering step).

Also, I am not sure if I set up the vectors and matrices correctly:
   solution.reinit(info.locally_owned, info.locally_relevant, MPI_COMM_WORLD); // ghosted for fe_values
   old_timestep_solution.reinit(info.locally_owned, info.locally_relevant, MPI_COMM_WORLD); // same as solution
   newton_update.reinit(info.locally_owned, info.locally_relevant, MPI_COMM_WORLD); // ghosted, bc of solution += newton_update

   system_rhs.reinit(info.locally_owned, MPI_COMM_WORLD); // ghosted / non-ghosted ?

   DynamicSparsityPattern dsp(info.locally_relevant);
   DoFTools::make_flux_sparsity_pattern(dof_handler, dsp, constraints, false);

   SparsityTools::distribute_sparsity_pattern(dsp, dof_handler.n_locally_owned_dofs_per_processor(), MPI_COMM_WORLD, info.locally_relevant);

   system_matrix.reinit(info.locally_owned, info.locally_owned, dsp, MPI_COMM_WORLD);


I am a bit clueless on where to look for the error, so any suggestions are 
welcome.


Best regards

Daniel Jodlbauer

-- 
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see 
https://groups.google.com/d/forum/dealii?hl=en
--- 
You received this message because you are subscribed to the Google Groups 
"deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-08 Thread Daniel Jodlbauer
Hi,

- Debug is enabled (at least for deal.II; I will have to rebuild Trilinos 
with debug later)
- I am not sure if I got you correctly, but if I use a regular 
Triangulation, then every rank owns all dofs, and eventually the initialization 
of the distributed vectors fails (as expected).

What I additionally tried (with 2 ranks) is:

1) assemble the rhs / matrix in serial,
2) create a partition by hand: [0, n/2), [n/2, n)
3) copy/distribute
4) solve in parallel

which works.

However, when I changed the partition into something like 
 { 0, 2, 4, ... } , { 1, 3, 5, ... } 
it fails, which makes me believe that non-contiguous partitions are not 
(completely) supported by deal.II or Trilinos.
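
(For reference, a minimal sketch -- not from the original post -- of how the two 
partitions could be set up as IndexSets for two ranks; n_dofs and rank are 
placeholder variables:)

   // contiguous partition: rank 0 owns [0, n/2), rank 1 owns [n/2, n)
   IndexSet contiguous(n_dofs);
   contiguous.add_range(rank == 0 ? 0 : n_dofs / 2,
                        rank == 0 ? n_dofs / 2 : n_dofs);

   // interleaved partition: rank 0 owns the even indices, rank 1 the odd ones
   IndexSet interleaved(n_dofs);
   for (types::global_dof_index i = rank; i < n_dofs; i += 2)
      interleaved.add_index(i);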



[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
OK, if I use SolverGMRES<>, it reports the error "Column map of matrix does 
not fit with vector map!"; TrilinosWrappers::SolverGMRES, however, seems to 
work.



[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
Yes, the initialization is done as above and after the renumbering.



[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
Here it is.

The first run is without renumbering, the second one with dof renumbering.

In serial mode (mpirun -np 1) both tests complete (apart from the error from the 
Subscriptor class).
For mpirun -np 2 the first one finishes, while the second one fails.

Now it returns an error message that did not occur previously, but I can't 
extract any useful information from it (the Trilinos solver returns error 
code -1 in file "trilinos_solver.h", line 485).

// NOTE: the archive stripped the header names from the #include lines; the
// headers listed here are the ones the code below requires.
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/base/utilities.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe_q.h>
#include <deal.II/fe/fe_values.h>
#include <deal.II/grid/grid_generator.h>
#include <deal.II/grid/tria.h>
#include <deal.II/lac/constraint_matrix.h>
#include <deal.II/lac/dynamic_sparsity_pattern.h>
#include <deal.II/lac/full_matrix.h>
#include <deal.II/lac/sparsity_tools.h>
#include <deal.II/lac/trilinos_solver.h>
#include <deal.II/lac/trilinos_sparse_matrix.h>
#include <deal.II/lac/trilinos_vector.h>
#include <deal.II/lac/vector.h>

#include <mpi.h>

#include <iostream>
#include <vector>

using std::vector;
using std::cout;
using std::endl;

using namespace dealii;

class Test
{
   public:
  unsigned int rank;
  unsigned int n_ranks;

  parallel::shared::Triangulation<2> triangulation;

  DoFHandler<2> dof_handler;

  FE_Q<2> fe;
  QGauss<2> quadrature;
  FEValues<2> fe_values;

  ConstraintMatrix constraints;

  IndexSet locally_owned_dofs;
  IndexSet locally_relevant_dofs;

  TrilinosWrappers::SparseMatrix system_matrix;
  TrilinosWrappers::MPI::Vector system_rhs, solution;

  Test(const bool do_renumber) :
rank(Utilities::MPI::this_mpi_process(MPI_COMM_WORLD)),  //
n_ranks(Utilities::MPI::n_mpi_processes(MPI_COMM_WORLD)),

triangulation(MPI_COMM_WORLD), dof_handler(triangulation), fe(1), quadrature(2),  //
fe_values(fe, quadrature, update_gradients | update_values | update_JxW_values)
  {
 cout << "Start";

 if (do_renumber)
cout << " with renumbering" << endl;
 else
cout << " without renumbering" << endl;

 GridGenerator::hyper_cube(triangulation);
 triangulation.refine_global(4);

 dof_handler.distribute_dofs(fe);

 constraints.clear();
 constraints.close();

 if (do_renumber) renumber();

 init_structures();

 assemble();

 solve();

 cout << "Finished";

 if (do_renumber)
cout << " with renumbering" << endl;
 else
cout << " without renumbering" << endl;
  }

  void init_structures()
  {
 locally_owned_dofs = dof_handler.locally_owned_dofs();
 DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

 solution.reinit(locally_owned_dofs, locally_relevant_dofs, MPI_COMM_WORLD);

 system_rhs.reinit(locally_owned_dofs, MPI_COMM_WORLD);

 DynamicSparsityPattern dsp(dof_handler.n_dofs(), dof_handler.n_dofs());  //(locally_relevant_dofs);
 DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints, false);

 SparsityTools::distribute_sparsity_pattern(dsp, dof_handler.n_locally_owned_dofs_per_processor(), MPI_COMM_WORLD, locally_relevant_dofs);

 system_matrix.reinit(locally_owned_dofs, locally_owned_dofs, dsp, MPI_COMM_WORLD);
  }

  void renumber()
  {
 locally_owned_dofs = dof_handler.locally_owned_dofs();
 DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

 vector<types::global_dof_index> new_number(dof_handler.n_dofs());
 for (unsigned int i = 0; i < dof_handler.n_dofs(); i++)
new_number[i] = dof_handler.n_dofs() - i - 1;

 vector<types::global_dof_index> local_new_number;
 for (const types::global_dof_index dof : locally_owned_dofs)
local_new_number.push_back(new_number[dof]);

 dof_handler.renumber_dofs(local_new_number);
  }

  void assemble()
  {
 for (auto cell = dof_handler.begin_active(); cell != dof_handler.end(); ++cell)
 {
if ( !cell->is_locally_owned()) continue;

fe_values.reinit(cell);

Vector<double> local_rhs(fe.dofs_per_cell);
local_rhs = 0;

FullMatrix<double> local_matrix(fe.dofs_per_cell, fe.dofs_per_cell);
local_matrix = 0;

for (unsigned int q = 0; q < fe_values.n_quadrature_points; q++)
{
   for (unsigned int i = 0; i < fe.dofs_per_cell; i++)
   {
  for (unsigned int j = 0; j < fe.dofs_per_cell; j++)
  {
 local_matrix(i, j) += fe_values.shape_value(i, q) * fe_values.shape_value(j, q) * fe_values.JxW(q);
  }

  local_rhs(i) += (fe_values.shape_value(i, q) * fe_values.JxW(q));
   }
}

vector<types::global_dof_index> local_dofs(fe.dofs_per_cell);
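
(The attachment is truncated here by the archive. A hypothetical sketch of how 
such an assembly loop typically continues in deal.II -- not part of the original 
attachment -- would be:)

// assumed continuation of assemble(): the standard distribute_local_to_global pattern
cell->get_dof_indices(local_dofs);
constraints.distribute_local_to_global(local_matrix, local_rhs, local_dofs,
                                       system_matrix, system_rhs);
// ... and after the cell loop:
// system_matrix.compress(VectorOperation::add);
// system_rhs.compress(VectorOperation::add);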
   

[deal.II] Re: global dof renumbering with parallel triangulation and trilinos direct solver (Bug ?)

2016-08-10 Thread Daniel Jodlbauer
I think DoFRenumbering::Cuthill_McKee(dof_handler) does the renumbering 
only on the locally owned dofs; therefore these index sets won't change.



[deal.II] Re: Trilinos SparseMatrix mmult bug

2016-09-21 Thread Daniel Jodlbauer
Sounds like we ran into the same problem 
(https://github.com/dealii/dealii/pull/2536), although I also experienced 
this issue in serial.



Re: [deal.II] Re: Renumbering dofs with petsc + block + MPI + Direct solver work around

2017-02-09 Thread Daniel Jodlbauer
Actually, MUMPS is included in the Amesos solver interface used by 
TrilinosWrappers::SolverDirect("Amesos_Mumps"). You may have to recompile 
Trilinos with the corresponding flags to enable it (and probably deal.II as 
well).
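
A minimal sketch of how that selection looks through the deal.II wrappers (the 
SolverControl settings are placeholders):

   SolverControl control(1, 0.); // tolerances are irrelevant for a direct solve
   TrilinosWrappers::SolverDirect::AdditionalData data(
      /*output_solver_details=*/false, /*solver_type=*/"Amesos_Mumps");
   TrilinosWrappers::SolverDirect direct_solver(control, data);
   direct_solver.solve(system_matrix, solution, system_rhs);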

On Thursday, February 9, 2017 at 20:52:16 UTC+1, Bruno Turcksin wrote:
>
> 2017-02-09 14:33 GMT-05:00 Spencer Patty >: 
>
> > Interesting,  I wondered if SuperLU_dist might be parallel but I hadn't 
> > looked into it yet.  If it does work, then that certainly makes things 
> much 
> > simpler since I have trilinos integrated well.  I will look into 
> installing 
> > it and see if it will work. I see what you mean by it not being the 
> easiest 
> > code to install. 
> > 
> > Once it is installed, I then have to link it into trilinos?  Then it is 
> > available as an option in AdditionalData. 
> Yes, deal.II just passes the options to Amesos. You need to install 
> parmetis, SuperLU_dist, and then finally Trilinos. I would encourage 
> you to use candi or spack to install deal.II with SuperLU_dist support. 
> If you want to install everything yourself, you can take a look at 
> candi to see how to install SuperLU_dist and enable it in Trilinos. 
>
> Best, 
>
> Bruno 
>



Re: [deal.II] Extract some blocks from a sparse block matrix

2017-03-01 Thread Daniel Jodlbauer
SparseDirectUMFPACK creates a copy of the matrix using iterators, which are 
not implemented for the BlockMatrixArray, so these would have to be added 
(and some minor other functions used within the factorize method).
Alternatively, one could implement the factorize method for 
BlockMatrixArray separately, which is probably easier.



Re: [deal.II] writing data to a file changes program output although this file is never used

2023-03-30 Thread Daniel Jodlbauer
I encountered similar funny bugs in the past. It was usually one or more of 
- read/write out of bounds
- race condition or other multithread effects (e.g. interaction between 
TBB/OpenMP/Threads/BLAS/...)
- dangling reference (mainly with clang)
- compiler bug
(somewhat in decreasing order of likelihood).

I would try to
- disable parallelization as much as possible
- run in release mode instead of debug: may give some hint if it is a race 
condition / timing issue
- use asan/tsan: I've found the sanitizers more reliable than valgrind, in 
particular for multithreaded code, although I haven't used those with 
deal.II (requires recompilation)
- debug both variants in parallel and check for differences

Some more desperate attempts:
- try different compiler
- set variables to some fixed values here and there
- output to stringstream instead of file
- insert sleep/wait commands
(although probably none of these really helps to identify the problem)

You can also define _GLIBCXX_ASSERTIONS to enable bounds checks for 
operator[] in libstdc++.
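
A small standalone illustration of what that buys you (the define can equivalently 
be passed on the compiler command line as -D_GLIBCXX_ASSERTIONS):

   // must be defined before any libstdc++ header is included
   #define _GLIBCXX_ASSERTIONS
   #include <vector>

   int main()
   {
      std::vector<int> v(3);
      return v[5]; // out of bounds: with the assertions enabled this aborts
                   // with a message instead of silently reading garbage
   }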

Good luck!

bruno.t...@gmail.com wrote on Thursday, March 30, 2023 at 20:18:31 UTC+2:

> Hello,
>
> Usually when I have this kind of bug, there are two possibilities: 
>  1. I am using an un-initialized value
>  2. I am writing out of bound
>
> What I do is use valgrind with my code in Debug mode and without TBB 
> enabled; otherwise you get a difficult-to-understand backtrace like here. 
> However, what I've also encountered is that in Debug mode, the compiler 
> would initialize the variable but it wouldn't do it in Release... So make 
> sure that all the variables are initialized and, when using a std::vector, 
> use at() instead of operator[]. Only use at() in Debug mode because it is 
> much slower than operator[]. Other than that, rerun valgrind without TBB 
> enabled; you should get something more meaningful.
>
> Best,
>
> Bruno
>
>
> On Wednesday, March 29, 2023 at 12:41:24 PM UTC-4 Simon wrote:
>
>> I was pleased too early -- the issue still persists.
>>
>> As described above, adding the reference symbol & to JxW and global_index 
>> makes a difference in my program, but is not the solution. 
>>
>> I ran my program in debug mode using
>> valgrind --tool=memcheck --leak-check=full ./my_program
>>
>> The valgrind output associated with
>> void fun(const Tensor<2,3> & F,
>>const double JxW,
>>const unsigned int global_index,
>>std::vector> & sensitivity_matrix)
>> {
>> // use F and JxW to write some values into sensitivity_matrix, which 
>> is passed by reference since it is a large matrix
>> if(print_or_not==true)  std::cout<<"print anything here..."<<std::endl;
>> }
>> is 
>> [image: memcheck_NotOutC.png]
>>
>> The valgrind output associated with
>> void fun(const Tensor<2,3> & F,
>>const double JxW,
>>const unsigned int global_index,
>>std::vector> & sensitivity_matrix)
>> {
>> // use F and JxW to write some values into sensitivity_matrix, which 
>> is passed by reference since it is a large matrix
>> // if(print_or_not==true)  std::cout<<"print anything here..."<<std::endl;
>> }
>> [image: memcheck_outC.png]
>>
>> The differences are "only" possibly lost messages, but the backtraces are 
>> not really clear to me.
>> Do you see any problems with my code by inspecting the memory check?
>>
>> Based on your experience, I would appreciate your feedback regarding how 
>> to debug my problem further.
>> Adding print statements clearly is not really helpful if my program shows 
>> undefined behavior and 
>> if the print statement itself is what causes differences.
>>
>> Thank you!
>>
>> On Wednesday, March 29, 2023 at 3:02:52 PM UTC+2 Simon wrote:
>>
>> Of course, Approach B is the right way to go if I want to produce 
>> reliable results.
>>
>> After plenty of hours of debugging, I found a possible source for the 
>> differences:
>> The member function called by my assembly routine has the following 
>> signature
>> void fun(const Tensor<2,3> & F,
>>const double JxW,
>>const unsigned int global_index,
>>std::vector> & sensitivity_matrix)
>> {
>> //use F and JxW to write some values into sensitivity_matrix at 
>> global_index
>> }
>>
>> To debug my problem, I wrapped the << statement into an if-condition that 
>> I controlled via my parameter file:
>> {
>> // use F and JxW to write some values into sensitivity_matrix, which 
>> is passed by reference since it is a large matrix
>> if(print_or_not==true)  std::cout<<"print anything here..."<<std::endl;
>> }
>> Interestingly, setting print_or_not==false or just commenting out the 
>> line changes some results of my program.
>> In other words, the output obtained by
>> {
>> // use F and JxW to write some values into sensitivity_matrix, which 
>> is passed by reference since it is a large matrix
>> if(false)  std::cout<<"print anything here..."<<std::endl;
>> }
>> is different from the output obtained b

[deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-08-30 Thread Daniel Jodlbauer
Dear all!

I'm trying to use the MUMPS direct solver via 
PetscWrappers::SparseDirectMUMPS but I keep running into a minor issue. 
When calling the solve function, it always throws an error, i.e. it takes the 
wrong path

in petsc_solver.cc:680

void SparseDirectMUMPS::solve()
{
#ifdef PETSC_HAVE_MUMPS
   // ... do solving stuff
   // ... is never going to be called when inside the cc file :(
#else
   Assert(mumps not found)
#endif
}


It seems the PETSC_HAVE_MUMPS flag was not recognized during compilation of 
deal.II. When I copy the solving code into the header "petsc_solver.h", 
everything works, so PETSc seems to be configured correctly with MUMPS.

Do I have to set a specific flag during installation to enable petsc+mumps 
in deal.II? (btw it works through TrilinosWrappers::SolverDirect and 
selecting MUMPS)

Since I am not too familiar with the whole cmake stuff, I have no idea how 
to fix / find the error in the installation (except for copying the code to 
the header file).

I also tried including "petscconf.h" (where PETSC_HAVE_MUMPS is defined) 
inside the .cc file, but it did not resolve the issue.
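
A quick, deal.II-independent way to check which petscconf.h a given compiler 
invocation actually picks up (a sketch; compile it with the same include paths 
that the deal.II build uses):

   #include <petscconf.h>
   #include <iostream>

   int main()
   {
   #ifdef PETSC_HAVE_MUMPS
      std::cout << "PETSC_HAVE_MUMPS is defined" << std::endl;
   #else
      std::cout << "PETSC_HAVE_MUMPS is NOT defined" << std::endl;
   #endif
      return 0;
   }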


Best regards

Daniel



Re: [deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-08-31 Thread Daniel Jodlbauer
Indeed, there was a previous installation on my system configured without 
MUMPS, which I had not noticed so far.

It seems that during the installation of deal.II, it compiled against this 
version instead of the one installed via candi, despite showing the correct 
include/library paths during cmake. When compiling my code (using Eclipse), 
it chooses the correct version.

Since I have no root access, I cannot remove the old version of PETSc. Is 
it possible to somehow force deal.II to use the correct PETSc installation?
(PETSC_DIR/INCLUDE/LIB ... are all pointing towards the correct 
installation, but it still prefers the one in /usr/include/.)


Thanks for pointing me in the right direction!

Best regards,

Daniel



Re: [deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-08-31 Thread Daniel Jodlbauer
I think the old PETSc headers are in /usr/include/ and the libs in /usr/lib64/, as 
far as I remember (will check tomorrow).

That doesn't make sense. Eclipse will just run "make". Or are you 
> referring to the syntax highlighting? 
>

I use the internal builder from Eclipse, and entered all library & include 
paths for deal/petsc/trilinos/... in the settings. My projects compile and 
also use code inside the PETSC_HAVE_MUMPS flags.



Re: [deal.II] PETSC_HAVE_MUMPS not set correctly ?

2017-09-01 Thread Daniel Jodlbauer
OK, I'm going to talk to our admin to remove the old PETSc (that seems less 
troublesome than configuring it now and not being able to change it later 
easily).


Thanks for your help,

Daniel



[deal.II] Compute diagonal with Matrix-Free on adaptive meshes

2018-05-02 Thread Daniel Jodlbauer
Dear all!

To verify my MatrixFree implementation, I compared its application to the 
classical matrix-vector multiplication (call it matrix *A*).
This is done by computing the matrix *M* of the operator *MF* by applying 
it to all unit-vectors.

However, when I compute the diagonal in the same way as LaplaceOperator 
does it (copied it), I get values different from the assembled diagonal 
once I have hanging nodes.
This error only occurs in the compute_diagonal function, i.e surprisingly 
*M* == *A, *but A(i,i) != compute_diagonal(i) (if hanging nodes are present)

My test-case starts with a 2x2 grid and refines one cell; apart from 
hanging node constraints no other constraints are active.

Some more details:

Constraints: (these look suspicious as well, shouldn't there be 4 
constrained dofs instead of 2?)
5 4:  0.5
5 8:  0.5
7 6:  0.5
7 8:  0.5

Error occurs at (ignoring 5 and 7, since they are constrained)

dof | M, A | diag
 4  | 1.5  |  2.0
 6  | 1.5  |  2.0
 8  | 2.8  |  4.0

I will provide an MWE tomorrow, but maybe someone else has an idea already.


Thanks

Daniel



Re: [deal.II] Compute diagonal with Matrix-Free on adaptive meshes

2018-05-02 Thread Daniel Jodlbauer
But that's exactly my point: the error occurs in the dofs which constrain a 
hanging node, not in the hanging nodes (dofs 5 and 7) themselves. I agree that the 
constrained dofs 5 and 7 can have arbitrary values.
I will check whether the solution is going to be different in any case. I 
was just afraid that wrong values on the diagonal could cause problems in 
the MGSmoother afterwards.

On Wednesday, May 2, 2018 at 7:38:29 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> Daniel, 
>
> > To verify my MatrixFree implementation, I compared its application to 
> > the classical matrix-vector multiplication (call it matrix *A*). 
> > This is done by computing the matrix *M* of the operator *MF* by 
> > applying it to all unit-vectors. 
> > 
> > However, when I compute the diagonal in the same way as LaplaceOperator 
> > does it (copied it), I get values different from the assembled diagonal 
> > once I have hanging nodes. 
>
> The rows and columns corresponding to hanging nodes are empty with the 
> exception of the diagonal entry -- for which we use a value that has the 
> correct order of magnitude, but that is otherwise unspecified. In other 
> words, what diagonal value you have there is unimportant as long as it 
> is nonzero because these degrees of freedom don't couple with all of the 
> other degrees of freedom, and as long as you overwrite the values of the 
> computed solution through ConstraintMatrix::distribute(). 
>
> It is not surprising to me that you get different diagonal entries when 
> you use two different methods. The question is whether you get a 
> different *solution* vector after distributing to hanging nodes. 
>
> Best 
>   W. 
>
> -- 
>  
> Wolfgang Bangerth  email: bang...@colostate.edu 
>  
> www: http://www.math.colostate.edu/~bangerth/ 
>



Re: [deal.II] Compute diagonal with Matrix-Free on adaptive meshes

2018-05-03 Thread Daniel Jodlbauer
The effect described in the paper looks indeed similar, thanks for the hint.


On Wednesday, May 2, 2018 at 20:34:08 UTC+2, Martin Kronbichler wrote:
>
> Dear Daniel,
>
> the problem is, as far as I can tell, the fact that once you assemble into 
> a matrix and once into a vector. The paper
>
> K. Kormann: A Time-Space Adaptive Method for the Schrödinger Equation, 
> Commun. Comput. Phys. 2016, doi: 10.4208/cicp.101214.021015a
>
> describes this effect in section 5.3. Can you check there if this is what 
> you see?
>
> Best,
> Martin
>
> On 02.05.2018 20:17, Daniel Jodlbauer wrote:
>
> But thats exactly my point, the error occurs in the dofs which constrain a 
> hanging node, not the hanging nodes (dofs 5 and 7) itself. I agree that the 
> constrained dofs 5 and 7 can have arbitrary values. 
> I will check whether the solution is going to be different in any case. I 
> was just afraid that wrong values on the diagonal could cause problems in 
> the MGSmoother afterwards.
>
> On Wednesday, May 2, 2018 at 7:38:29 PM UTC+2, Wolfgang Bangerth wrote: 
>>
>>
>> Daniel, 
>>
>> > To verify my MatrixFree implementation, I compared its application to 
>> > the classical matrix-vector multiplication (call it matrix *A*). 
>> > This is done by computing the matrix *M* of the operator *MF* by 
>> > applying it to all unit-vectors. 
>> > 
>> > However, when I compute the diagonal in the same way as LaplaceOperator 
>> > does it (copied it), I get values different from the assembled diagonal 
>> > once I have hanging nodes. 
>>
>> The rows and columns corresponding to hanging nodes are empty with the 
>> exception of the diagonal entry -- for which we use a value that has the 
>> correct order of magnitude, but that is otherwise unspecified. In other 
>> words, what diagonal value you have there is unimportant as long as it 
>> is nonzero because these degrees of freedom don't couple with all of the 
>> other degrees of freedom, and as long as you overwrite the values of the 
>> computed solution through ConstraintMatrix::distribute(). 
>>
>> It is not surprising to me that you get different diagonal entries when 
>> you use two different methods. The question is whether you get a 
>> different *solution* vector after distributing to hanging nodes. 
>>
>> Best 
>>   W. 
>>
>> -- 
>>  
>> Wolfgang Bangerth  email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>



Re: [deal.II] Re: Can't configure with cmake

2024-02-07 Thread 'Daniel Jodlbauer' via deal.II User Group
Ran into the same problem. The bundled folder is not installed, since no 
bundled libraries are used, but the cmake targets still reference them.
As a workaround, you can just create the bundled folder.
I haven't had the time to dig through the cmake files to fix it, but for 
the moment I opened an issue here: 
https://github.com/dealii/dealii/issues/16605


bruno.t...@gmail.com wrote on Wednesday, February 7, 2024 at 14:47:22 UTC+1:

> Sean,
>
> If the configuration step errors out, there is no point in trying to 
> compile the code. You probably want to remove the deal.II files that have 
> been installed and reinstall the library.
>
> Best,
>
> Bruno
>
> On Tue, Feb 6, 2024 at 18:26, Sean Johnson  wrote:
>
>> Bruno,
>>
>> I apparently spoke too soon. It passed "make test" with 0 failed tests. 
>> However, when I run "cmake ." in the directory of step-1 I get this output:
>>
>>
>> -- Using the deal.II-9.5.2 installation found at /usr/local
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_deal_ii_add_test.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_deal_ii_initialize_cached_variables.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_deal_ii_invoke_autopilot.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_deal_ii_pickup_tests.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_deal_ii_query_git_information.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_deal_ii_setup_target.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_shell_escape_option_groups.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_target_compile_flags.cmake
>> -- Include macro 
>> /usr/local/share/deal.II/macros/macro_target_link_flags.cmake
>> -- Autopilot invoked
>> -- Run   $ make info  to print a detailed help message
>> -- Configuring done
>> CMake Error in CMakeLists.txt:
>>   Imported target "dealii::dealii_debug" includes non-existent path
>>
>> "/usr/local/include/deal.II/bundled"
>>
>>   in its INTERFACE_INCLUDE_DIRECTORIES.  Possible reasons include:
>>
>>   * The path was deleted, renamed, or moved to another location.
>>
>>   * An install or uninstall procedure did not complete successfully.
>>
>>   * The installation package was faulty and references files it does not
>>   provide.
>>
>>
>>
>> -- Generating done
>> CMake Generate step failed.  Build files cannot be regenerated correctly.
>>
>>
>>
>> Then when I try "make" I get this:
>>
>>
>>
>> [ 33%] Building CXX object CMakeFiles/step-1.dir/step-1.cc.o
>> In file included from /usr/local/include/deal.II/grid/tria.h:20,
>>  from 
>> /home/sean/dealii-9.5.2/examples/step-1/step-1.cc:22:
>> /usr/local/include/deal.II/base/config.h:588:12: fatal error: mpi.h: No 
>> such file or directory
>>   588 | #  include <mpi.h>
>>   |^~~
>> compilation terminated.
>> make[3]: *** [CMakeFiles/step-1.dir/build.make:76: 
>> CMakeFiles/step-1.dir/step-1.cc.o] Error 1
>> make[2]: *** [CMakeFiles/Makefile2:90: CMakeFiles/step-1.dir/all] Error 2
>> make[1]: *** [CMakeFiles/Makefile2:123: CMakeFiles/run.dir/rule] Error 2
>> make: *** [Makefile:137: run] Error 2
>>
>> I am currently recompiling with fewer jobs to see if any errors pop up or 
>> if recompiling might help.
>>
>> Thanks,
>> Sean
>>
>> On Tuesday, February 6, 2024 at 3:17:42 PM UTC-7 bruno.t...@gmail.com 
>> wrote:
>>
>>> Sean,
>>>
>>> That's great to hear. We don't mark problems as solved. So there is 
>>> nothing to do.
>>>
>>> Best,
>>>
>>> Bruno
>>>
>>> On Tue, Feb 6, 2024 at 16:41, Sean Johnson  wrote:
>>>
 Thanks again!

 Using a newer version of Boost helped me get further and realize I 
 made another bone-headed mistake of compiling Kokkos as a static library.

 I am all compiled and passed all tests now.

 Thanks for your help and let me know if I have to do anything for this 
 to be marked as solved.

 Best,
 Sean

 On Tuesday, February 6, 2024 at 8:30:19 AM UTC-7 bruno.t...@gmail.com 
 wrote:

> Sean,
>
> I am not sure what's the issue. It's probably not the issue but make 
> sure that mpi is using nvcc_wrapper as the underlying compiler. If you 
> use 
> OpenMPI, you can use export `OMPI_CXX=nvcc_wrapper`. You could also try 
> to 
> use a newer version of Boost. The bundled version is pretty old. Since 
> you 
> are using Ubuntu 22.04, you can just use your package manager to install 
> a 
> newer version of boost.
>
> Best,
>
> Bruno
>
> On Tue, Feb 6, 2024 at 09:37, Sean Johnson  wrote:
>
>> Thanks,
>>
>> I tried lowering it down to 2 and it looks like two of them were 
>> caught a lot earlier. Now I don't make it past 6%. I erased everything 
>> from 
>> before and recompiled the cmake files and ran with just 2 jobs and again 
>> it 
>> stuck in the