Re: [deal.II] Re: Assemble function, long time

2022-03-14 Thread Bruno Turcksin
Hermes, Sorry, I don't use PETSc. Maybe someone else can help you. Best, Bruno On Mon, Mar 14, 2022 at 05:42, Hermes Sampedro wrote: > Dear Bruno, > > I have been reading the examples and documents you pointed out. I tried to > use SolverGMRES with PreconditionILU. However, I am getting a

Re: [deal.II] Re: Assemble function, long time

2022-03-14 Thread Hermes Sampedro
Dear Bruno, I have been reading the examples and documents you pointed out. I tried to use SolverGMRES with PreconditionILU. However, I am getting a runtime error that I cannot really understand when calling PETScWrappers::PreconditionILU preconditioner(system_matrix). It seems t
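
For anyone finding this thread later, a minimal sketch of the intended call sequence, assuming the usual system_matrix / solution / system_rhs / mpi_communicator members from the PETSc-based tutorials (placeholder names, not taken from Hermes' code):

    SolverControl solver_control(solution.size(), 1e-8 * system_rhs.l2_norm());
    PETScWrappers::SolverGMRES solver(solver_control, mpi_communicator);
    // ILU preconditioner built from the assembled system matrix:
    PETScWrappers::PreconditionILU preconditioner(system_matrix);
    solver.solve(system_matrix, solution, system_rhs, preconditioner);

One possible cause of the runtime error: PETSc's built-in ILU factorization is serial, so constructing PreconditionILU from a matrix distributed over several MPI processes typically fails. In parallel runs, PETScWrappers::PreconditionBlockJacobi (which applies ILU per process block, as in step-17) is the usual substitute.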

Re: [deal.II] Re: Assemble function, long time

2022-03-11 Thread Hermes Sampedro
Dear Bruno and Wolfgang, thank you very much for your comments and help, they are very helpful. Actually, I think that is what I am experiencing. When running my current direct solver on a system with 15 elements per direction (4th polynomial order, 0.5 million DoFs), the solver takes 50 secon

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Wolfgang Bangerth
On 3/10/22 07:00, Hermes Sampedro wrote: > I am experiencing long computational times with the solver function. I am trying to use DoFRenumbering::Cuthill_McKee(dof_handler) and DoFRenumbering::boost::Cuthill_McKee(dof_handler, false, false) but I get even higher computational times. Am I doing somet
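
For reference, a minimal sketch of where the renumbering call goes, assuming the usual dof_handler / fe members: it must run right after distribute_dofs() and before the sparsity pattern is built and the matrix assembled, otherwise it has no effect on the factorization:

    dof_handler.distribute_dofs(fe);
    // reorder DoFs to reduce matrix bandwidth / fill-in:
    DoFRenumbering::Cuthill_McKee(dof_handler);
    // ...only now build the sparsity pattern and assemble the matrix.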

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Bruno Turcksin
Yes, you should use your system_matrix. AdditionalData can be used to modify the parameters used by ILU. The interface of PreconditionILU should work very similarly to BlockJacobi; see https://dealii.org/current/doxygen/deal.II/step_17.html#ElasticProblemsolve There are several tutorials that use p
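
A sketch of the AdditionalData usage, with system_matrix assumed as above; the levels parameter is the ILU fill level, where 0 gives ILU(0) and larger values give a stronger but more expensive preconditioner:

    PETScWrappers::PreconditionILU::AdditionalData data(/*levels=*/1);
    PETScWrappers::PreconditionILU preconditioner(system_matrix, data);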

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Hermes Sampedro
Dear Bruno, Thank you very much, I will try this. One last question, if it is not too much to ask, is about the PreconditionILU constructor: PETScWrappers::PreconditionILU::PreconditionILU(const MatrixBase & mat

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Bruno Turcksin
If your matrix is symmetric positive definite, you use CG. Otherwise, you use GMRES. Here is the page for ILU
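
In the PETSc wrappers, the two choices look like this (a sketch; solver_control and mpi_communicator are assumed placeholders):

    // symmetric positive definite matrix -> CG
    PETScWrappers::SolverCG cg(solver_control, mpi_communicator);
    // general, e.g. non-symmetric, matrix -> GMRES
    PETScWrappers::SolverGMRES gmres(solver_control, mpi_communicator);
    // both are then used the same way:
    //   solver.solve(system_matrix, solution, system_rhs, preconditioner);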

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Hermes Sampedro
Thank you for your suggestions. Could you please suggest which function would work well as a Krylov solver? I cannot find examples. My current code is implemented using PETSc (for the sparse matrix, solver, etc.). I can see that SLEPcWrappers::SolverKrylovSchur accepts PETSc matrices. Thank you aga

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Bruno Turcksin
Hermes, I think Cuthill-McKee only works on symmetric matrices; is your matrix symmetric? Also, the goal of Cuthill-McKee is to help with the fill-in of the matrix. There is no guarantee that it helps with performance. If you don't know which preconditioner to use, you can use ILU (Incomplete L

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Hermes Sampedro
Hi Bruno, Yes, for now, I have to use a direct solver due to the preconditioner. I am experiencing long computational times with the solver function. I am trying to use DoFRenumbering::Cuthill_McKee(dof_handler) and DoFRenumbering::boost::Cuthill_McKee(dof_handler, false, false), but I get even h

Re: [deal.II] Re: Assemble function, long time

2022-03-10 Thread Bruno Turcksin
Hermes, For large systems, Krylov solvers are faster and require less memory than direct solvers. Direct solvers scale poorly, in terms of memory and performance, with the number of unknowns. The only problem with Krylov solvers is that you need to use a good preconditioner. The choice of the prec

Re: [deal.II] Re: Assemble function, long time

2022-03-09 Thread Hermes Sampedro
Dear Bruno, Thank you again for your answer. I have now managed to solve a system of 3.5 million DoFs using the same solver I posted above, SparseDirectMUMPS. Now, in release mode, the assembly takes a few minutes instead of hours; however, the solver function takes approximately 1.5 h (p
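
For context, a minimal sketch of the direct-solver setup being timed here, again with placeholder names rather than the actual code:

    SolverControl solver_control;  // largely unused by a direct solver
    PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);
    solver.solve(system_matrix, solution, system_rhs);

The 1.5 h is typically spent almost entirely in the factorization step inside solve(), which is exactly the cost that grows super-linearly with the number of unknowns.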

Re: [deal.II] Re: Assemble function, long time

2022-03-07 Thread Bruno Turcksin
Hermes, The problem is that you are using a direct solver. Direct solvers require a lot of memory because the inverse of a sparse matrix is generally not sparse. If you use an LU decomposition, which I think MUMPS does, you need a dense matrix to store the factors. That's a lot of memory!
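
Back-of-envelope illustration of the memory argument: a fully dense factor of an N = 3.5 million unknown system in double precision would take N^2 x 8 bytes ≈ 10^14 bytes, i.e. on the order of 100 TB. Sparse direct solvers like MUMPS store only the fill-in rather than a truly dense factor, but that fill-in still grows much faster with N than the original sparse matrix does.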

[deal.II] Re: Assemble function, long time

2022-03-06 Thread Hermes Sampedro
Dear Bruno, Thank you very much for the comments. The problem was that I was running in Debug mode without realizing it. Now, after changing to Release mode, the assembly time is considerably reduced. Moreover, I am experiencing another issue that I would like to ask about. My mesh is built with hyper_cube()
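
For anyone hitting the same issue: with the CMake setup used by the deal.II tutorials, the build type can be switched with "make release" / "make debug", or by configuring with -DCMAKE_BUILD_TYPE=Release. Debug mode can easily cost a large factor in runtime because of the bounds and consistency checks it enables.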

[deal.II] Re: Assemble function, long time

2022-03-03 Thread Bruno Turcksin
Hermes, There are a couple of things you could do, but they probably won't give you a significant speed-up. Are you sure that you are running in Release mode and not Debug? Do you evaluate complicated functions in the assembly? A couple of changes that could help: - don't use fe.system_to_com
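
A hedged sketch of what that first suggestion points at (the loop structure is assumed, not taken from the actual code): look up each shape function's vector component once, outside the quadrature loop, instead of calling fe.system_to_component_index() at every quadrature point:

    // once per assembly, before the cell/quadrature loops:
    std::vector<unsigned int> component(fe.dofs_per_cell);
    for (unsigned int i = 0; i < fe.dofs_per_cell; ++i)
      component[i] = fe.system_to_component_index(i).first;
    // in the inner loops, use component[i] instead of repeating the lookup.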