If your matrix is symmetric positive definite, use CG
<https://dealii.org/current/doxygen/deal.II/classPETScWrappers_1_1SolverCG.html>.
Otherwise, use GMRES
<https://dealii.org/current/doxygen/deal.II/classPETScWrappers_1_1SolverGMRES.html>.
Here is the page for ILU:
<https://dealii.org/current/doxygen/deal.II/classPETScWrappers_1_1PreconditionILU.html>
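
Putting these together, a minimal (untested) sketch for your solve()
function, reusing the variable names from your code, could look like the
following. Note that plain ILU may not work on an MPI-distributed matrix;
in that case PETScWrappers::PreconditionBlockJacobi is a common parallel
alternative.

template <int dim>
void LaplaceProblem<dim>::solve()
{
  PETScWrappers::MPI::Vector completely_distributed_solution(
    locally_owned_dofs, mpi_communicator);

  // Iterative solvers need a stopping criterion: here at most 1000
  // iterations or a residual below 1e-8 * ||b||.
  SolverControl solver_control(1000, 1e-8 * system_rhs.l2_norm());

  // GMRES handles general (non-symmetric) matrices; use
  // PETScWrappers::SolverCG instead if the matrix is symmetric
  // positive definite.
  PETScWrappers::SolverGMRES solver(solver_control, mpi_communicator);

  // ILU preconditioner built from the system matrix.
  PETScWrappers::PreconditionILU preconditioner(system_matrix);

  solver.solve(system_matrix,
               completely_distributed_solution,
               system_rhs,
               preconditioner);

  constraints.distribute(completely_distributed_solution);
  locally_relevant_solution = completely_distributed_solution;
}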

Bruno

On Thu, Mar 10, 2022 at 09:18, Hermes Sampedro <hermesampe...@gmail.com> wrote:
>
> Thank you for your suggestions. Could you please suggest which functions
> would work well for a Krylov solver? I cannot see any examples.
> My current code is implemented using PETSc (for the sparse matrix, solver,
> etc.). I can see that SLEPcWrappers::SolverKrylovSchur accepts PETSc
> matrices.
>
>
> Thank you again
>
> On Thursday, March 10, 2022 at 15:12:19 UTC+1, bruno.t...@gmail.com wrote:
>>
>> Hermes,
>>
>> I think Cuthill-McKee only works on symmetric matrices; is your matrix
>> symmetric? Also, the goal of Cuthill-McKee is to reduce the fill-in of
>> the matrix; there is no guarantee that it helps with performance. If
>> you don't know which preconditioner to use, you can use ILU (incomplete
>> LU decomposition): essentially a direct solver in which all the "small"
>> entries of the factors are dropped. It's not the best preconditioner,
>> but it lets you control how much time you spend in the "direct solver".
>> The problem with direct solvers is that there is not much you can do to
>> speed them up. In practice, everybody uses Krylov solvers because of
>> exactly the problems you are encountering now.
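>>
>> (If you use the deal.II wrapper for PETSc's ILU, I believe the amount of
>> fill, and therefore how much the preconditioner behaves like a "direct
>> solver", can be tuned through its AdditionalData argument, roughly like
>> this:
>>
>>   // 1 level of fill: more accurate and more expensive than the default 0
>>   PETScWrappers::PreconditionILU::AdditionalData ilu_data(1);
>>   PETScWrappers::PreconditionILU preconditioner(system_matrix, ilu_data);
>>
>> but check the PreconditionILU documentation for the exact interface.)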
>>
>> Best,
>>
>> Bruno
>>
>> On Thu, Mar 10, 2022 at 09:00, Hermes Sampedro <hermes...@gmail.com> wrote:
>> >
>> > Hi Bruno,
>> >
>> > Yes, for now, I have to use a direct solver due to the preconditioner.
>> > I am experiencing long computational times with the solver function.
>> > I am trying to use DoFRenumbering::Cuthill_McKee(dof_handler) and
>> > DoFRenumbering::boost::Cuthill_McKee(dof_handler, false, false),
>> > but I get even higher computational times. Am I doing something wrong?
>> >
>> > In the setup_system() function I do:
>> > dof_handler.distribute_dofs(fe);
>> > DoFRenumbering::Cuthill_McKee(dof_handler);
>> >
>> > Then the solver is:
>> > void LaplaceProblem<dim>::solve()
>> > {
>> >   PETScWrappers::MPI::Vector completely_distributed_solution(
>> >     locally_owned_dofs, mpi_communicator);
>> >   SolverControl cn;
>> >   PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
>> >   solver.solve(system_matrix, completely_distributed_solution, system_rhs);
>> >   constraints.distribute(completely_distributed_solution);
>> >   locally_relevant_solution = completely_distributed_solution;
>> > }
>> >
>> > Thank you
>> > Regards,
>> > H
>> >
>> > On Thursday, March 10, 2022 at 14:54:13 UTC+1, bruno.t...@gmail.com wrote:
>> >>
>> >> Hermes,
>> >>
>> >> For large systems, Krylov solvers are faster and require less memory
>> >> than direct solvers. Direct solvers scale poorly, in terms of memory
>> >> and performance, with the number of unknowns. The only problem with
>> >> Krylov solvers is that you need to use a good preconditioner. The
>> >> choice of the preconditioner depends on the system that you want to
>> >> solve.
>> >>
>> >> Best,
>> >>
>> >> Bruno
>> >>
>> >> On Thu, Mar 10, 2022 at 02:51, Hermes Sampedro <hermes...@gmail.com> wrote:
>> >> >
>> >> > Dear Bruno,
>> >> >
>> >> > Thank you again for your answer.
>> >> >
>> >> > I have now managed to solve a system of 3.5 million DoFs using the
>> >> > same solver as I posted above, SparseDirectMUMPS. Now, in Release
>> >> > mode, the assembling takes a few minutes instead of hours; however,
>> >> > the solver function takes approximately 1.5 h (per frequency
>> >> > iteration) using 40 processes in parallel (similar to step-40).
>> >> >
>> >> > I was expecting faster performance when running in parallel with 40
>> >> > processes, especially because I need to run for several frequencies.
>> >> > I would like to ask whether you would also expect faster performance.
>> >> > Would that be solved by using the solver that you suggested (Krylov)?
>> >> >
>> >> >
>> >> > Thank you
>> >> >
>> >> > Regards,
>> >> >
>> >> > H
>> >> >
>> >> >
>> >> > On Monday, March 7, 2022 at 15:04:19 UTC+1, bruno.t...@gmail.com wrote:
>> >> >>
>> >> >> Hermes,
>> >> >>
>> >> >> The problem is that you are using a direct solver. Direct solvers
>> >> >> require a lot of memory because the inverse of a sparse matrix is
>> >> >> generally not sparse. If you use an LU decomposition, which I think
>> >> >> MUMPS does, you need what is effectively a dense matrix to store the
>> >> >> factors. That's a lot of memory! You will need to use a Krylov
>> >> >> solver for a problem of this size.
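>> >> >>
>> >> >> As a rough, back-of-the-envelope illustration (assuming a fully dense
>> >> >> complex-valued factor and, say, ~100 nonzeros per row of the sparse
>> >> >> matrix; the real MUMPS factors are sparse with fill-in, so the true
>> >> >> number is smaller, but it still grows very quickly):
>> >> >>
>> >> >>   n = 3.5e6 unknowns
>> >> >>   dense factor:  n^2 * 16 bytes ~ 1.2e13 * 16 ~ 2e14 bytes ~ 200 TB
>> >> >>   sparse matrix: n * 100 * 16 bytes ~ 5.6e9 bytes ~ 6 GB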
>> >> >>
>> >> >> Best,
>> >> >>
>> >> >> Bruno
>> >> >>
>> >> >> On Sun, Mar 6, 2022 at 07:19, Hermes Sampedro <hermes...@gmail.com> wrote:
>> >> >> >
>> >> >> > Dear Bruno,
>> >> >> >
>> >> >> > Thank you very much for the comments. The problem was that I was
>> >> >> > running in Debug mode without realizing it. Now, after switching to
>> >> >> > Release, the assembling time is considerably reduced.
>> >> >> >
>> >> >> > Moreover, I am experiencing another issue that I would like to ask
>> >> >> > about. My mesh is built with hyper_cube() in 3D with 5 refinements.
>> >> >> > The number of DoFs is around 3 million. When running, I always get
>> >> >> > a memory problem and the program stops. I realized that the problem
>> >> >> > is in the line that executes
>> >> >> > solver.solve(system_matrix, completely_distributed_solution, system_rhs);
>> >> >> > I am using SparseMatrix and I do not fully understand where the
>> >> >> > problem could come from. The matrices are initialized beforehand;
>> >> >> > what do you think could cause a memory issue in the solver?
>> >> >> >
>> >> >> > Below is the full solver function:
>> >> >> >
>> >> >> > template <int dim>
>> >> >> > void LaplaceProblem<dim>::solve()
>> >> >> > {
>> >> >> >   PETScWrappers::MPI::Vector completely_distributed_solution(
>> >> >> >     locally_owned_dofs, mpi_communicator);
>> >> >> >   SolverControl cn;
>> >> >> >   PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
>> >> >> >   solver.solve(system_matrix, completely_distributed_solution, system_rhs);
>> >> >> >   constraints.distribute(completely_distributed_solution);
>> >> >> >   locally_relevant_solution = completely_distributed_solution;
>> >> >> > }
>> >> >> >
>> >> >> >
>> >> >> > Thank you again for your help
>> >> >> > Regards
>> >> >> > H.
>> >> >> >
>> >> >> > On Thursday, March 3, 2022 at 15:13:30 UTC+1, bruno.t...@gmail.com wrote:
>> >> >> >>
>> >> >> >> Hermes,
>> >> >> >>
>> >> >> >> There are a couple of things that you could do, but they probably
>> >> >> >> won't give you a significant speed-up. Are you sure that you are
>> >> >> >> running in Release mode and not in Debug? Do you evaluate
>> >> >> >> complicated functions in the assembly?
>> >> >> >> A couple of changes that could help:
>> >> >> >> - don't call fe.system_to_component_index(i).first and
>> >> >> >> fe.system_to_component_index(j).first everywhere. Just define
>> >> >> >> const k = ... and const m = ... and use k and m. That might help
>> >> >> >> the compiler with some optimizations.
>> >> >> >> - move the two ifs for the cell assembly outside the for loop over
>> >> >> >> the quadrature points, similar to what you did for the boundaries
>> >> >> >> (see the sketch below). This could potentially help quite a bit if
>> >> >> >> the CPU often gets the branch prediction wrong.
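>> >> >> >>
>> >> >> >> A minimal sketch of what I mean for the cell term, reusing your
>> >> >> >> variable names (the "...." still stands for your actual integrand):
>> >> >> >>
>> >> >> >> for (unsigned int i = 0; i < dofs_per_cell; ++i)
>> >> >> >>   {
>> >> >> >>     const unsigned int k = fe.system_to_component_index(i).first;
>> >> >> >>     for (unsigned int j = 0; j < dofs_per_cell; ++j)
>> >> >> >>       {
>> >> >> >>         const unsigned int m = fe.system_to_component_index(j).first;
>> >> >> >>         if (k == m)
>> >> >> >>           {
>> >> >> >>             for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
>> >> >> >>               cell_matrix(i, j) += ....   // same-component term
>> >> >> >>           }
>> >> >> >>         else
>> >> >> >>           {
>> >> >> >>             for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
>> >> >> >>               cell_matrix(i, j) += ....   // different-component term
>> >> >> >>           }
>> >> >> >>       }
>> >> >> >>   }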
>> >> >> >>
>> >> >> >> Best,
>> >> >> >>
>> >> >> >> Bruno
>> >> >> >>
>> >> >> >> On Thursday, March 3, 2022 at 4:31:04 AM UTC-5 hermes...@gmail.com wrote:
>> >> >> >>>
>> >> >> >>> Dear all,
>> >> >> >>>
>> >> >> >>> I am experiencing long times when assembling the system, and I
>> >> >> >>> would like to ask whether this is common or there is something
>> >> >> >>> wrong with my implementation.
>> >> >> >>>
>> >> >> >>> My model is built similarly to step-29 and step-40 (using complex
>> >> >> >>> values and solving with a direct solver in a distributed parallel
>> >> >> >>> implementation).
>> >> >> >>> Now I am running larger systems with 3.5 million DoFs, and the
>> >> >> >>> assembling took 16 h, while the solver function took much less.
>> >> >> >>>
>> >> >> >>> I can show the structure of my assemble_system() function to ask
>> >> >> >>> whether there is something that can be done to speed up the
>> >> >> >>> process:
>> >> >> >>>
>> >> >> >>> void Problem<dim>::assemble_system()
>> >> >> >>> {
>> >> >> >>>   for (unsigned int i = 0; i < dofs_per_cell; ++i)
>> >> >> >>>     {
>> >> >> >>>       for (unsigned int j = 0; j < dofs_per_cell; ++j)
>> >> >> >>>         {
>> >> >> >>>           for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
>> >> >> >>>             {
>> >> >> >>>               if (fe.system_to_component_index(i).first ==
>> >> >> >>>                   fe.system_to_component_index(j).first)
>> >> >> >>>                 {
>> >> >> >>>                   cell_matrix(i, j) += ....
>> >> >> >>>                 }
>> >> >> >>>               if (fe.system_to_component_index(i).first !=
>> >> >> >>>                   fe.system_to_component_index(j).first)
>> >> >> >>>                 {
>> >> >> >>>                   cell_matrix(i, j) += ....
>> >> >> >>>                 }
>> >> >> >>>             }
>> >> >> >>>
>> >> >> >>>           // Boundaries
>> >> >> >>>           if (fe.system_to_component_index(i).first ==
>> >> >> >>>               fe.system_to_component_index(j).first)
>> >> >> >>>             {
>> >> >> >>>               for (unsigned int face_no : GeometryInfo<dim>::face_indices())
>> >> >> >>>                 if (cell->face(face_no)->at_boundary() &&
>> >> >> >>>                     (cell->face(face_no)->boundary_id() == 0))
>> >> >> >>>                   {
>> >> >> >>>                     fe_face_values.reinit(cell, face_no);
>> >> >> >>>                     for (unsigned int q_point = 0; q_point < n_face_q_points; ++q_point)
>> >> >> >>>                       cell_matrix(i, j) += ....
>> >> >> >>>                   }
>> >> >> >>>             }
>> >> >> >>>           if (fe.system_to_component_index(i).first !=
>> >> >> >>>               fe.system_to_component_index(j).first)
>> >> >> >>>             {
>> >> >> >>>               for (unsigned int face_no : GeometryInfo<dim>::face_indices())
>> >> >> >>>                 {
>> >> >> >>>                   if (cell->face(face_no)->at_boundary() &&
>> >> >> >>>                       (cell->face(face_no)->boundary_id() == 0))
>> >> >> >>>                     {
>> >> >> >>>                       fe_face_values.reinit(cell, face_no);
>> >> >> >>>                       for (unsigned int q_point = 0; q_point < n_face_q_points; ++q_point)
>> >> >> >>>                         cell_matrix(i, j) += ....
>> >> >> >>>                     }
>> >> >> >>>                 }
>> >> >> >>>             }
>> >> >> >>>         }
>> >> >> >>>     }
>> >> >> >>>
>> >> >> >>>
>> >> >> >>> Thank you very much.
>> >> >> >>> Regards,
>> >> >> >>> Hermes
