Hermes,

The problem is that you are using a direct solver. Direct solvers require a
lot of memory because the inverse of a sparse matrix is generally not sparse.
If you use an LU decomposition, which I think is what MUMPS does, the L and U
factors need far more storage than the original sparse matrix. That's a lot
of memory! You will need to use a Krylov solver for a problem of this size.
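
For what it's worth, here is a minimal sketch of what a Krylov-based solve()
could look like, reusing the names from your function below. GMRES and the
BoomerAMG preconditioner are only placeholders I picked; which Krylov method
and which preconditioner actually work well depends on your operator, so
treat this as a starting point rather than a recipe:

template <int dim>
void LaplaceProblem<dim>::solve()
{
  PETScWrappers::MPI::Vector completely_distributed_solution(locally_owned_dofs,
                                                             mpi_communicator);

  // Stop once the residual has dropped by eight orders of magnitude
  // (an arbitrary choice for this sketch).
  SolverControl solver_control(system_rhs.size(), 1e-8 * system_rhs.l2_norm());

  // GMRES handles non-symmetric systems; SolverCG would be the choice for a
  // symmetric positive definite matrix.
  PETScWrappers::SolverGMRES solver(solver_control, mpi_communicator);

  // Algebraic multigrid (Hypre/BoomerAMG) as a preconditioner; it works well
  // for Laplace-like operators, but whether it suits your problem is an
  // assumption.
  PETScWrappers::PreconditionBoomerAMG preconditioner(system_matrix);

  solver.solve(system_matrix,
               completely_distributed_solution,
               system_rhs,
               preconditioner);

  constraints.distribute(completely_distributed_solution);
  locally_relevant_solution = completely_distributed_solution;
}

The memory use of such an iterative solver is essentially the matrix plus a
handful of vectors and whatever the preconditioner stores, instead of the LU
factors.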

Best,

Bruno

On Sun, Mar 6, 2022 at 07:19, Hermes Sampedro <hermesampe...@gmail.com> wrote:
>
> Dear Bruno,
>
> Thank you very much for the comments. The problem was that I was running in 
> Debug mode without knowing it. Now, after changing to Release, the assembly 
> time is considerably reduced.
>
> Moreover, I am experiencing another issue that I would like to ask about. My 
> mesh is built with hyper_cube() in 3D and 5 refinements, and the number of 
> DoFs is around 3 million. When running, I always get a memory issue and the 
> program stops. I realized that the problem is in the line that executes 
> solver.solve(system_matrix, completely_distributed_solution, system_rhs);
> I am using SparseMatrix and I do not fully understand where the problem could 
> come from. The matrices are initialized beforehand; what do you think could 
> produce a memory issue in the solver?
>
> Below is the full solver function:
>
> template <int dim>
> void LaplaceProblem<dim>::solve()
> {
>   PETScWrappers::MPI::Vector completely_distributed_solution(locally_owned_dofs,
>                                                              mpi_communicator);
>   SolverControl cn;
>   PETScWrappers::SparseDirectMUMPS solver(cn, mpi_communicator);
>   solver.solve(system_matrix, completely_distributed_solution, system_rhs);
>   constraints.distribute(completely_distributed_solution);
>   locally_relevant_solution = completely_distributed_solution;
> }
>
>
> Thank you again for your help
> Regards
> H.
>
> On Thursday, March 3, 2022 at 15:13:30 UTC+1, bruno.t...@gmail.com 
> wrote:
>>
>> Hermes,
>>
>> There are a couple of things you could do, but they probably won't give you 
>> a significant speed-up. Are you sure that you are running in Release mode 
>> and not in Debug? Do you evaluate complicated functions in the assembly?
>> A couple of changes that could help:
>> - don't call fe.system_to_component_index(i).first and 
>> fe.system_to_component_index(j).first everywhere. Just define 
>> const unsigned int k = ... and const unsigned int m = ... once and use k 
>> and m. That might help the compiler with some optimizations.
>> - move the two ifs for the cell assembly outside the for loop over the 
>> quadrature points, similar to what you did for the boundaries. This could 
>> help quite a bit if the CPU often gets the branch prediction wrong (see the 
>> sketch below).
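>>
>> Roughly, something like this (component_i and component_j are just names I 
>> made up for the cached values; the "...." terms are your own):
>>
>> for (unsigned int i = 0; i < dofs_per_cell; ++i)
>>   {
>>     const unsigned int component_i = fe.system_to_component_index(i).first;
>>     for (unsigned int j = 0; j < dofs_per_cell; ++j)
>>       {
>>         const unsigned int component_j = fe.system_to_component_index(j).first;
>>
>>         if (component_i == component_j)
>>           for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
>>             cell_matrix(i, j) += ....  // same-component term
>>         else
>>           for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
>>             cell_matrix(i, j) += ....  // mixed-component term
>>       }
>>   }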
>>
>> Best,
>>
>> Bruno
>>
>> On Thursday, March 3, 2022 at 4:31:04 AM UTC-5 hermes...@gmail.com wrote:
>>>
>>> Dear all,
>>>
>>> I am experiencing long assembly times and I would like to ask whether this 
>>> is common or whether there is something wrong with my implementation.
>>>
>>> My model is built in a similar way to step-29 and step-40 (using complex 
>>> values and solving with a direct solver in a distributed parallel 
>>> implementation).
>>> Now I am running larger systems with 3.5 million DoFs, and the assembly 
>>> took 16 hours, while the solver function took much less.
>>>
>>> Below is the structure of my assemble_system() function; is there anything 
>>> that can be done to speed up the process?
>>>
>>> void Problem<dim>::assemble_system()
>>> {
>>>   // (the loop over cells, the FEValues/FEFaceValues setup, and the copy of
>>>   //  cell_matrix into the global matrix are omitted here)
>>>   for (unsigned int i = 0; i < dofs_per_cell; ++i)
>>>     {
>>>       for (unsigned int j = 0; j < dofs_per_cell; ++j)
>>>         {
>>>           // Cell terms
>>>           for (unsigned int q_point = 0; q_point < n_q_points; ++q_point)
>>>             {
>>>               if (fe.system_to_component_index(i).first ==
>>>                   fe.system_to_component_index(j).first)
>>>                 {
>>>                   cell_matrix(i, j) += ....
>>>                 }
>>>               if (fe.system_to_component_index(i).first !=
>>>                   fe.system_to_component_index(j).first)
>>>                 {
>>>                   cell_matrix(i, j) += ....
>>>                 }
>>>             }
>>>
>>>           // Boundary terms
>>>           if (fe.system_to_component_index(i).first ==
>>>               fe.system_to_component_index(j).first)
>>>             {
>>>               for (unsigned int face_no : GeometryInfo<dim>::face_indices())
>>>                 if (cell->face(face_no)->at_boundary() &&
>>>                     (cell->face(face_no)->boundary_id() == 0))
>>>                   {
>>>                     fe_face_values.reinit(cell, face_no);
>>>                     for (unsigned int q_point = 0; q_point < n_face_q_points; ++q_point)
>>>                       cell_matrix(i, j) += ....
>>>                   }
>>>             }
>>>           if (fe.system_to_component_index(i).first !=
>>>               fe.system_to_component_index(j).first)
>>>             {
>>>               for (unsigned int face_no : GeometryInfo<dim>::face_indices())
>>>                 {
>>>                   if (cell->face(face_no)->at_boundary() &&
>>>                       (cell->face(face_no)->boundary_id() == 0))
>>>                     {
>>>                       fe_face_values.reinit(cell, face_no);
>>>                       for (unsigned int q_point = 0; q_point < n_face_q_points; ++q_point)
>>>                         cell_matrix(i, j) += ....
>>>                     }
>>>                 }
>>>             }
>>>         }
>>>     }
>>> }
>>>
>>>
>>> Thank you very much.
>>> Regards,
>>> Hermes
>
