Dear Pai,

I'm very interested in solving a problem with characteristics very similar to yours. Consequently, I ran your modified step-17.cc for 30*30*30 cells, and for me it takes 6.43s with CG and -np 2 instead of your 0.39s. Do you have any idea where this huge difference might come from? Could it be due to some optimized libraries that you are using? Right now I'm using the system defaults that I can find in the OS repositories. I would really appreciate it if you could give me some hint about this, or tell me which strategy you found most effective for solving the same elastic system many times with different right-hand sides.
Best regards,
David

On Saturday, 1 September 2018 04:56:32 UTC+2, Pai Liu wrote:
>
> Hi Wolfgang,
>
> Thank you so much for all your detailed explanation. Now I have a general
> idea of what all these things are and what I should do for my problem (a
> multiple load case problem). I really appreciate your kind help.
>
>> BlockJacobi builds an LU decomposition of that part of the matrix that is
>> stored locally. So it's a really expensive preconditioner to build (which I
>> gather you don't include in the time?) but then solves the problem in only a
>> few iterations. If you want a fair comparison, you need to include the time
>> to build the preconditioner.
>
> However, I would really like to figure out the timing problem you mentioned.
> Here I attach the step-17.cc I modified.
> *I just added timing and modified the meshing code (to generate 30*30*30
> cells), and nothing else.*
> I added timing to the member function solve() like the following and
> changed nothing else:
>
> template <int dim>
> unsigned int ElasticProblem<dim>::solve()
> {
>   *TimerOutput::Scope t(computing_timer, "solve");*
>
>   SolverControl solver_control(solution.size(),
>                                1e-8 * system_rhs.l2_norm());
>   PETScWrappers::SolverCG cg(solver_control, mpi_communicator);
>
>   PETScWrappers::PreconditionBlockJacobi preconditioner(system_matrix);
>
>   cg.solve(system_matrix, solution, system_rhs, preconditioner);
>
>   Vector<double> localized_solution(solution);
>   hanging_node_constraints.distribute(localized_solution);
>   solution = localized_solution;
>
>   return solver_control.last_step();
> }
>
> *Thus I think the timing includes both the time to build the
> preconditioner and the time to solve Ax=b.*
>
> *And when I run this file with mpirun -np 2 ./step-17, it really just
> takes about 0.4s to solve a 30*30*30 cells problem (with all the boundary
> conditions unchanged from the original step-17 example).*
>
> Best,
> Pai

--
The deal.II project is located at http://www.dealii.org/
For mailing list/forum options, see https://groups.google.com/d/forum/dealii?hl=en
---
You received this message because you are subscribed to the Google Groups "deal.II User Group" group.
To unsubscribe from this group and stop receiving emails from it, send an email to dealii+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.