Re: [deal.II] Re: Memory usage using MPI/PETSc on single processor

2016-07-29 Thread Timo Heister
I think the reason is that step-17 uses an inefficient constructor for the system matrix: system_matrix.reinit (mpi_communicator, dof_handler.n_dofs(), dof_handler.n_dofs(), n_local_dofs, n_l
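For reference, a minimal sketch of the more economical setup used in step-40-style programs: build a distributed sparsity pattern first and reinit() the PETSc matrix from it, so only the actual nonzero entries are preallocated. Names such as dof_handler, constraints, mpi_communicator and system_matrix are assumed to be the usual class members; the exact SparsityTools::distribute_sparsity_pattern signature and the PETSc header name differ between deal.II releases.

    #include <deal.II/base/index_set.h>
    #include <deal.II/dofs/dof_tools.h>
    #include <deal.II/lac/dynamic_sparsity_pattern.h>
    #include <deal.II/lac/sparsity_tools.h>
    #include <deal.II/lac/petsc_parallel_sparse_matrix.h>  // newer releases: petsc_sparse_matrix.h

    // ... inside setup_system(), after dof_handler.distribute_dofs(fe):
    IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();
    IndexSet locally_relevant_dofs;
    DoFTools::extract_locally_relevant_dofs(dof_handler, locally_relevant_dofs);

    // Build the sparsity pattern only for the locally relevant rows, then
    // exchange the off-process entries each process will write to.
    DynamicSparsityPattern dsp(locally_relevant_dofs);
    DoFTools::make_sparsity_pattern(dof_handler, dsp, constraints,
                                    /*keep_constrained_dofs=*/false);
    SparsityTools::distribute_sparsity_pattern(
      dsp,
      dof_handler.n_locally_owned_dofs_per_processor(),
      mpi_communicator,
      locally_relevant_dofs);

    // reinit() from a sparsity pattern preallocates just the nonzeros,
    // instead of the much larger estimate made by the n_dofs()-based
    // constructor quoted above.
    system_matrix.reinit(locally_owned_dofs,
                         locally_owned_dofs,
                         dsp,
                         mpi_communicator);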

[deal.II] Re: Step-35 Poiseuille flow in a 3D pipe

2016-07-29 Thread Jiaqi ZHANG
Hello David Wells, Sorry to bother you, but I have a problem with Step-35. I posted the question a few days ago and no one answered, so I searched the mailing list and found that you have been using it a lot. I was wondering if you could help me; the following is my q

[deal.II] Re: Memory usage using MPI/PETSc on single processor

2016-07-29 Thread Pete Griffin
I did another plot (see attached) of memory usage vs. #DOFs with step-18. Like the other two, this is a 3d elasticity problem. The results in terms of memory usage were in line with step-8 and contrasted with step-17. I will try to understand step-18 well enough to transfer the MPI/PETSc stuff to step-

Re: [deal.II] long duration of the setup of step-40 like program

2016-07-29 Thread Vinetou Incucuna
Hello, thank you for the response and the advice. >> LA::Vector vec_old_solution(dof_handler_nse.n_dofs()); >> >> VectorTools::interpolate(dof_handler_nse, ZeroFunction<3>(dim + 1), >> vec_old_solution); >> >> old_solution_nse = vec_old_solution; > > > Are yo
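Written out, the quoted lines amount to the following sketch (dof_handler_nse, old_solution_nse, LA::Vector and dim are taken from the quoted post; the surrounding types are assumptions about the poster's code):

    #include <deal.II/base/function.h>
    #include <deal.II/numerics/vector_tools.h>

    // A serial vector of global size n_dofs() is created on every process,
    // the zero function (dim+1 components: velocity + pressure) is
    // interpolated into it, and the result is copied into the distributed
    // vector old_solution_nse.
    LA::Vector vec_old_solution(dof_handler_nse.n_dofs());
    VectorTools::interpolate(dof_handler_nse,
                             ZeroFunction<3>(dim + 1),
                             vec_old_solution);
    old_solution_nse = vec_old_solution;

    // If the intent is only a zero initial state, the global interpolation
    // can be skipped; assigning a scalar zeroes a deal.II vector directly:
    old_solution_nse = 0;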