Re: [deal.II] Issue encountered while solving Step-40 in 1 dimension

2022-08-22 Thread syed ansari
Thanks, Daniel. This is very helpful. Best Regards, Syed Ansari S. On Mon, Aug 22, 2022 at 7:28 PM Daniel Arndt wrote: > Syed, > > Yes, you should be able to use parallel::shared::Triangulation instead. > > Best, > Daniel > > On Sat, Aug 20, 2022 at 5:25 AM syed ansari wrote: > >> Thanks Danie

Re: [deal.II] get the partition of the system matrix A associated with the unconstrained dofs

2022-08-22 Thread Wolfgang Bangerth
On 8/22/22 10:08, Simon Wiesheier wrote: As stated, what I tried is to use the operator= according to LAPACKFullMatrix new_matrix = my_system_matrix . However, there is an error message "error: conversion from ‘dealii::SparseMatrix’ to non-scalar type ‘dealii::LAPACKFullMatrix’ requested    L
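A minimal sketch of one way to make such a conversion compile (assuming my_system_matrix is a dealii::SparseMatrix<double>; operator= does not convert between these matrix types, but LAPACKFullMatrix::copy_from can copy the entries):

  dealii::LAPACKFullMatrix<double> new_matrix(my_system_matrix.m(),
                                              my_system_matrix.n());
  new_matrix.copy_from(my_system_matrix); // element-wise copy from the sparse matrix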

Re: [deal.II] get the partition of the system matrix A associated with the unconstrained dofs

2022-08-22 Thread Simon Wiesheier
Thanks for your input. In the meantime, I replaced the matrix multiplication res = A^{-1}*B by solving p linear systems A*res[k] = B[k], k = 1,...,p, where p is the number of columns of the matrix B. "That's one way to go. FullMatrix::gauss_jordan() also computes the inverse of a matrix." As stated, what
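A minimal sketch of the gauss_jordan() route mentioned in the quote (assuming A and B are dealii::FullMatrix<double> objects with A square and invertible; the result is equivalent to the column-by-column solves described above):

  dealii::FullMatrix<double> A_inverse(A); // work on a copy ...
  A_inverse.gauss_jordan();                // ... since gauss_jordan() inverts in place
  dealii::FullMatrix<double> res(A.m(), B.n());
  A_inverse.mmult(res, B);                 // res = A^{-1} * B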

Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Wolfgang Bangerth
On 8/22/22 09:55, Uclus Heis wrote: Would it also be a possible solution to export my testvec as it is right now (which contains the global solution) but, instead of exporting with all the processes, call the print function only for one process? Yes. But that runs again into the same issue mentioned
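A minimal sketch of guarding the output with a single rank (assuming a communicator named mpi_communicator and that testvec on rank 0 really holds all values to be written; the caveat hinted at above still applies, since one process then has to hold the complete solution):

  if (dealii::Utilities::MPI::this_mpi_process(mpi_communicator) == 0)
    {
      std::ofstream outloop("solution.txt"); // hypothetical file name
      testvec.print(outloop, 9, true, false);
    }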

Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Uclus Heis
Dear Wolfgang, Thank you very much for the suggestion. Would it also be a possible solution to export my testvec as it is right now (which contains the global solution) but, instead of exporting with all the processes, call the print function only for one process? Thank you On Mon, 22 Aug 2022 at

Re: [deal.II] Re: MPI, synchronize processes

2022-08-22 Thread Wolfgang Bangerth
On 8/21/22 04:29, Uclus Heis wrote: testvec.print(outloop, 9, true, false); It is clear that the problem I have now is that I am exporting the completely_distributed_solution and that is not what I want. Could you please inform me how to obtain the locally owned solution? I cannot find the w
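One common way to restrict output to the locally owned part of a distributed vector is to loop over the locally owned index set; a minimal sketch (assuming a dof_handler and the completely_distributed_solution mentioned above, and not necessarily the approach given in the truncated reply):

  const dealii::IndexSet locally_owned = dof_handler.locally_owned_dofs();
  for (const auto index : locally_owned)
    {
      const double value = completely_distributed_solution(index);
      // write (index, value) to this process's own output file
    }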

Re: [deal.II] Memory error from utilities.cc

2022-08-22 Thread Wolfgang Bangerth
On 8/20/22 12:56, Raghunandan Pratoori wrote: for (unsigned int i = 0; i < ...; ++i) for (unsigned int j = 0; j < ...; ++j) { local_history_values_at_qpoints[i][j].reinit(qf_cell.size()); local_history_fe_values[i][j].reinit(history_fe.dofs_per_cell); history_field_strain[i][j].reinit(history_dof_handler.
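The quoted code is cut off; the corresponding pattern from the step-18 documentation looks roughly as follows (a sketch only; the dim-by-dim loop bounds and the trailing n_dofs() call are assumptions based on that tutorial, using the variable names from the message):

  std::vector<std::vector<dealii::Vector<double>>>
    history_field_strain(dim, std::vector<dealii::Vector<double>>(dim)),
    local_history_values_at_qpoints(dim, std::vector<dealii::Vector<double>>(dim)),
    local_history_fe_values(dim, std::vector<dealii::Vector<double>>(dim));

  for (unsigned int i = 0; i < dim; ++i)
    for (unsigned int j = 0; j < dim; ++j)
      {
        local_history_values_at_qpoints[i][j].reinit(qf_cell.size());
        local_history_fe_values[i][j].reinit(history_fe.dofs_per_cell);
        history_field_strain[i][j].reinit(history_dof_handler.n_dofs());
      }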

Re: [deal.II] get the partition of the system matrix A associated with the unconstrained dofs

2022-08-22 Thread Wolfgang Bangerth
On 8/19/22 13:14, Simon Wiesheier wrote: I also need the system matrix A for a second purpose, namely to compute a matrix multiplication: res = A^{-1} * B , where B is another matrix. -To be more precise, I need the inverse of the 19x19 submatrix corresponding to the unconstrained DoFs only -- n
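A minimal sketch of extracting such a submatrix (assuming an AffineConstraints<double> object named constraints, the dof_handler, and the assembled system_matrix; FullMatrix::extract_submatrix_from pulls the rows and columns belonging to the unconstrained DoFs into a dense matrix, which can then be inverted or factored):

  std::vector<dealii::FullMatrix<double>::size_type> unconstrained_dofs;
  for (dealii::types::global_dof_index i = 0; i < dof_handler.n_dofs(); ++i)
    if (!constraints.is_constrained(i))
      unconstrained_dofs.push_back(i);

  dealii::FullMatrix<double> submatrix(unconstrained_dofs.size(),
                                       unconstrained_dofs.size());
  submatrix.extract_submatrix_from(system_matrix,
                                   unconstrained_dofs,
                                   unconstrained_dofs);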

[deal.II] Re: Solving the linear system of equations using PETSc BlockSparseMatrix

2022-08-22 Thread Bruno Turcksin
Hi, If you search for "block solver" here https://dealii.org/developer/doxygen/deal.II/Tutorial.html, you will see all the tutorials that use block solvers. I think that only deal.II's own solvers support BlockSparseMatrix directly. Best, Bruno On Monday, August 22, 2022 at 9:02:28 AM UTC-4
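A minimal sketch of what "deal.II's own solvers" means here (assuming K is a PETScWrappers::MPI::BlockSparseMatrix and Q, R are matching PETScWrappers::MPI::BlockVector objects; PreconditionIdentity is only a placeholder for a real block preconditioner like the ones built in the block-solver tutorials):

  dealii::SolverControl solver_control(1000, 1e-10 * R.l2_norm());
  dealii::SolverGMRES<dealii::PETScWrappers::MPI::BlockVector> solver(solver_control);
  solver.solve(K, Q, R, dealii::PreconditionIdentity());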

Re: [deal.II] Issue encountered while solving Step-40 in 1 dimension

2022-08-22 Thread Daniel Arndt
Syed, Yes, you should be able to use parallel::shared::Triangulation instead. Best, Daniel On Sat, Aug 20, 2022 at 5:25 AM syed ansari wrote: > Thanks Daniel for your quick reply. Is it possible to solve the > same problem with parallel::shared::Triangulation for dim ==1? > > On Fri, 19 Aug 20
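A minimal sketch of the suggested replacement (the mesh setup is an assumption for illustration; unlike parallel::distributed::Triangulation, the shared triangulation does not rely on p4est and therefore also works for dim == 1):

  dealii::parallel::shared::Triangulation<1> triangulation(MPI_COMM_WORLD);
  dealii::GridGenerator::hyper_cube(triangulation, 0., 1.);
  triangulation.refine_global(5);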

[deal.II] Solving the linear system of equations using PETSc BlockSparseMatrix

2022-08-22 Thread Masoud Ahmadi
Dear All, The following system of equations, K Q = R (block structure shown in the attached image: Screenshot from 2022-08-22 13-45-45.png), was solved, using BlockSparseMatrix to form the tangent matrix K. It was solved by: SparseDirectUMFPACK A_direct; A_direct.initialize(K); A_direct.vmult(Q_stp, R); Now, I'm trying to run my c
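For reference, the serial direct-solver path described above in slightly more explicit form (a sketch assuming K is a dealii::BlockSparseMatrix<double> and Q_stp, R are dealii::BlockVector<double> objects with a matching block structure):

  dealii::SparseDirectUMFPACK A_direct;
  A_direct.initialize(K);   // factorize the whole block system
  A_direct.vmult(Q_stp, R); // Q_stp = K^{-1} R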