[deal.II] Error loading PETSc shared library in step-40 when distributing cell matrix and rhs separately

2022-01-19 Thread Lucas Myers
Hi folks, for debugging purposes I need to use the overload of `distribute_local_to_global` that distributes the cell vector and cell matrix separately (this one in the docs). However, when I try
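
For context, here is a minimal sketch (not the poster's actual code) of what calling the matrix-only and vector-only overloads of `AffineConstraints::distribute_local_to_global` looks like inside a step-40 style assembly loop. The object names (`constraints`, `system_matrix`, `system_rhs`, `cell_matrix`, `cell_rhs`, `local_dof_indices`, `fe_values`, `dof_handler`) follow the tutorial and are assumed to be set up as there, along with the usual `using namespace dealii;`:

```cpp
#include <deal.II/lac/affine_constraints.h>
#include <deal.II/lac/petsc_sparse_matrix.h>
#include <deal.II/lac/petsc_vector.h>

// Fragment of assemble_system(): distribute the cell contributions to the
// global PETSc matrix and right-hand side in two separate calls instead of
// the combined matrix-and-rhs overload used in step-40.
for (const auto &cell : dof_handler.active_cell_iterators())
  if (cell->is_locally_owned())
    {
      cell_matrix = 0.;
      cell_rhs    = 0.;

      fe_values.reinit(cell);
      // ... fill cell_matrix and cell_rhs as in step-40 ...

      cell->get_dof_indices(local_dof_indices);

      // Matrix-only overload:
      constraints.distribute_local_to_global(cell_matrix,
                                             local_dof_indices,
                                             system_matrix);
      // Vector-only overload:
      constraints.distribute_local_to_global(cell_rhs,
                                             local_dof_indices,
                                             system_rhs);
    }
```

Note that with inhomogeneous constraints the combined matrix-and-rhs overload is generally preferable, since it uses the local matrix entries when eliminating constrained degrees of freedom from the right-hand side.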

Re: [deal.II] PETScWrappers::MPI::BlockSparseMatrix

2022-01-19 Thread Wolfgang Bangerth
On 1/19/22 03:41, Мария Бронзова wrote: I checked the member function documentation for MUMPS and found only a solve() function that takes a const MatrixBase, but I guess in my case a const BlockMatrixBase would make more sense. Could you please tell me,
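
For reference, a minimal sketch of the non-block interface being discussed; it assumes PETSc was configured with MUMPS support, a `using namespace dealii;`, and that the system lives in non-block `PETScWrappers::MPI::SparseMatrix` / `PETScWrappers::MPI::Vector` objects. The names `system_matrix`, `completely_distributed_solution`, `system_rhs`, and `mpi_communicator` are placeholders in the style of the tutorials, not taken from this thread:

```cpp
#include <deal.II/lac/petsc_solver.h>
#include <deal.II/lac/solver_control.h>

// Fragment of a solve() routine: SparseDirectMUMPS::solve() accepts a
// PETScWrappers::MatrixBase, so a block system would first have to be
// assembled into (or copied to) a single monolithic matrix.
SolverControl                    solver_control;
PETScWrappers::SparseDirectMUMPS solver(solver_control, mpi_communicator);

// set_symmetric_mode(true) would tell MUMPS to exploit symmetry;
// leave it false for general (non-SPD) matrices.
solver.set_symmetric_mode(false);

solver.solve(system_matrix, completely_distributed_solution, system_rhs);
```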

Re: [deal.II] PETScWrappers::MPI::BlockSparseMatrix

2022-01-19 Thread Мария Бронзова
Dear Wolfgang, Thank you very much for the explanation. PETSc came preinstalled, so I had not had a chance to see that there are different compile options. Now it is clear! Anyway, I ran into a further issue with the script that I want to parallelize. I am dealing with non-SPD matrices in ter