Dear deal.ii community,

Apologies for so many questions lately... but here is another one. This one is a bit technical and is about finding a good solver. In simplified form the problem can be stated as follows:
I implemented a PDE solver for the 3D vector Laplacian *curl(curl u) - grad(div u) = f*. I use a Nedelec-Raviart-Thomas pairing to solve the mixed form, which reads

*sigma - curl u = 0*
*curl sigma - grad(div u) = f*

You can think of this as a Helmholtz decomposition of the vector field *f*. I am using Nedelec elements for *sigma* and Raviart-Thomas elements for *u*. Fine so far. Apart from the fact that this is technically not so easy on certain domains with holes etc. (forget about non-trivial harmonic forms if that tells you something - I don't have them here), it seems quite challenging to come up with a good solver (and what "good" means may depend on *f* as well).

The system that needs to be solved formally reads

*(P)    A sigma + B^T u = 0*
*       B sigma + C u   = f*

*A* is essentially a mass matrix, so it is positive definite (it corresponds to *<tau, sigma>* in *H(curl)*), and *C* is positive semi-definite (it corresponds to *<div v, div u>* in *H(div)*).

The first thing that crossed my mind was a Schur complement solver. A priori there are two options:

1.) Solve for *u* first and then for *sigma*, i.e. invert *A* and use the Schur complement *S = -B*A^{-1}*B^T + C*.

2.) The other way around, i.e. *S = -B^T*C^{-1}*B + A*. My gut feeling tells me that this is not really an option, since *C* is either very ill-conditioned or not invertible at all: it is symmetric and has at least one zero eigenvalue. Thinking about it, this formally corresponds to *C ~ (grad div)^{-1}* (and I am of course having convergence issues).

Using approach 1.) I get timing results like this (using MPI with PETSc, AMG preconditioning for *A*, on one machine with four cores, no preconditioning for the Schur complement *S = -B*A^{-1}*B^T + C* from 1.)):

Running using PETSc.
Number of active cells: 32768
Total number of cells: 21449 (on 6 levels)
Number of degrees of freedom: 205920 (104544+101376)
Iterative Schur complement solver converged in 177 iterations.
Outer solver completed.

+---------------------------------------------+------------+------------+
| Total wallclock time elapsed since start    |        37s |            |
|                                             |            |            |
| Section                         | no. calls |  wall time | % of total |
+---------------------------------+-----------+------------+------------+
| Schur complement solver (for u) |         1 |      27.2s |        74% |
| assembly                        |         1 |      6.61s |        18% |
| mesh generation                 |         1 |     0.163s |      0.44% |
| outer CG solver (for sigma)     |         1 |     0.136s |      0.37% |
| system and constraint setup     |         1 |      1.15s |       3.1% |
| vtu output                      |         1 |      1.68s |       4.5% |
+---------------------------------+-----------+------------+------------+

You will notice that the time for the Schur complement solver dominates. So either I did something conceptually wrong in the implementation (see the Schur complement class below) or I need to do a better job solving/preconditioning the system. Note that this Schur complement is usually not definite, in contrast to Stokes or Darcy systems.

Does anyone have an idea of how to attack systems like *(P)*? Are there optimal preconditioners? Are there relevant tutorials? Any hint would be appreciated. Parallelization is also a concern, since I would like to solve very large problems in 3D.
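To spell out option 1.) from above in formulas (LaTeX notation):

\[
\begin{pmatrix} A & B^T \\ B & C \end{pmatrix}
\begin{pmatrix} \sigma \\ u \end{pmatrix}
=
\begin{pmatrix} 0 \\ f \end{pmatrix}
\;\Longrightarrow\;
\sigma = -A^{-1} B^T u ,
\qquad
\bigl( C - B A^{-1} B^T \bigr)\, u = f ,
\]

so the Schur complement *S = -B*A^{-1}*B^T + C* acts on the *u* unknowns, and *sigma* is recovered afterwards from *A sigma = -B^T u*.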
Thanks in advance and best regards,
Konrad :-)

Here is the Schur complement code:

template <typename BlockMatrixType, typename VectorType, typename PreconditionerType>
class SchurComplementMPI : public Subscriptor
{
private:
  using BlockType = typename BlockMatrixType::BlockType;

public:
  SchurComplementMPI (const BlockMatrixType                               &system_matrix,
                      const InverseMatrix<BlockType, PreconditionerType> &relevant_inverse_matrix,
                      const std::vector<IndexSet>                         &owned_partitioning,
                      MPI_Comm                                             mpi_communicator);

  void vmult (VectorType       &dst,
              const VectorType &src) const;

private:
  const SmartPointer<const BlockMatrixType>                              system_matrix;
  const SmartPointer<const InverseMatrix<BlockType, PreconditionerType>> relevant_inverse_matrix;

  const std::vector<IndexSet> &owned_partitioning;
  MPI_Comm                     mpi_communicator;

  // Scratch vectors: tmp1, tmp2 live on the sigma partition, tmp3 on the u partition.
  mutable VectorType tmp1, tmp2, tmp3;
};


template <typename BlockMatrixType, typename VectorType, typename PreconditionerType>
SchurComplementMPI<BlockMatrixType, VectorType, PreconditionerType>::SchurComplementMPI(
  const BlockMatrixType                               &system_matrix,
  const InverseMatrix<BlockType, PreconditionerType> &relevant_inverse_matrix,
  const std::vector<IndexSet>                         &owned_partitioning,
  MPI_Comm                                             mpi_communicator)
  : system_matrix (&system_matrix)
  , relevant_inverse_matrix (&relevant_inverse_matrix)
  , owned_partitioning (owned_partitioning)
  , mpi_communicator (mpi_communicator)
  , tmp1 (owned_partitioning[0], mpi_communicator)
  , tmp2 (owned_partitioning[0], mpi_communicator)
  , tmp3 (owned_partitioning[1], mpi_communicator)
{}


template <typename BlockMatrixType, typename VectorType, typename PreconditionerType>
void
SchurComplementMPI<BlockMatrixType, VectorType, PreconditionerType>::vmult(
  VectorType       &dst,
  const VectorType &src) const
{
  // Applies dst = (B A^{-1} B^T - C) src, i.e. the negative of S = -B A^{-1} B^T + C.
  system_matrix->block(0,1).vmult (tmp1, src);   // tmp1 = B^T src
  relevant_inverse_matrix->vmult (tmp2, tmp1);   // tmp2 = A^{-1} tmp1
  system_matrix->block(1,0).vmult (dst, tmp2);   // dst  = B tmp2
  system_matrix->block(1,1).vmult (tmp3, src);   // tmp3 = C src
  dst -= tmp3;
}
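And, for completeness, a rough sketch of how the operator is driven. This is schematic only (headers and setup omitted); the objects solution, system_rhs, prec_A and A_inverse are placeholders for what the actual program builds, and the InverseMatrix constructor is assumed to look like the step-20/step-22 helper:

// Schematic driver, not the exact program; names marked above are placeholders.

// A^{-1} applied through an AMG-preconditioned inner solver.
PETScWrappers::PreconditionBoomerAMG prec_A;
prec_A.initialize (system_matrix.block(0, 0));

InverseMatrix<PETScWrappers::MPI::SparseMatrix,
              PETScWrappers::PreconditionBoomerAMG>
  A_inverse (system_matrix.block(0, 0), prec_A);

// The Schur complement operator defined above.
SchurComplementMPI<PETScWrappers::MPI::BlockSparseMatrix,
                   PETScWrappers::MPI::Vector,
                   PETScWrappers::PreconditionBoomerAMG>
  schur_complement (system_matrix, A_inverse, owned_partitioning, mpi_communicator);

// vmult() applies B A^{-1} B^T - C, i.e. -S, so the reduced system for u is
// solved with the negated second block of the right-hand side.
PETScWrappers::MPI::Vector schur_rhs (owned_partitioning[1], mpi_communicator);
schur_rhs  = system_rhs.block(1);
schur_rhs *= -1.0;

// Currently no preconditioner for the Schur complement (PreconditionIdentity).
SolverControl schur_control (solution.block(1).size(),
                             1e-10 * schur_rhs.l2_norm());
SolverCG<PETScWrappers::MPI::Vector> schur_solver (schur_control);
schur_solver.solve (schur_complement,
                    solution.block(1),
                    schur_rhs,
                    PreconditionIdentity());

// Afterwards sigma is recovered from A sigma = -B^T u using the same A_inverse.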