If you want a solve whose results are independent of the number of processes, use '-pc_type lu'; if PETSc is configured with MUMPS, that gives you a parallel LU solve. And don't use any overlap in the DM. If instead you want a local LU inside a global 'asm' or 'bjacobi' preconditioner, then you have an iterative solver and should use something like '-ksp_type gmres'.
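Concretely, the two configurations above would be selected at run time with PETSc options like these (a sketch; the application binary name `./app` and process count are placeholders):

```shell
# Parallel direct solve with MUMPS: same answer regardless of the
# number of MPI ranks (up to floating-point roundoff).
mpiexec -n 4 ./app -ksp_type preonly -pc_type lu -pc_factor_mat_solver_type mumps

# Iterative alternative: GMRES with additive Schwarz, local LU on each subdomain.
mpiexec -n 4 ./app -ksp_type gmres -pc_type asm -sub_pc_type lu
```

Note that the ASM/GMRES variant is an iterative method, so the solution will agree across process counts only to the KSP tolerance, not bitwise.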
Mark

On Sun, Jul 23, 2023 at 11:50 AM 袁煕 <[email protected]> wrote:

> Hi,
>
> I used PETSc to assemble a FEM stiffness matrix on an overlapped (overlap=2)
> DMPlex and used the MUMPS solver to solve it. But I got different solutions
> using 1 CPU and MPI parallel computation. I am wondering if I missed
> some necessary step or setting in my implementation.
>
> My calling process is as follows:
>
> 1. Generate matrix pkij from DMPlex dm_mesh
>
> ```
> PetscCallA( DMCreateMatrix(dm_mesh, pkij, ierr) )
> PetscCallA( MatSetBlockSize(pkij, 1, ierr) )
> if( overlap>0 ) call MatSetOption(pkij, MAT_IGNORE_OFF_PROC_ENTRIES, PETSC_TRUE, ierr)
> PetscCallA( MatSetFromOptions(pkij, ierr) )
>
> PetscCallA( MatGetLocalSize(pkij, m, n, ierr) )
> PetscCallA( MatGetSize(pkij, gm, gn, ierr) )
> PetscCallA( MatGetBlockSize(pkij, bs, ierr) )
> ```
>
> 2. Do matrix preallocation
>
> ```
> !
> ! Create a preallocator matrix with sizes (local and global) matching
> ! the jacobian A.
> !
> PetscCallA( MatCreate(PETSC_COMM_WORLD, preallocator, ierr) )
> PetscCallA( MatSetType(preallocator, MATPREALLOCATOR, ierr) )
> PetscCallA( MatSetSizes(preallocator, m, n, gm, gn, ierr) )
> PetscCallA( MatSetBlockSize(preallocator, bs, ierr) )
> PetscCallA( MatSetUp(preallocator, ierr) )
>
> .......
>
> PetscCallA( MatPreallocatorPreallocate(preallocator, PETSC_TRUE, pkij, ierr) )
> PetscCallA( MatDestroy(preallocator, ierr) )
> ```
>
> 3. Assemble matrix pkij by calling MatSetValue for all overlapped elements.
>
> In my above implementation, I used
> MatSetOption(pkij, MAT_IGNORE_OFF_PROC_ENTRIES, PETSC_TRUE, ierr). Is that
> correct? Or are other options needed?
>
> Thanks for
>
