Thanks for your response, Wolfgang. The triangulation partitioning makes a lot of sense now.
In that case, perhaps my problem is not too difficult. I am getting caught up with the PETScWrappers::MPI::BlockSparseMatrix::reinit() method. This is my approach:

First I distribute the dofs and collect the locally owned and locally relevant sets:

// Solid
dof_handler_sld.distribute_dofs(fe_sld);
sld_owned_dofs    = dof_handler_sld.locally_owned_dofs();
sld_relevant_dofs = DoFTools::extract_locally_relevant_dofs(dof_handler_sld);

// Shell
dof_handler_sh.distribute_dofs(fe_sh);
sh_owned_dofs    = dof_handler_sh.locally_owned_dofs();
sh_relevant_dofs = DoFTools::extract_locally_relevant_dofs(dof_handler_sh);

Then I apply boundary conditions to both the solid and the shell dof_handlers.

Then I define the coupling points and retrieve the coupled DOFs:

sld_coup_dofs = nodal_coupling(coup_points, dof_handler_sld, tria_pft_sld.n_vertices());
sh_coup_dofs  = nodal_coupling(coup_points, dof_handler_sh,  tria_pft_sh.n_vertices());

const std::vector<unsigned int> dofs_per_block      = {n_dof_sld, n_dof_sh};
const std::vector<unsigned int> locally_owned_sizes = {sld_owned_dofs.size(), sh_owned_dofs.size()};

Next I reinitialize each of the blocks in my sparsity pattern:

BlockDynamicSparsityPattern dsp(dofs_per_block, dofs_per_block);
for (unsigned int i = 0; i < dofs_per_block.size(); ++i)
  for (unsigned int j = 0; j < dofs_per_block.size(); ++j)
    dsp.block(i, j).reinit(dofs_per_block[i], dofs_per_block[j]);

Then I use the solid and shell dof handlers to set up the diagonal sub-matrices A and D:

dsp.collect_sizes();
DoFTools::make_sparsity_pattern(dof_handler_sld, dsp.block(0, 0)); // A
DoFTools::make_sparsity_pattern(dof_handler_sh,  dsp.block(1, 1)); // D

Then, for the off-diagonal sub-matrices B and C, I add the coupled DOFs to the sparsity pattern:

for (unsigned int s = 0; s < sld_coup_dofs.size(); ++s)
  for (unsigned int d = 0; d < 3; ++d)
    {
      dsp.block(0, 1).add(sld_coup_dofs[s][d], sh_coup_dofs[s][d]); // B
      dsp.block(1, 0).add(sh_coup_dofs[s][d], sld_coup_dofs[s][d]); // C
    }

Where I start to get confused is when I try to reinitialize the system matrix. What I want to do is something like:

system_matrix.reinit(dsp, mpi_communicator);

This has my MPI communicator and the sparsity pattern that I've built up. However, this isn't a valid call to PETScWrappers::MPI::BlockSparseMatrix::reinit(). There is a similar function that takes the arguments

(const std::vector<IndexSet> &sizes,
 const BlockDynamicSparsityPattern &bdsp,
 const MPI_Comm com)

but I don't really understand what I would put in for the "sizes" vector. What exactly am I trying to pass with this argument? Is it all of the locally owned/relevant dofs? Do I just combine the vector of locally owned solid dofs and the vector of locally owned shell dofs? I have sketched my best guess at the bottom of this message, below the quoted reply.

Thanks,
Alex

On Wednesday, January 17, 2024 at 8:24:44 PM UTC-5 Wolfgang Bangerth wrote:

> On 1/17/24 14:40, Alex Quinlan wrote:
> >
> > If I do this, then it seems that I'd end up with two conflicting
> > partitions. If I have n_mpi_processes=8, for example, this approach
> > would assign ~1/8th of the solid mesh to process 1 AND ~1/8th of the
> > shell mesh to process 1. This seems like a problem.
>
> No, this is exactly what you want. That's because you will have phases
> in your program when you work on the solid mesh, and you want all 8
> processes to participate in that.
> And then you have a phase where you're going to do something on the
> surface mesh and, again, you want to have all 8 processes work on that.
>
> It is *so* much harder to write software in which different processes do
> different things at the same time. Don't go there.
>
> > As you pointed out, I will have an issue with the off-diagonal matrices
> > and the coupling between the two triangulations. Is it possible for me
> > to build up all of the sparsity patterns and matrices /and then/ perform
> > the partitioning?
>
> You could of course, but then what's the point of using multiple
> processors if you do the bulk of the work on only one process?
>
> Best
> W.
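
To make my question concrete, here is roughly what I imagine the valid call would look like. This is only a guess: I'm assuming the "sizes" argument wants one locally owned IndexSet per block, given in block order, and the name owned_partitioning is just something I made up here.

// My guess: one locally owned IndexSet per block, ordered like the
// blocks of the sparsity pattern (block 0 = solid, block 1 = shell).
std::vector<IndexSet> owned_partitioning;
owned_partitioning.push_back(sld_owned_dofs); // rows/columns owned in block 0
owned_partitioning.push_back(sh_owned_dofs);  // rows/columns owned in block 1

// Reinitialize the block matrix from the block sparsity pattern,
// distributing each block according to the IndexSets above.
system_matrix.reinit(owned_partitioning, dsp, mpi_communicator);

Is that the intended use of the "sizes" vector, or do the locally relevant dofs need to come into it somewhere?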