> I wonder: will the unmatched partitioners occur even if the coding is correct?
Yes. The requirements of MG and the level operators are not identical. If you don't do anything special, the vectors are set up for MG and not for the level operators. With the help of external partitioners you can fix this "problem".

I guess you could introduce a new function MGTransferBlockMatrixFree::build() that accepts external partitioners. It shouldn't be too much work, since all you need to do is delegate the task to MGTransferMatrixFree::build() (a rough sketch of the non-block case is appended below the quoted thread). Happy to accept a patch!

Peter

On Sunday, 7 May 2023 at 08:18:37 UTC+2, yy.wayne wrote:

> Thank you Peter,
>
> Currently I'm using MGTransferBlockMatrixFree, where build() with external partitioners is not supported. But before moving on to modify the block MG transfers, I wonder: will the unmatched partitioners occur even if the coding is correct?
>
> My problem is preconditioned with global-coarsening h-MG, with 6 DoFs per node for the real and imaginary vectors in 3d. BlockVectors are used everywhere except for the Trilinos direct solver on the coarsest level, where only Vector rather than BlockVector is supported. The transfer between Vector and BlockVector is a potential risk, though.
>
> Best,
> Wayne
>
> On Saturday, 6 May 2023 at 16:41:40 UTC+8, peterr...@gmail.com wrote:
>
>> Hi Wayne,
>>
>> My best guess is that the partitioners in the multigrid operator and the matrix-free operators do not match, with the result that a vector with wrong ghosting coming from the multigrid operator is used as src/dst in the matrix-free loop, triggering the assert you show. Previously, one would need to fix the ghosting in the operator (see
>> https://github.com/dealii/dealii/blob/a091f25c5ff77687fbd8a65f5b336b235f201100/include/deal.II/matrix_free/operators.h#L489-L498),
>> but nowadays one can pass external partitioners to MG: see, e.g.,
>> https://github.com/peterrum/dealii-multigrid/blob/c50581883c0dbe35c83132699e6de40da9b1b255/multigrid_throughput.cc#L1802.
>>
>> Similarly, one can do this in the context of global-coarsening MG.
>>
>> Hope this helps!
>>
>> Peter
>>
>> On Saturday, 6 May 2023 at 10:22:58 UTC+2, yy.wayne wrote:
>>
>>> It might fail for 1st order basis too ... but all failures occur when there are more than 2 MPI processes. If I run with only 2 MPI processes, no error shows.
>>>
>>> On Saturday, 6 May 2023 at 15:22:23 UTC+8, yy.wayne wrote:
>>>
>>>> Hello my friends,
>>>>
>>>> I'm testing a matrix-free code preconditioned by a geometric multigrid. The code is not stable: it fails for some combinations of fe_degree + coarse_level + total mesh levels. For example, any number of MPI processes with 1st order basis elements runs successfully. But for 2nd order basis, min_level=0 + total_levels=3 (levels *0*, 1 and 2) or min_level=1 + total_levels=5 (levels 0, *1*, 2, 3 and 4) fails *during SolverGMRES.solve()*. The error in debug mode is:
>>>>
>>>> An error occurred in line <3403> of file </home/ubuntu/deal.II/installed/include/deal.II/matrix_free/matrix_free.h> in function
>>>>   unsigned int dealii::internal::VectorDataExchange<dim, Number, VectorizedArrayType>::find_vector_in_mf(const VectorType&, bool) const [with VectorType = dealii::LinearAlgebra::distributed::Vector<double>; int dim = 3; Number = double; VectorizedArrayType = dealii::VectorizedArray<double, 2>]
>>>> The violated condition was:
>>>>   false
>>>> Additional information:
>>>>   Could not find partitioner that fits vector
>>>>
>>>> It might be my own coding mistake. I'm using 9.3.0; would upgrading deal.II to 9.4.0 help?
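P.S. For reference, a rough, untested sketch of the non-block case: collect the partitioners the level operators already use and hand them to MGTransferMatrixFree::build(). The names mg_matrix_free, min_level, max_level, mg_constrained_dofs and dof_handler are placeholders for whatever your own setup provides, and the build() overload taking external partitioners needs a sufficiently recent deal.II:

    #include <deal.II/base/partitioner.h>
    #include <deal.II/multigrid/mg_transfer_matrix_free.h>
    #include <memory>
    #include <vector>

    using namespace dealii;

    // One partitioner per level, taken from the level MatrixFree objects
    // ("mg_matrix_free" stands in for an MGLevelObject<MatrixFree<3, double>>
    // from your own code), so that the transfer sets up its level vectors
    // with exactly the ghosting the level operators expect. The vector is
    // indexed by level here; please check the documentation of the build()
    // overload in the deal.II version you use.
    std::vector<std::shared_ptr<const Utilities::MPI::Partitioner>>
      partitioners(max_level + 1);
    for (unsigned int level = min_level; level <= max_level; ++level)
      partitioners[level] = mg_matrix_free[level].get_vector_partitioner();

    MGTransferMatrixFree<3, double> mg_transfer(mg_constrained_dofs);
    mg_transfer.build(dof_handler, partitioners);

A new MGTransferBlockMatrixFree::build() taking external partitioners would essentially just forward such a vector to the build() of the non-block transfer it wraps.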