I think I have located the error.
The bug does not come from the mesh, but from FESystem. In
tests/multigrid/renumbering_06.cc, if the finite element at line 90 is
changed to 2 components, we get the same error.
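
For reference, a minimal sketch of the kind of change I mean (a
hypothetical stand-in for line 90 of the test, not the exact code there):

  #include <deal.II/fe/fe_q.h>
  #include <deal.II/fe/fe_system.h>

  // Two copies of a scalar Q1 element, i.e. a 2-component FESystem,
  // instead of the original scalar finite element:
  const dealii::FESystem<2> fe(dealii::FE_Q<2>(1), /*n_elements=*/2);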

On Monday, November 28, 2022 at 12:59:59 UTC+8, yy.wayne wrote:

> The error comes from compute_component_wise on level = 0: the result it 
> returns does not equal dof_handler.n_dofs(level). 
> I added an MPI_Barrier between the first and second loops, but the error 
> is not eliminated. 
> [image: Snipaste_2022-11-28_12-53-02.png]
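>
> For context, a minimal sketch of the level-wise renumbering that 
> triggers this check (the loop and variable names are an assumption, not 
> my exact code):
>
>   // Renumber level DoFs component-wise on every multigrid level;
>   // internally this calls compute_component_wise(), whose return
>   // value is compared against dof_handler.n_dofs(level).
>   for (unsigned int level = 0; level < tria.n_global_levels(); ++level)
>     DoFRenumbering::component_wise(dof_handler, level);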
>
> On Monday, November 28, 2022 at 12:49:15 UTC+8, Wolfgang Bangerth wrote:
>
>> On 11/27/22 21:09, 'yy.wayne' via deal.II User Group wrote: 
>> > 
>> > Sorry, I forgot to mention that *it only breaks when running with 
>> > MPI; running in serial is fine.* The PersistentTriangulation class 
>> > may not address this error. 
>> > The intention of writing and re-reading a mesh here is to create a 
>> > not-too-coarse coarse grid (so it is refined several times before 
>> > output). 
>> > I read the output grid and expect all cells to be on level = 0. 
>> > Preserving the multigrid structure is not desired in this case. 
>>
>> Oh, I misread your message then. What is the error you observe? 
>>
>> Looking at your code, when you have this... 
>>
>> if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0) 
>>   { 
>>     Triangulation<dim> tria_coarse; 
>>     GridGenerator::hyper_shell(tria_coarse, 
>>                                center, 
>>                                inner_r, 
>>                                outer_r, 
>>                                n_cells); 
>>     tria_coarse.refine_global(3); 
>>
>>     // write and re-read the mesh, so it becomes the coarse mesh 
>>     std::cout << "write mesh" << std::endl; 
>>     GridOut grid_out; 
>>     grid_out.set_flags(GridOutFlags::Vtu(true)); 
>>     std::ofstream out("coarse_mesh.vtk"); 
>>     grid_out.write_vtk(tria_coarse, out); 
>>     out.close(); 
>>   } 
>>
>> { // read coarse grid 
>>   tria.clear(); 
>>   std::cout << "read mesh" << std::endl; 
>>   GridIn<dim> gridin; 
>>   gridin.attach_triangulation(tria); 
>>   std::ifstream fin("coarse_mesh.vtk"); 
>>   gridin.read_vtk(fin); 
>>   fin.close(); 
>>   std::cout << "read mesh done" << std::endl; 
>> } 
>>
>> ...then you will need an MPI_Barrier between the two blocks to make 
>> sure that process 1 doesn't start reading the file before process 0 
>> has completed writing it. 
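>>
>> As a minimal sketch (using MPI_COMM_WORLD and the file name from your 
>> code), the barrier would go between the two blocks like this:
>>
>> if (Utilities::MPI::this_mpi_process(MPI_COMM_WORLD) == 0) 
>>   { 
>>     // ... generate, refine, and write coarse_mesh.vtk as above ... 
>>   } 
>>
>> // Ensure process 0 has finished writing before any process reads: 
>> MPI_Barrier(MPI_COMM_WORLD); 
>>
>> { // read the coarse grid on all processes, as above 
>>   // ... 
>> } 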
>>
>> Best 
>> W. 
>>
>> -- 
>> ------------------------------------------------------------------------ 
>> Wolfgang Bangerth email: bang...@colostate.edu 
>> www: http://www.math.colostate.edu/~bangerth/ 
>>
>>
>>
