Hi Wolfgang, 

Thank you for your prompt reply. I was wondering if I can integrate other 
partitioning tools, such as METIS or ParMETIS, to handle a fully distributed 
triangulation. I can develop that part by myself (or with some help from 
the community). Do you have any suggestions? My next project also relies on 
this, since I will try to manage the number of cells on each processor. 
With p4est, it is hard to control the number of cells based on an in-house 
algorithm. My application is IC design, which may involve millions to 
billions of cells. A fully distributed triangulation helps to reduce memory 
usage. The current shared-memory setup can handle 20M cells (single core) 
on a system with 32 GB of main memory.
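
To make concrete what I mean by integrating METIS: deal.II already wraps 
METIS in GridTools::partition_triangulation, which assigns a subdomain id 
to every active cell of a (shared) triangulation. Something along these 
lines is roughly my starting point -- a sketch only, untested, and it 
assumes deal.II was configured with METIS support:

```cpp
// Sketch: partition a shared triangulation with METIS via deal.II's
// built-in wrapper, then count the cells assigned to this MPI rank.
#include <deal.II/grid/tria.h>
#include <deal.II/grid/grid_tools.h>

template <int dim>
unsigned int
partition_with_metis(dealii::Triangulation<dim> &triangulation,
                     const unsigned int          n_ranks,
                     const unsigned int          my_rank)
{
  // METIS-based partitioning: assigns a subdomain_id to every active cell.
  dealii::GridTools::partition_triangulation(n_ranks, triangulation);

  // The per-rank cell count is what an in-house algorithm would want
  // to control; ParMETIS would do the same thing but in parallel.
  unsigned int n_local_cells = 0;
  for (const auto &cell : triangulation.active_cell_iterators())
    if (cell->subdomain_id() == my_rank)
      ++n_local_cells;
  return n_local_cells;
}
```

The missing piece, of course, is doing this without every rank holding the 
whole triangulation, which is where ParMETIS would come in.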

Any design of 1M cells on the distributed triangulation has a computation-time 
problem because of the reordering step. This is why I bypassed it and 
provided an already-sorted mesh to GridIn (read_msh). For a problem of 5M 
cells, I can save about 200 s in the create_triangulation step. 
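
To illustrate what I mean by bypassing the reordering: since my mesh file is 
already consistently ordered, I can hand the vertex and cell lists to 
create_triangulation directly instead of letting the reader reorder them. A 
sketch (the parsing of the pre-sorted .msh file into vertices/cells is not 
shown and would be my own code):

```cpp
// Sketch: create a triangulation from an already consistently ordered
// mesh, skipping the expensive cell-reordering step.
#include <deal.II/base/point.h>
#include <deal.II/grid/tria.h>

template <int dim>
void
create_from_sorted_mesh(dealii::Triangulation<dim> &triangulation,
                        const std::vector<dealii::Point<dim>> &vertices,
                        const std::vector<dealii::CellData<dim>> &cells)
{
  // Because the cells are already consistently ordered, we do NOT call
  // dealii::GridReordering<dim>::reorder_cells(cells) here -- for me
  // that is the step that costs ~200 s on a 5M-cell mesh.
  triangulation.create_triangulation(vertices, cells,
                                     dealii::SubCellData());
}
```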

Regards
YC Chen
On Tuesday, January 31, 2017 at 3:08:10 PM UTC, Wolfgang Bangerth wrote:
>
> YC, 
>
> > I have a project requiring to read in a large coarse mesh from gmsh to 
> > deal.II, > 1M dofs. Most cells have their own characteristics, which 
> > means I cannot combine them and create a coarse mesh. 
> > Currently, I implemented it by using a shared-memory triangulation for 
> > parallelization. Since I want to scale it to a cluster system and 
> > target a 100M mesh (no need for mesh refinement), I have to use a 
> > distributed triangulation via MPI (is there any better solution?). I 
> > found out that the initial cost is large because of the duplication of 
> > the triangulation and the p4est forest. I was wondering if there is any 
> > method to remove part of the triangulation or p4est data. 
>
> No, unfortunately there is not. p4est is built on the assumption that the 
> coarse mesh is replicated on all processors, and deal.II inherits this 
> assumption. If your coarse mesh has 1M cells, that may just barely be 
> tolerable, although it will likely lead to inefficient code in some 
> places where you loop over all cells and almost all of them turn out to 
> be artificial. But I suspect that you will be in serious trouble if your 
> coarse mesh has 100M cells. 
>
> You should really try to come up with a coarser coarse mesh that you can 
> then refine. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
>

