Pascal,
I don't think anyone responded to your email here:

> I will try to be as short as possible

That was only moderately successful ;-))


> If more details are required, feel free to ask. I also offer to 
> submit all mesh generation code I create in the future, since others 
> might have similar needs at some point.

We would of course love to include this!


> I work on a 3d mesh with purely axis-parallel edges. The mesh is a 
> 2d mesh (say in the x and y directions) extruded to 3d (z-direction). 
> Due to the scheme I use, it is required that the distributed version 
> of this mesh be partitioned into intervals along the z-axis (process 0 
> has all cells with cell->center()[2] in [0,1], process 1 has all cells 
> with cell->center()[2] in (1,2], and so on).
> What I did originally was simply generating a mesh consisting of 
> n_processes cells, letting that mesh be auto-partitioned, and then 
> turning partitioning off

Out of curiosity, how do you do this?


> So my questions would be:
> 1. given a 2D-mesh, what is the easiest way to get a distributed 
> 3d-extrusion with the partitioning described above?  (auto-partitioning 
> generates balls in this mesh, not cuboids with z-orthogonal 
> process-to-process interfaces) One suggestion for a function here would 
> be a function to transform a shared to a distributed mesh because on a 
> shared mesh I could set the subdomain Ids and then just call that 
> function when I'm done

You can't do this for a parallel::distributed::Triangulation. That's 
because the partitioning is done in p4est, and p4est orders cells in 
depth-first tree order along a (space-filling) curve and then simply 
subdivides that curve into as many pieces as there are processors.

The only control you have with p4est is how much weight each cell has 
in this partition, but you can't say that cutting between cells A and B 
should be considered substantially worse than cutting between cells C 
and D -- which is exactly what you would need here: cutting the mesh 
into partitions is totally ok if the partition boundary runs between 
copies of the base 2d mesh, but should be prohibited if the cut runs 
between cells within a copy.
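
For completeness, here is what that one knob looks like. This is a 
minimal sketch, assuming the 'cell_weight' signal of deal.II 9.x (later 
releases renamed it to 'weight'); the weighting heuristic is made up 
purely for illustration:

#include <deal.II/distributed/tria.h>

using namespace dealii;

template <int dim>
void attach_cell_weights(parallel::distributed::Triangulation<dim> &tria)
{
  tria.signals.cell_weight.connect(
    [](const typename Triangulation<dim>::cell_iterator &cell,
       const typename Triangulation<dim>::CellStatus /*status*/)
      -> unsigned int
    {
      // Hypothetical heuristic: more refined cells count more. This
      // only biases how many cells each process receives along the
      // space-filling curve; it cannot say where the cuts fall.
      return 100 * (cell->level() + 1);
    });
}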

Other partitioning algorithms allow this sort of thing. They partition 
graphs (where each node is a cell, and each edge is the connection 
between two neighboring cells). Their goal is to find partitions of 
roughly equal weight (where the weight of a partition is the sum of its 
node weights) while trying to minimize the cost (where the cost is the 
sum of the edge weights over all cut edges). If you were to assign very 
small weights to all edges between cells of different copies, and large 
weights to edges within a copy, then you would get what you want.
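
To make the edge-weight idea concrete, here is a self-contained sketch 
that calls plain METIS directly (not any deal.II interface). The toy 
graph is a chain of four cells; the middle edge, standing in for an 
edge between two copies of the base mesh, gets a tiny weight, so the 
two-way partition prefers to cut exactly there:

#include <metis.h>
#include <cstdio>
#include <vector>

int main()
{
  // Chain of four cells 0-1-2-3 in CSR form.
  idx_t nvtxs = 4, ncon = 1, nparts = 2, objval = 0;
  std::vector<idx_t> xadj   = {0, 1, 3, 5, 6};
  std::vector<idx_t> adjncy = {1, 0, 2, 1, 3, 2};

  // One weight per adjacency entry (kept symmetric): edges 0-1 and
  // 2-3 lie within a copy (expensive to cut), edge 1-2 lies between
  // copies (cheap to cut).
  std::vector<idx_t> adjwgt = {100, 100, 1, 1, 100, 100};

  std::vector<idx_t> part(nvtxs);
  METIS_PartGraphKway(&nvtxs, &ncon, xadj.data(), adjncy.data(),
                      /*vwgt=*/nullptr, /*vsize=*/nullptr, adjwgt.data(),
                      &nparts, /*tpwgts=*/nullptr, /*ubvec=*/nullptr,
                      /*options=*/nullptr, &objval, part.data());

  // The cheap edge should be cut: part = {0, 0, 1, 1} (up to renaming
  // of the two partitions).
  for (idx_t i = 0; i < nvtxs; ++i)
    std::printf("cell %d -> process %d\n", (int)i, (int)part[i]);
}

As far as I know, deal.II's own wrappers 
(GridTools::partition_triangulation and friends) expose cell weights 
but not edge weights, so for this trick you would have to build the 
graph and call METIS yourself along these lines.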

It should not be terribly difficult to do this with 
parallel::shared::Triangulation, since there we use regular graph 
partitioning algorithms. But I don't see how it would be possible for 
parallel::distributed::Triangulation.


> 2. Say I have a layer of faces in that mesh (in the interior) and I need 
> Nédélec elements and nodal elements on these faces (codimension 1) to 
> evaluate their shape functions and gradients at the quadrature points 
> of a 2d quadrature on the faces. What is the best way to do this? (If 
> it were a boundary, I could assign a boundary id and call 
> GridGenerator::extract_boundary_mesh, but it's not always on the boundary.)

I don't think it would be terribly difficult to write the equivalent of 
that function: one that extracts a surface mesh based on faces having 
some particular property (not necessarily lying on the boundary of the 
domain). Take a look at the existing function and see whether you can 
easily adapt it for your purposes.
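
That said, if all you need is shape function values and gradients at 2d 
quadrature points on those faces, you may not need a codimension-1 mesh 
at all: FEFaceValues works on interior faces just as well as on boundary 
faces. Here is a sketch, in which the face criterion (faces lying in the 
plane z = 1) is a made-up stand-in for whatever marks your layer:

#include <deal.II/base/geometry_info.h>
#include <deal.II/base/quadrature_lib.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/fe/fe_values.h>
#include <cmath>

using namespace dealii;

template <int dim>
void evaluate_on_layer(const DoFHandler<dim> &dof_handler)
{
  const QGauss<dim - 1> face_quadrature(2);
  FEFaceValues<dim>     fe_face_values(dof_handler.get_fe(),
                                       face_quadrature,
                                       update_values | update_gradients |
                                         update_quadrature_points);

  for (const auto &cell : dof_handler.active_cell_iterators())
    if (cell->is_locally_owned())
      for (unsigned int f = 0; f < GeometryInfo<dim>::faces_per_cell; ++f)
        if (std::abs(cell->face(f)->center()[dim - 1] - 1.0) < 1e-12)
          {
            fe_face_values.reinit(cell, f);
            // For a vector-valued element such as FE_Nedelec, use
            // fe_face_values.shape_value_component(i, q, c) and
            // fe_face_values.shape_grad_component(i, q, c) to get the
            // 3d shape functions restricted to the face quadrature
            // points.
          }
}

Extracting an actual surface mesh along the lines of 
extract_boundary_mesh would still be the way to go if you want to solve 
something *on* that layer.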


> 3. There is a description of how to manually generate a mesh, which 
> seems easy enough in my case. How does this work for a distributed 
> mesh? Is the only option to generate a mesh and then auto-partition, or 
> can I somehow define the partitioning in the generation phase, similar 
> to the way I can set subdomain ids in a shared parallel mesh?

No, you can't. For both parallel::distributed::Triangulation and 
parallel::shared::Triangulation, all processors store the entire coarse 
mesh. There is of course a difference in how the mesh is then partitioned 
into locally owned, ghost, and artificial cells: p::d::Triangulation 
doesn't give you much leverage over how this is done (other than through 
the cell weights), whereas with a p::s::Triangulation you can basically 
partition the entire mesh as you please.
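
For the z-slabs you describe, that could look like the following 
sketch. It assumes the partition_custom_signal setting that recent 
deal.II versions offer for parallel::shared::Triangulation; the slab 
arithmetic is chosen so that process p owns all cells with centers in 
(p, p+1], matching your intervals:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/shared_tria.h>
#include <cmath>

using namespace dealii;

// Assign subdomain ids by z-slab: process p gets all cells with
// cell->center()[2] in (p, p+1].
void partition_in_slabs(parallel::shared::Triangulation<3> &tria)
{
  for (const auto &cell : tria.active_cell_iterators())
    cell->set_subdomain_id(static_cast<types::subdomain_id>(
      std::ceil(cell->center()[2]) - 1));
}

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi(argc, argv, 1);

  parallel::shared::Triangulation<3> tria(
    MPI_COMM_WORLD,
    Triangulation<3>::none,
    /*allow_artificial_cells=*/true,
    parallel::shared::Triangulation<3>::partition_custom_signal);

  // Re-run the custom partitioner whenever the mesh changes.
  tria.signals.post_refinement.connect([&tria] { partition_in_slabs(tria); });

  // ... build the extruded coarse mesh here, e.g. via
  // GridGenerator::extrude_triangulation(...), then refine ...
}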

Best
  W.


-- 
------------------------------------------------------------------------
Wolfgang Bangerth          email:                 bange...@colostate.edu
                            www: http://www.math.colostate.edu/~bangerth/
