Dear Wolfgang,

Thank you for your reply. I understand that DD methods are not that popular 
anymore, but I wanted to explore this avenue with scalability in mind. As 
you mention, communication is probably the bottleneck for scalability. A 
few papers have pursued the idea of asynchronous methods, which eliminate 
synchronization points and could therefore have large benefits. There have 
been a few studies of asynchronous Schwarz methods (overlapping and 
non-overlapping) that outperformed their synchronous counterparts. Granted, 
their applicability is currently limited, both in the convergence rate for 
the linear system solution and in the physical problems they can handle, 
but I think they could be interesting at the exascale level, particularly 
with adaptive meshing.
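
To make that concrete, here is a minimal 1D sketch of the structure I have 
in mind (my own illustration, not taken from any of those papers; the sizes 
and names are made up). Each rank repeatedly solves its padded local 
subproblem and exchanges overlap boundary values with its neighbors:

// Sketch only: overlapping Schwarz for -u'' = 1 on (0,1), u(0)=u(1)=0.
#include <mpi.h>
#include <vector>

int main(int argc, char **argv)
{
  MPI_Init(&argc, &argv);
  int rank, size;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &size);

  const int n = 50, overlap = 4;              // owned points, overlap width
  const int pad_l = (rank > 0) ? overlap : 0; // padding into left neighbor
  const int pad_r = (rank + 1 < size) ? overlap : 0;
  const int local = n + pad_l + pad_r;        // padded subdomain size
  const double h = 1.0 / (size * n + 1);
  std::vector<double> u(local + 2, 0.0);      // u[0], u[local+1] hold BCs

  for (int outer = 0; outer < 200; ++outer)
    {
      // The only communication, and the synchronization point: exchange
      // the overlap boundary values with the neighboring subdomains.
      if (rank > 0)
        MPI_Sendrecv(&u[2 * overlap + 1], 1, MPI_DOUBLE, rank - 1, 0,
                     &u[0], 1, MPI_DOUBLE, rank - 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
      if (rank + 1 < size)
        MPI_Sendrecv(&u[n - overlap + pad_l], 1, MPI_DOUBLE, rank + 1, 0,
                     &u[local + 1], 1, MPI_DOUBLE, rank + 1, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

      // Approximate local subdomain solve: a few Gauss-Seidel sweeps,
      // with the received values acting as Dirichlet data at the ends.
      for (int sweep = 0; sweep < 20; ++sweep)
        for (int i = 1; i <= local; ++i)
          u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h);
    }

  MPI_Finalize();
  return 0;
}

The asynchronous variants I mentioned only change the marked exchange: post 
nonblocking sends and receives instead, and let each rank keep iterating 
with whatever (possibly stale) neighbor values have arrived.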

It would be very helpful if you could provide me with some pointers on 
where and how to change the calls to the p4est functions in deal.II.

Thanks,
Pratik.

On Monday, April 2, 2018 at 8:44:58 PM UTC+2, Wolfgang Bangerth wrote:
>
>
> Pratik, 
>
> > deal.II has the parallel::distributed::Triangulation class which 
> > provides the functionality to partition and distribute the meshes using 
> > p4est's partition and adaptive space filling techniques. According to 
> > what I have understood from step-40, using the 
> > parallel::distributed::Triangulation , one can automatically partition 
> > the domain into equal (load balanced) parts and distribute the mesh to 
> > the separate processes so that no single process needs to have the 
> > entire mesh (except the initial coarse mesh). I would like to extend 
> > this to have an overlapping decomposition such that each process has a 
> > certain overlap with the neighboring processes and maybe impose boundary 
> > conditions on these nodes. 
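
For reference, the baseline setup I am starting from is the one from 
step-40; the dimension and refinement level below are just placeholders:

#include <deal.II/base/mpi.h>
#include <deal.II/distributed/tria.h>
#include <deal.II/grid/grid_generator.h>

int main(int argc, char **argv)
{
  dealii::Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);

  // Every process keeps the coarse mesh, but stores refined cells only
  // for its locally owned partition plus one layer of ghost cells.
  dealii::parallel::distributed::Triangulation<2> triangulation(
    MPI_COMM_WORLD);
  dealii::GridGenerator::hyper_cube(triangulation);
  triangulation.refine_global(5);
}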
> > 
> > I understand that in step-40 there are ghost cells which are actually 
> > owned by the neighboring processes and serve a similar purpose. But, as 
> > noted here ( 
> > https://groups.google.com/forum/#!searchin/dealii/overlap|sort:date/dealii/e-V2ZaPed1c/WMsZGtT2wWkJ
> > ) it seems you cannot control the width and I am not sure if one can 
> > impose boundary conditions on them. 
> > 
> > Using METIS, I could probably use partMesh_Kway and then do a 
> > breadth-wise inclusion of the neighboring nodes based on the overlap, 
> > but I am not sure how I can accomplish this using p4est in deal.II. 
> > 
> > In short, using p4est, I would like to have an overlapping decomposition 
> > in the parallel distributed setting, where I could impose boundary 
> > conditions on the overlap boundary nodes. These nodes would be part of 
> > both the current process and the neighboring process. 
> > 
> > Any suggestions or alternative recommendations would be really helpful. 
>
> You are correct that deal.II currently only allows one layer of ghost 
> cells around the locally owned region. I believe that this could be 
> changed, however, given that p4est (to the best of my knowledge) 
> supports wider ghost layers. It would be a bit of work to figure out 
> where in deal.II one would need to call the corresponding p4est 
> functions, but I imagine that is feasible if you wanted to dig around a 
> bit. (We'd be happy to provide you with the respective pointers.) 
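
From what I can tell, the relevant entry point on the p4est side is 
p4est_ghost_expand() from p4est_ghost.h, which as far as I understand widens 
an existing ghost layer by one more layer per call; where deal.II would have 
to call it is exactly what I would need pointers for. A sketch under that 
assumption:

#include <p4est_ghost.h>

// Assumption: each call to p4est_ghost_expand() adds one further layer
// of ghost quadrants to an already-built ghost structure.
void add_ghost_layers(p4est_t *forest, p4est_ghost_t *ghost, int extra_layers)
{
  for (int layer = 0; layer < extra_layers; ++layer)
    p4est_ghost_expand(forest, ghost);
}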
>
> The bigger problem is that with the approach you suggest, you would have 
> to enumerate the degrees of freedom that live on each processor 
> independently of the global numbering, so that you can build the linear 
> system on each processor's subdomain plus its layers of ghost cells. 
> There is no functionality for this at the moment. I suspect that you 
> could build it as a simple map from global DoFs to local DoFs, though, 
> so that would likely also be feasible. 
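
If I understand correctly, such a map could be built from the locally 
relevant index set. A minimal sketch of what I have in mind (my assumption, 
not existing deal.II functionality):

#include <deal.II/base/index_set.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>

#include <map>

using namespace dealii;

// Number the locally relevant DoFs (locally owned plus those on ghost
// cells) consecutively on each process, remembering for every global
// index the local index it maps to.
template <int dim>
std::map<types::global_dof_index, types::global_dof_index>
build_global_to_local_map(const DoFHandler<dim> &dof_handler)
{
  IndexSet locally_relevant_dofs;
  DoFTools::extract_locally_relevant_dofs(dof_handler,
                                          locally_relevant_dofs);

  std::map<types::global_dof_index, types::global_dof_index> global_to_local;
  types::global_dof_index next_local_index = 0;
  for (const types::global_dof_index global_index : locally_relevant_dofs)
    global_to_local[global_index] = next_local_index++;

  return global_to_local;
}

(If I read the documentation correctly, IndexSet::index_within_set() returns 
the same local index directly, so the explicit map may not even be needed.)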
>
>
> I think the question I would ask you is why you want to do this. I know 
> that overlapping domain decomposition methods were popular in the 1990s 
> and early 2000s, primarily because they allowed one to re-use existing, 
> sequential codes: each processor simply has to solve its own, local PDE, 
> and all communication is restricted to exchanging boundary value 
> information. But we know today that (i) this does not scale very well to 
> large numbers of processors, and (ii) global methods, where you solve one 
> large linear system across many processors as we do in step-40 for 
> example, are much better. In other words, the reason there is currently 
> little support for overlapping DD methods in deal.II is that, as a 
> community, we have recognized that these methods are not as good as 
> others that have been developed over the last 20 years. 
>
> Best 
>   W. 
>
> -- 
> ------------------------------------------------------------------------ 
> Wolfgang Bangerth          email:                 bang...@colostate.edu 
>                             www: http://www.math.colostate.edu/~bangerth/ 
>
