Dear everyone,

I have a question on tagged messages and the best way to generate tags. A while 
back we went through the whole library and consistently tagged all of our MPI 
point-to-point messages - there was a long discussion about how to do this in

https://github.com/dealii/dealii/issues/8958

and in some follow-up pull requests cited there. This strategy is nice because it scales 
and lets us reuse the same communicator for everything. A drawback is that there are 
hard limits on the number of times we can call a particular _start() function before 
calling its corresponding _finish() function (e.g., with the same communicator, we can 
only run 10 ghost updates inside LA::d::Vector concurrently).
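
For concreteness, here is roughly the usage pattern I mean (a sketch, not real 
library code; the communication channel is just the loop index):

    #include <deal.II/lac/la_parallel_vector.h>

    #include <vector>

    using namespace dealii;

    // Start several ghost updates on vectors that all share one
    // communicator, finishing them only after all have been started. Each
    // concurrent update needs its own communication channel (i.e., its own
    // tag offset), and only a small fixed number of channels exists per
    // communicator, so this breaks down once vectors.size() gets too large.
    void
    start_many_ghost_updates(
      std::vector<LinearAlgebra::distributed::Vector<double>> &vectors)
    {
      for (unsigned int i = 0; i < vectors.size(); ++i)
        vectors[i].update_ghost_values_start(i);

      for (unsigned int i = 0; i < vectors.size(); ++i)
        vectors[i].update_ghost_values_finish();
    }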

I'm working on a problem where I want to do this kind of communication inside some 
library code that will potentially be called many times concurrently (about 20 at 
once in one situation my code's predecessor deals with). Hence I want to work around 
these hard-coded limits in some way that does not require the caller to set tags, 
batch communications, etc.

One fix that I have found (PETSc does this) is to assign every object its own 
duplicated communicator, which can then keep track of its own tags via MPI's 
attribute caching functions (MPI_Comm_set_attr() and MPI_Comm_get_attr()).
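
As a minimal sketch of what I mean (hypothetical class and function names, not 
PETSc's actual implementation), something like this would let each object hand out 
its own tags without any coordination from the caller:

    #include <mpi.h>

    // Each object duplicates the communicator it is given, so its tags live
    // in a separate communication context, and it stores a tag counter on
    // the duplicated communicator via the MPI attribute mechanism.
    class TagManagedComm
    {
    public:
      explicit TagManagedComm(const MPI_Comm original)
      {
        MPI_Comm_dup(original, &comm);

        // Create a keyval and attach the counter as an attribute of the
        // duplicated communicator (no-op copy/delete callbacks for brevity).
        MPI_Comm_create_keyval(MPI_COMM_NULL_COPY_FN,
                               MPI_COMM_NULL_DELETE_FN,
                               &tag_keyval,
                               nullptr);
        MPI_Comm_set_attr(comm, tag_keyval, &next_tag);
      }

      ~TagManagedComm()
      {
        MPI_Comm_free_keyval(&tag_keyval);
        MPI_Comm_free(&comm);
      }

      // Hand out a fresh tag. A real implementation would wrap around
      // before reaching the communicator's MPI_TAG_UB attribute.
      int
      get_next_tag()
      {
        int *counter = nullptr;
        int  flag    = 0;
        MPI_Comm_get_attr(comm, tag_keyval, &counter, &flag);
        return (*counter)++;
      }

      MPI_Comm comm;

    private:
      int tag_keyval = MPI_KEYVAL_INVALID;
      int next_tag   = 0;
    };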

Hence, the question: has anyone else here encountered a similar problem where 
they want to post potentially dozens of ghost updates or other communications 
at once? If so, are there better solutions than duplicating communicators? Do 
you happen to know of any downsides to creating lots of MPI communicators? I'd 
appreciate hearing from anyone with experience with this sort of MPI problem.

Best,
David Wells
