With the increasing gap between network bandwidth and processor computing
power, the current trend in linear algebra is toward communication-avoiding
algorithms (i.e., replacing communications with redundant computations). You're
taking the exact opposite path; I wonder if you can get any benefit.
Ashley Pittman <pittman.co.uk> writes:
> MPI_Comm_split() is an expensive operation. Sure, the manual says it's low
> cost, but it shouldn't be used inside any critical loops, so be sure you are
> doing the MPI_Comm_split() at startup and then re-using the communicator as
> and when needed.
>
> Any blocking call into O
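
For what it's worth, a minimal sketch of the pattern Ashley describes: split
once at startup, cache the communicator, and reuse it in the hot loop. The
color computation below is a placeholder of mine (a hostname-based color is
sketched further down the thread), not something from this discussion.

    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Expensive collective: call it once at startup, never inside
         * the critical loop. This color is illustrative only. */
        int color = rank / 4;
        MPI_Comm host_comm;
        MPI_Comm_split(MPI_COMM_WORLD, color, rank, &host_comm);

        /* Critical loop: only cheap reuse of the cached communicator. */
        for (int iter = 0; iter < 100; ++iter) {
            double buf = iter;
            MPI_Bcast(&buf, 1, MPI_DOUBLE, 0, host_comm);
            /* ... per-group work (e.g. the LAPACK call) would go here ... */
        }

        MPI_Comm_free(&host_comm);
        MPI_Finalize();
        return 0;
    }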
On Nov 2, 2010, at 6:21 AM, Jerome Reybert wrote:
> Each host_comm communicator groups tasks by machine. I ran this version,
> but performance is worse than with the current version (each task performing
> its own LAPACK function). I have several questions:
> - in my implementation, is MPI_Bcast
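
For reference, here is roughly how I read the scheme described above (a
sketch with names of my own; result/n stand for whatever the LAPACK call
produces): one rank per host_comm performs the computation and shares the
result with the other tasks on the same machine.

    /* Sketch of the per-machine scheme described above. Assumes host_comm
     * already groups the ranks of one machine; result and n are the output
     * buffer and its length (both hypothetical names). */
    int host_rank;
    MPI_Comm_rank(host_comm, &host_rank);

    if (host_rank == 0) {
        /* only one task per machine performs the LAPACK function,
         * e.g. a dgetrf/dpotrf call on the local data */
    }

    /* share the result with the other tasks on the same machine */
    MPI_Bcast(result, n, MPI_DOUBLE, 0, host_comm);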
On 2 Nov 2010, at 10:21, Jerome Reybert wrote:
> - in my implementation, is MPI_Bcast aware that it should use shared-memory
> communication? Or does the data go through the network? It seems it does,
> considering the first results.
> - are there any other methods to group tasks by machine,
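
One portable way to group tasks by machine, sketched under the assumption
that ranks on the same host return the same MPI_Get_processor_name string
(the hashing helper is mine, not from this thread; MPI-3 later standardized
MPI_Comm_split_type with MPI_COMM_TYPE_SHARED for exactly this purpose):

    #include <mpi.h>

    /* Hash the host name into a non-negative color so that ranks on the
     * same machine land in the same communicator. A collision between two
     * different hosts is unlikely but possible; a robust version would
     * gather all names on rank 0 and assign unique colors. */
    static int hostname_color(void)
    {
        char name[MPI_MAX_PROCESSOR_NAME];
        int len;
        MPI_Get_processor_name(name, &len);

        unsigned int h = 5381;                /* djb2 string hash */
        for (int i = 0; i < len; ++i)
            h = 33u * h + (unsigned char)name[i];
        return (int)(h & 0x7fffffff);         /* color must be >= 0 */
    }

    /* Usage, once at startup:
     *   int rank;
     *   MPI_Comm_rank(MPI_COMM_WORLD, &rank);
     *   MPI_Comm host_comm;
     *   MPI_Comm_split(MPI_COMM_WORLD, hostname_color(), rank, &host_comm);
     */

Whether MPI_Bcast on such a communicator actually stays in shared memory is
up to the implementation; with Open MPI, on-node traffic normally goes
through the sm BTL, which you can inspect with ompi_info or select
explicitly with mpirun --mca btl self,sm,tcp.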