Dave Love <d.l...@liverpool.ac.uk> writes:
> PETSc can't be using MPI-3 because I'm in the process of fixing rpm
> packaging for the current version and building it with ompi 1.6.

It would be folly for PETSc to ship with a hard dependency on MPI-3.
You wouldn't be able to package it with ompi-1.6, for example.  But that
doesn't mean PETSc's configure can't test for MPI-3 functionality and
use it when available.  Indeed, it does (though for a different
capability than the one mentioned in this thread).
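
To make that concrete, here is a minimal sketch of the
guard-and-fallback pattern.  It is my illustration, not PETSc's actual
test (configure compiles and links trial programs rather than checking
the MPI_VERSION macro), and MPI_Ibarrier stands in for whatever MPI-3
capability is being probed:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
    #if MPI_VERSION >= 3
        /* MPI-3 path: non-blocking barrier */
        MPI_Request req;
        MPI_Ibarrier(MPI_COMM_WORLD, &req);
        MPI_Wait(&req, MPI_STATUS_IGNORE);
        puts("used MPI-3 MPI_Ibarrier");
    #else
        /* pre-MPI-3 fallback, fine with ompi 1.6 (MPI-2.1) */
        MPI_Barrier(MPI_COMM_WORLD);
        puts("fell back to MPI_Barrier");
    #endif
        MPI_Finalize();
        return 0;
    }

Built against ompi 1.6 this compiles to the fallback branch; against
any MPI-3 library it picks up the new call.  Same idea, no hard
dependency either way.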

> (Exascale is only of interest if there are spin-offs useful for
> university-scale systems.)  I was hoping for a running example.

The relevant example for the technique mentioned in this thread is
src/ksp/ksp/examples/tests/benchmarkscatters; compare the 'master' and
'barry/utilize-hwloc' branches.  It's completely experimental at this
time.
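
If you want something runnable without checking out those branches: the
branch name suggests on-node shared memory is in play, and *if* that
guess is right, the MPI-3 ingredient would look roughly like the sketch
below (MPI_Comm_split_type plus MPI_Win_allocate_shared; whatever
hwloc-driven placement the branch does is not shown).  Again my
illustration, not PETSc code:

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        MPI_Comm  nodecomm;
        MPI_Win   win;
        MPI_Aint  sz;
        double   *mine, *zero;
        int       noderank, disp;

        MPI_Init(&argc, &argv);

        /* MPI-3: group the ranks that share this node's memory */
        MPI_Comm_split_type(MPI_COMM_WORLD, MPI_COMM_TYPE_SHARED, 0,
                            MPI_INFO_NULL, &nodecomm);
        MPI_Comm_rank(nodecomm, &noderank);

        /* MPI-3: one double per rank, allocated in shared memory */
        MPI_Win_allocate_shared(sizeof(double), sizeof(double),
                                MPI_INFO_NULL, nodecomm, &mine, &win);

        MPI_Win_lock_all(MPI_MODE_NOCHECK, win);
        *mine = (double)noderank;  /* plain store, no MPI_Put */
        MPI_Win_sync(win);         /* flush the store ...     */
        MPI_Barrier(nodecomm);     /* ... wait for the peers  */
        MPI_Win_sync(win);

        /* locate rank 0's segment and read it with a plain load */
        MPI_Win_shared_query(win, 0, &sz, &disp, &zero);
        printf("node rank %d reads %g from rank 0\n", noderank, *zero);
        MPI_Win_unlock_all(win);

        MPI_Win_free(&win);
        MPI_Comm_free(&nodecomm);
        MPI_Finalize();
        return 0;
    }

Run it under any MPI-3 implementation (mpiexec -n 4 ./a.out); each
on-node rank reads rank 0's value straight out of the shared segment,
with no message traffic.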
