Ben,
You may try disabling the registration cache; it may relieve pressure on
memory resources:
--mca mpi_leave_pinned 0
You may find a bit more details here:
http://www.open-mpi.org/faq/?category=openfabrics#large-message-leave-pinned
Note that with this option you may observe a drop in bandwidth performance.
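For example (the process count and application name below are just
placeholders), the parameter can be passed on the mpirun command line or
exported through the environment:

    # Disable the registration cache for this run
    mpirun --mca mpi_leave_pinned 0 -np 16 ./my_mpi_app

    # Equivalent environment-variable form (OMPI_MCA_ prefix)
    export OMPI_MCA_mpi_leave_pinned=0
    mpirun -np 16 ./my_mpi_app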
R
On Wed, 10 Jul 2013, Ralph Castain wrote:
And as was pointed out in a follow-up email, this problem was corrected in
1.6.5; I was using 1.6.4.
Thanks!
Tim
This particular bug should be fixed in 1.6.5 and 1.7.2, though; which
version of Open MPI are you using?
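If you are not sure which installation is on your PATH, a quick check
(exact output format varies by release):

    # Report the Open MPI version of the mpirun being used
    mpirun --version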
Brian
On 7/10/13 10:29 AM, "Ralph Castain" wrote:
Yeah, we discussed taking things from your thread, plus the wiki page on
cross-compiling OMPI, and creating a new FAQ area. I'll do so - thanks!
On Jul 10, 2013, at 9:14 AM, Tim Carlson wrote:
I've polluted the previous thread on GPU abilities with so many Intel/Phi
bits that I decided a few new threads might be a good idea. First off, I
think the following could be a FAQ entry.
If you have a cluster with Phi cards and are using the SCIF interface with
OFED, OpenMPI between two hosts (