On Jun 13, 2006, at 11:07 AM, Brock Palen wrote:
Here are the results with -mca mpi_leave_pinned.
The results are as expected: they closely match the mpich results. Thank you
for the help. I have attached a plot with all three and the raw data for
anyone's viewing pleasure.
I'm still curious whether mpi_leave_pinned affects real application performance.
Well, if there is no reuse of the application buffers, then the two
approaches will give the same results. Because of our pipelined protocol
it might even happen that we reach better performance for large messages.
If there is buffer reuse, the mpich-gm approach will lead to better
performance.
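To make the buffer-reuse point concrete, here is a minimal sketch (not from
the thread; the buffer size, iteration count, and two-rank layout are
arbitrary choices) of the pattern where registration caching pays off, i.e.
the same buffer is sent over and over:

  /* Sketch: repeated sends from one reused buffer. With mpi_leave_pinned
   * the buffer can stay registered after the first send, so later
   * iterations skip the pin/unpin cost. Needs at least 2 ranks. */
  #include <mpi.h>
  #include <stdlib.h>

  int main(int argc, char **argv)
  {
      MPI_Init(&argc, &argv);

      int rank;
      MPI_Comm_rank(MPI_COMM_WORLD, &rank);

      const int n = 1 << 20;                    /* arbitrary message size   */
      double *buf = malloc(n * sizeof(double)); /* same buffer reused below */

      for (int iter = 0; iter < 100; iter++) {
          if (rank == 0)
              MPI_Send(buf, n, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
          else if (rank == 1)
              MPI_Recv(buf, n, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                       MPI_STATUS_IGNORE);
      }

      free(buf);
      MPI_Finalize();
      return 0;
  }

An application that allocates a fresh buffer for every transfer gets no
benefit from the cache, which is the "no reuse" case described above.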
I'll provide new numbers soon with the --mca mpi_leave_pinned 1 option.
I'm curious, though: how does this affect real application performance?
This is of course a synthetic test using NetPIPE. Will regular apps that
move decent amounts of data but care more about low latency be affected
as well?
Brock Palen
Unlike mpich-gm, Open MPI does not keep the memory pinned by default.
You can force this by adding "--mca mpi_leave_pinned 1" to your mpirun
command or by adding it to the Open MPI configuration file, as described
in the FAQ (performance section). I think that should be the main reason
for the difference you are seeing.
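For concreteness, the two ways of setting it might look like this
(./your_app is a placeholder, and the per-user MCA parameters file path is
the usual default but should be treated as an assumption for your install):

  # On the mpirun command line:
  mpirun --mca mpi_leave_pinned 1 -np 2 ./your_app

  # Or persistently, by adding one line to the MCA parameters file,
  # typically $HOME/.openmpi/mca-params.conf:
  mpi_leave_pinned = 1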
Hi Brock,
You may wish to try running with the runtime option:
-mca mpi_leave_pinned 1
This turns on registration caching and such.
- Galen
On Jun 13, 2006, at 8:01 AM, Brock Palen wrote:
I ran a test using openmpi-1.0.2 on OS X vs mpich-1.2.6 from Myricom, and
I get lacking results from Open MPI.
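For reference, a NetPIPE comparison along those lines could be run roughly
like this (NPmpi is the name NetPIPE usually gives its MPI driver binary;
treat the exact names as assumptions):

  # Open MPI, default settings vs. registration caching enabled:
  mpirun -np 2 ./NPmpi
  mpirun -np 2 --mca mpi_leave_pinned 1 ./NPmpi

  # The mpich-gm build would be launched with its own starter
  # (e.g. mpirun.ch_gm) to produce the comparison curve.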