Dear George,
First of all, many thanks for your quick response.
George Bosilca wrote:
The btl_tcp_sndbuf and btl_tcp_rcvbuf are limited by the kernel
(usually 128K), so there is no reason to set them to something huge
if the kernel is unable to support these values.
In fact, we did change these limits in the kernel. The reason we use
such large values is that we work on a grid (Grid5000) where both the
latency and the bandwidth are high, so the bandwidth-delay product is
large and the default socket buffers are too small to fill the pipe.
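For reference, the kernel ceilings can be checked with the standard Linux
sysctl keys (the names below are the usual ones; our actual values are
site-specific, so none are shown here):

  # maximum socket buffer sizes the kernel will grant via setsockopt()
  sysctl net.core.rmem_max net.core.wmem_max
  # TCP autotuning ranges: min, default, max (bytes)
  sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem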
The eager limit didn't get modified between 1.1 and 1.2, so it should
work as expected.
Please find below the results we obtain in the same configuration with
Open MPI 1.1.4 and 1.2.6, respectively, using the IMB PingPong benchmark.
You'll notice a change in the bandwidth response around 64 KB. We think
this is due to the switch to the rendezvous protocol when messages are
larger than 64 KB, which is why we try to raise the eager limit. It worked
fine with the 1.1.4 version using the btl_tcp_eager_limit command-line
parameter, but it fails with 1.2.6.
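For reference, this is how we query the registered default and override it
for a run (ompi_info syntax as in the 1.2 series; the value is the one we
are trying to use):

  # show the TCP BTL parameters, including the eager limit default
  ompi_info --param btl tcp | grep eager_limit
  # override the eager limit for one run (value in bytes)
  mpirun -np 2 -machinefile node_file -mca btl_tcp_eager_limit 67108864 pingpong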
Any idea is welcome.
Thanks in advance.
#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.0, MPI-1 part
#---------------------------------------------------
# Date : Wed Apr 30 16:08:18 2008
# Machine : x86_64
# System : Linux
# Release : 2.6.24
# Version : #2 SMP Fri Apr 4 16:06:49 CEST 2008
# MPI Version : 2.0
# MPI Thread Environment: MPI_THREAD_SINGLE
#
# Minimum message length in bytes: 0
# Maximum message length in bytes: 16777216
#
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#
# List of Benchmarks to run:
# PingPong
#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
#bytes #repetitions t[usec] Mbytes/sec
0 50 5830.25 0.00
1 50 5831.46 0.00
2 50 5831.77 0.00
4 50 5831.37 0.00
8 50 5831.56 0.00
16 50 5831.36 0.00
32 50 5831.37 0.01
64 50 5833.24 0.01
128 50 5836.28 0.02
256 50 5841.19 0.04
512 50 5850.76 0.08
1024 50 5870.26 0.17
2048 50 5898.54 0.33
4096 50 5926.43 0.66
8192 50 5952.72 1.31
16384 50 6029.06 2.59
32768 50 6173.81 5.06
65536 50 6460.70 9.67
131072 50 18677.23 6.69
262144 50 19804.31 12.62
524288 50 22036.95 22.69
1048576 50 26493.17 37.75
2097152 50 35400.57 56.50
4194304 50 53235.14 75.14
8388608 50 88914.23 89.97
16777216 50 164841.80 97.06
------------------------------------------------------------------------
#---------------------------------------------------
# Intel (R) MPI Benchmark Suite V3.0, MPI-1 part
#---------------------------------------------------
# Date : Wed Apr 30 16:15:27 2008
# Machine : x86_64
# System : Linux
# Release : 2.6.24
# Version : #2 SMP Fri Apr 4 16:06:49 CEST 2008
# MPI Version : 2.0
# MPI Thread Environment: MPI_THREAD_SINGLE
#
# Minimum message length in bytes: 0
# Maximum message length in bytes: 16777216
#
# MPI_Datatype : MPI_BYTE
# MPI_Datatype for reductions : MPI_FLOAT
# MPI_Op : MPI_SUM
#
#
# List of Benchmarks to run:
# PingPong
#---------------------------------------------------
# Benchmarking PingPong
# #processes = 2
#---------------------------------------------------
#bytes #repetitions t[usec] Mbytes/sec
0 50 5828.42 0.00
1 50 5829.40 0.00
2 50 5829.49 0.00
4 50 5829.38 0.00
8 50 5829.56 0.00
16 50 5829.86 0.00
32 50 5830.40 0.01
64 50 5832.73 0.01
128 50 5836.62 0.02
256 50 5841.15 0.04
512 50 5850.61 0.08
1024 50 5869.23 0.17
2048 50 5896.90 0.33
4096 50 5921.50 0.66
8192 50 5953.92 1.31
16384 50 6025.42 2.59
32768 50 6172.76 5.06
65536 50 6459.50 9.68
131072 50 7037.10 17.76
262144 50 8216.81 30.43
524288 50 10614.90 47.10
1048576 50 15415.35 64.87
2097152 50 25000.11 80.00
4194304 50 44130.78 90.64
8388608 50 82288.79 97.22
16777216 50 158273.73 101.09
george.
On Apr 28, 2008, at 1:20 PM, jean-christophe.mig...@ens-lyon.fr wrote:
Hi all,
We're using a ping-pong test to measure the bandwidth and latency
available with Open MPI.
In our first experiments with the 1.1.4 version, we used the
btl_tcp_eager_limit parameter to modify the eager limit. We've upgraded
to the 1.2.6 version, and the value we set no longer seems to be taken
into account. The value we want to use is 67108864. The command line is:
mpirun -np 2 -machinefile node_file -mca btl_tcp_sndbuf 4194304 -mca btl_tcp_rcvbuf 4194304 -mca btl_tcp_eager_limit 67108864 pingpong
Is this parameter still effective (ompi_info shows that it is still
available)?
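One way to double-check which value the run actually picks up, assuming the
mpi_show_mca_params MCA parameter is available in this build (its exact
syntax may differ between versions), would be:

  # ask Open MPI to print the MCA parameter values it uses during MPI_Init
  mpirun -np 2 -machinefile node_file -mca mpi_show_mca_params 1 -mca btl_tcp_eager_limit 67108864 pingpong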
Does anybody have any idea?
Thanks in advance.
JC Mignot and Ludovic Hablot
_______________________________________________
users mailing list
us...@open-mpi.org
http://www.open-mpi.org/mailman/listinfo.cgi/users