I tried running with the first dynamic rules file that Pavel proposed, and it
works: the time per MD step on 48 cores dropped from 2.8 s to 1.8 s, as
expected. It was clearly the basic linear algorithm that was causing the
problem. I will check the performance of bruck and pairwise on my hardware;
it would be nice if this could be tuned further.
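
For checking bruck and pairwise I will probably just force one algorithm at a
time from the command line rather than editing the rules file for every run.
Something along these lines should work (the algorithm numbering below is what
I believe the tuned component uses, and ./my_md_app just stands for my
application, so please correct me if the numbers are off):

  # force pairwise alltoall (1 = basic linear, 2 = pairwise, 3 = bruck,
  # as far as I can tell -- please verify with ompi_info)
  mpirun -np 48 \
         --mca coll_tuned_use_dynamic_rules 1 \
         --mca coll_tuned_alltoall_algorithm 2 \
         ./my_md_app

  # list the tuned alltoall parameters and algorithm numbers
  ompi_info --param coll tuned | grep alltoall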

Thanks

Roman

On Wed, May 20, 2009 at 7:18 PM, Pavel Shamis (Pasha) <pash...@gmail.com> wrote:
> Tomorrow I will add some printfs to the collective code and check what
> really happens there...
>
> Pasha
>
> Peter Kjellstrom wrote:
>>
>> On Wednesday 20 May 2009, Pavel Shamis (Pasha) wrote:
>>
>>>>
>>>> Disabling basic_linear seems like a good idea, but your config file sets
>>>> the cut-off at 128 bytes for 64 ranks (the field you set to 8192 seems to
>>>> result in a message size of that value divided by the number of ranks).
>>>>
>>>> In my testing, bruck seems to win clearly (at least for 64 ranks on my
>>>> IB) up to 2048 bytes. Hence, the following line may be better:
>>>>
>>>>  131072 2 0 0 # switch to pairwise for size 128K/nranks
>>>>
>>>> Disclaimer: I guess this could differ quite a bit for nranks!=64 and
>>>> different btls.
>>>>
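>>>> For context, this is roughly how I understand the whole rules file is
>>>> laid out; the collective id and the exact layout here are from memory,
>>>> so treat it as a sketch and check against the coll_tuned sources:
>>>>
>>>>  1             # number of collectives described in the file
>>>>  3             # collective id (I believe 3 = alltoall, please verify)
>>>>  1             # number of comm-size sections for this collective
>>>>  64            # communicator size this section applies to
>>>>  2             # number of message-size rules that follow
>>>>  0 3 0 0       # <size> <alg> <faninout> <segsize>: bruck (3) from size 0
>>>>  131072 2 0 0  # switch to pairwise (2) at 128K (or 128K/nranks?)
>>>>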
>>>
>>> Sounds strange to me. From the code it looks like we take the threshold
>>> as is, without dividing by the number of ranks.
>>>
>>
>> Interesting. I may have had too little or too much coffee, but the figures
>> in my previous e-mail (3rd run, bruckto2k_pair) were from a run with the
>> above line, and it very much looks like it switched at 128K/64 = 2K, not at
>> 128K (which would have been above my largest size of 3000 and as such
>> equivalent to all_bruck).
>>
>> I also ran tests with:
>>  8192 2 0 0 # ...
>> And it seemed to switch between 10 bytes and 500 bytes (most likely, then,
>> at 8192/64 = 128).
>>
>> My test program calls MPI_Alltoall like this:
>>  time1 = MPI_Wtime();
>>  for (i = 0; i < repetitions; i++) {
>>    MPI_Alltoall(sbuf, message_size, MPI_CHAR,
>>                 rbuf, message_size, MPI_CHAR, MPI_COMM_WORLD);
>>  }
>>  time2 = MPI_Wtime();
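>>
>> For completeness, here is a self-contained sketch of essentially the same
>> loop (the sizes and variable names below are my own picks, not the exact
>> program; the main point is that sbuf and rbuf each need
>> message_size * nranks bytes):
>>
>>  #include <mpi.h>
>>  #include <stdio.h>
>>  #include <stdlib.h>
>>
>>  int main(int argc, char **argv)
>>  {
>>    int nranks, rank, i;
>>    int message_size = 2048;   /* bytes sent to each rank, adjust to taste */
>>    int repetitions  = 100;
>>    double time1, time2;
>>    char *sbuf, *rbuf;
>>
>>    MPI_Init(&argc, &argv);
>>    MPI_Comm_size(MPI_COMM_WORLD, &nranks);
>>    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
>>
>>    /* MPI_Alltoall sends message_size bytes to every rank, so each buffer
>>       must hold message_size * nranks bytes (the contents are irrelevant
>>       for timing). */
>>    sbuf = malloc((size_t)message_size * nranks);
>>    rbuf = malloc((size_t)message_size * nranks);
>>
>>    MPI_Barrier(MPI_COMM_WORLD);
>>    time1 = MPI_Wtime();
>>    for (i = 0; i < repetitions; i++) {
>>      MPI_Alltoall(sbuf, message_size, MPI_CHAR,
>>                   rbuf, message_size, MPI_CHAR, MPI_COMM_WORLD);
>>    }
>>    time2 = MPI_Wtime();
>>
>>    if (rank == 0)
>>      printf("%d bytes: %g s per alltoall\n",
>>             message_size, (time2 - time1) / repetitions);
>>
>>    free(sbuf);
>>    free(rbuf);
>>    MPI_Finalize();
>>    return 0;
>>  }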
>>
>> /Peter
