On 1/17/11 8:42 PM, Blair Zajac wrote:
Questions:

1) Is there a better algorithm than exponential sleeps when you need to
explicitly retry to acquire a resource? I've noticed that when a slow and a
fast Linux client both try to do as many commits per second as they can, the
fast one locks out the slow one, so the slow one ends up sleeping a lot more.
I'm thinking of using a random sleep between 1 and 100 ms, where 100 ms is an
average commit time.

Comparing the two techniques; all times are in seconds.

=== Exponential backoff, starting at 1ms to a max of 25 ms

N       is 1300
sum     is 570.600128889067
mean    is 0.438923176068513
SD N    is 1.06623585913897
SD N-1  is 1.06664618659659
min     is 0.150957107544
25%     is 0.202463150024
50%     is 0.22247505188
90%     is 0.460783004761
95%     is 1.23124289513
99%     is 5.72130203247
max     is 15.1393589973

Percentage of commits by slow systems: 11.9
Percentage of commits by fast systems: 88.0

=== Random sleep between 0ms and 100ms

N       is 1423
sum     is 656.073898077029
mean    is 0.461049822963478
SD N    is 0.439096669094279
SD N-1  is 0.439251036006798
min     is 0.144687891006
25%     is 0.210839986801
50%     is 0.252903938293
90%     is 0.975655078888
95%     is 1.34893894196
99%     is 2.11821913719
max     is 3.73444414139

Percentage of commits by slow systems: 44.5
Percentage of commits by fast systems: 55.5


So random sleeps give better overall predictability, smaller variance,
fairness between fast and slow systems, and a much lower worst-case time.

The random implementation seems a little harder to do portably; possible
implementations:

1) apr_generate_random_bytes()
   On Unix, this opens the device configured as DEV_RANDOM, which seems
   heavyweight for this purpose.

2) apr_md5_init()
   Hashing the current time and the lock file path should be good enough:
   xor the high 8 bytes of the digest into the low 8 bytes, then take the
   result modulo 100 ms.

Blair
