> On Apr 21, 2019, at 2:05 PM, Mattias Rönnblom <mattias.ronnb...@ericsson.com> 
> wrote:
> 
> On 2019-04-20 23:08, Wiles, Keith wrote:
> 
>>> +uint64_t __rte_experimental
>>> +rte_rand_max(uint64_t upper_bound)
>>> +{
>>> +    uint8_t zeros;
>>> +    uint64_t mask = ~((uint64_t)0);
>>> +    uint64_t res;
>>> +
>>> +    if (upper_bound < 2)
>>> +        return 0;
>>> +
>>> +    zeros = __builtin_clzll(upper_bound);
>>> +    mask >>= zeros;
>>> +
>>> +    do {
>>> +        res = rte_rand() & mask;
>>> +    } while (unlikely(res >= upper_bound));
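
As an aside, to make the intended contract concrete: as I read the patch, 
rte_rand_max(n) returns a uniformly distributed value in [0, n). A toy usage 
sketch under that assumption:

/* Hypothetical usage, assuming rte_rand_max(n) yields a uniform value
 * in [0, n): roll a fair six-sided die. */
uint64_t roll = rte_rand_max(6) + 1;    /* 1..6, each with probability 1/6 */
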
>> My concern here is the number of times this loop could be executed. The 
>> upper bound could be just over a power of 2 and also a large number, meaning 
>> the number of values between upper_bound and the next power of 2 could be 
>> huge. Am I reading this loop correctly? If my thought process is correct, it 
>> could take a long time to get a value less than upper_bound, correct?
> 
> If the number of values over the limit is huge, so is the value range below 
> it. The mask is the smallest all-ones value >= upper_bound, so at most half 
> of the masked values can land on or above the bound.
> 
>> If everything aligns in a bad way, it could take a large number of loops and 
>> cause all kinds of problems. What could be done here, or is this a non-issue?
> 
> Formally, the execution has no upper bound, but in practice I think this is a 
> non-issue.
> 
> The cycle cost of an iteration of the loop (including the rte_rand() call) is 
> ~10 cc. With the worst-case upper_bound, the probability of having to redo 
> the rte_rand() call is ~0.5 on every iteration. This is with a pseudo-random 
> number generator featuring a uniform distribution.
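
To check this claim, here is a throwaway simulation of the worst case (plain 
C, no DPDK dependencies; rand64() is an xorshift64 stand-in for rte_rand(), 
and the constants are mine, not from the patch):

/* Throwaway simulation of the worst-case retry behaviour of the loop
 * in the patch. rand64() is a stand-in for rte_rand(). */
#include <stdint.h>
#include <stdio.h>

static uint64_t prng_state = 42;

static uint64_t rand64(void)
{
    uint64_t x = prng_state; /* Marsaglia xorshift64 */
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    prng_state = x;
    return x;
}

int main(void)
{
    /* Worst case: just above a power of two -> ~50% rejection rate. */
    const uint64_t upper_bound = ((uint64_t)1 << 63) + 1;
    const uint64_t mask = ~(uint64_t)0 >> __builtin_clzll(upper_bound);
    const long trials = 1000000;
    long total = 0;
    long t;

    for (t = 0; t < trials; t++) {
        uint64_t res;
        do {
            res = rand64() & mask;
            total++;
        } while (res >= upper_bound);
    }

    /* Geometric distribution, p ~= 0.5 -> expect ~2.0 draws per call. */
    printf("average draws per call: %.3f\n", (double)total / trials);
    return 0;
}
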
> 
> Let's assume we care about determinism, and we have a system which suffers a 
> catastrophic failure if the rte_rand_max() latency is more than 10k clock 
> cycles. The system performs 10M rte_rand_max() calls per second.
> 
> The probability of this system running for a year without any 
> rte_rand_max()-related crashes is:
> 0.999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999970568598526
> 
> So crashes due to excessive rte_rand_max() latency may occur, but this risk 
> is dwarfed by that of the system being hit by an asteroid.
> 
> (Btw, I doubt even the people in our sales force have seen that many nines 
> before.)
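
For the record, the arithmetic behind that wall of nines checks out roughly 
like this (my reconstruction, using a union-bound shortcut; compile with -lm):

/* Back-of-the-envelope check of the figure above. A >10k cc call at
 * ~10 cc per iteration means more than ~1000 iterations, each rejected
 * with probability ~0.5, so p = 0.5^1000 per call. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double p = pow(0.5, 1000.0);            /* ~9.3e-302, still a normal double */
    double n = 1e7 * 365.0 * 24.0 * 3600.0; /* calls per year at 10M calls/s */

    /* P(no crash) = (1 - p)^n ~= 1 - n*p for tiny p. */
    double crash = n * p;

    printf("P(crash in a year) ~= %.3g\n", crash);
    printf("i.e. about %d leading nines\n", (int)(-log10(crash)));
    return 0;
}

That lands at roughly 286 leading nines, the same ballpark as the figure 
quoted above.
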
> 
> From a performance point of view, the high-loop-count cases are rare enough 
> not to pose a serious threat. For example, being forced to redo rte_rand() 
> more than five times is only a ~3% risk.
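
(Checking that figure: with a worst-case rejection probability of ~0.5 per 
draw, the chance of needing more than five draws is 0.5^5 ~= 3.1%, so the ~3% 
matches.)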

Even a few loops can have an effect on performance when we are talking about 
microseconds, plus it leads to indeterminate results. The numbers you reported 
here are interesting, but I would be happier if you added a limit to the loop. 
If you state the likelihood of doing 5 loops is only ~3%, then adding a loop 
limit would be reasonable, right?
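
Something along these lines is what I have in mind (a rough sketch only, not 
a tested patch; RAND_MAX_RETRIES is an invented knob, and the modulo fallback 
trades a tiny bias for a hard latency bound):

/* Hypothetical bounded variant of the loop in the patch. After the
 * retry budget is exhausted (probability ~0.5^RAND_MAX_RETRIES in the
 * worst case), fall back to a modulo reduction, which bounds the
 * latency at the cost of a slight bias toward low values. */
#define RAND_MAX_RETRIES 8

uint64_t
rte_rand_max_bounded(uint64_t upper_bound)
{
    uint64_t mask = ~(uint64_t)0;
    uint64_t res;
    int i;

    if (upper_bound < 2)
        return 0;

    mask >>= __builtin_clzll(upper_bound);

    for (i = 0; i < RAND_MAX_RETRIES; i++) {
        res = rte_rand() & mask;
        if (likely(res < upper_bound))
            return res;
    }

    /* Retry budget exhausted: accept a slightly biased result. */
    return rte_rand() % upper_bound;
}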
