On 2019-04-22 06:33, Wiles, Keith wrote:

 From a performance point of view, the high-loop-count cases are rare enough 
not to pose a serious threat. For example, the risk of being forced to redo 
rte_rand() more than five times is only ~3%.

Even a few loops can have an effect on performance when we are talking about 
microseconds, plus it leads to indeterminate results. The numbers you reported 
here are interesting, but I would be happier if you added a limit to the loop. 
If you state that the likelihood of doing 5 loops is only 3%, then adding a 
loop limit would be reasonable, right?


Probability already puts an effective limit on the loop. The risk of being stuck for more than 1 us is p ≈ 6e-73: at the worst-case rejection probability of 1/2 per iteration, that corresponds to roughly 240 back-to-back rejections, since 0.5^240 ≈ 6e-73. The variation in execution time will in most cases be smaller than an LLC miss.
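
For context, the loop under discussion is a bounds rejection loop. A minimal sketch of the technique (illustrative name and mask derivation; not necessarily the exact DPDK code) could look like this:

#include <stdint.h>

#include <rte_random.h>

/*
 * Mask-based rejection sampling: draw 64 pseudo-random bits, mask
 * them down to the smallest power-of-two range covering upper_bound,
 * and retry out-of-range results. The mask keeps the worst-case
 * rejection probability per iteration at 1/2 or below.
 */
static uint64_t
rand_max_sketch(uint64_t upper_bound)
{
	/*
	 * Smallest 2^n - 1 covering upper_bound - 1. The "| 1" avoids
	 * undefined __builtin_clzll(0) for upper_bound == 1; assumes
	 * upper_bound > 0.
	 */
	uint64_t mask = UINT64_MAX >> __builtin_clzll((upper_bound - 1) | 1);
	uint64_t res;

	do {
		res = rte_rand() & mask;
	} while (res >= upper_bound); /* rejected with p <= 1/2 */

	return res;
}

Because every accepted value is an unmodified masked draw, the result is exactly uniform over [0, upper_bound); that property is what the retry loop buys.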

A loop variable will have no positive effect on performance; it will just pollute the code and hurt uniformity.

Here's what rte_rand_max() performance looks like on my Skylake.

Average rte_rand_max() latency with worst-case upper_bound, in core clock cycles (cc):
rte_rand_max() w/o loop limit: 47 cc
rte_rand_max() w/ max 8 retries: 49 cc
rte_rand_max() w/ max 4 retries: 47 cc
rte_rand_max() w/ max 2 retries: 40 cc

So you would need to limit the loop count very aggressively for that loop variable to pay off. Otherwise, you just take a net loss, doing bookkeeping that very rarely turns out to be useful.
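
For completeness, a capped variant along the lines of the one benchmarked above might look like the following sketch (again illustrative; the modulo fallback is my choice of what to do when the cap is hit, and it is exactly where uniformity is lost):

/*
 * Capped-retry variant (sketch). The counter is the extra
 * bookkeeping; the fallback taken when the cap is hit is what
 * breaks uniformity. Same mask logic as above; assumes
 * upper_bound > 0.
 */
static uint64_t
rand_max_capped_sketch(uint64_t upper_bound, unsigned int max_retries)
{
	uint64_t mask = UINT64_MAX >> __builtin_clzll((upper_bound - 1) | 1);
	uint64_t res = 0;
	unsigned int i;

	/* one initial attempt plus up to max_retries retries */
	for (i = 0; i <= max_retries; i++) {
		res = rte_rand() & mask;
		if (res < upper_bound)
			return res;
	}

	/*
	 * Cap reached: fall back to modulo. This folds the rejected
	 * range [upper_bound, mask] onto low values, so those values
	 * become slightly over-represented; that is the uniformity
	 * cost of bounding the loop.
	 */
	return res % upper_bound;
}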
