> On Apr 22, 2019, at 2:07 AM, Mattias Rönnblom <mattias.ronnb...@ericsson.com> wrote:
>
> On 2019-04-22 06:33, Wiles, Keith wrote:
>
>>> From a performance point of view, the high-loop-count cases are rare enough
>>> not to pose a serious threat. For example, being forced to redo rte_rand()
>>> more than five times is only a ~3% risk.
>>
>> Even a few loops can have an effect on performance when we are talking about
>> micro-seconds plus it leads to indeterminate results. The numbers you
>> reported here are interesting, but I would be happier if you added a limit
>> to the loop. If you state the likely hood of doing 5 loops is only 3% then
>> adding a loop limit would be reasonable, right?
>
> Probability is already effectively putting a limit to the loop. The risk of
> being stuck for >1us is p=~6e-73. The variations in execution times will in
> most cases be less than a LLC miss.
>
> A loop variable will not have any effect on performance, pollute the code,
> and hurt uniformity.
>
> Here's what rte_rand_max() performance looks like on my Skylake.
>
> Average rte_rand_max() latency with worst-case upper_bound:
>
> rte_rand_max() w/o loop limit:   47 cc
> rte_rand_max() w/ max 8 retries: 49 cc
> rte_rand_max() w/ max 4 retries: 47 cc
> rte_rand_max() w/ max 2 retries: 40 cc
>
> So you need to be very aggressive in limiting the loop count for that loop
> variable to pay off. Otherwise, you will just be at a loss, doing all that
> bookkeeping which very rarely turns out to be useful.
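To make the discussion concrete, below is a minimal sketch of the kind of
rejection-sampling loop being debated, together with a bounded-retry variant.
This is not the actual rte_rand_max() implementation from the patch; the
prng_next() helper, the splitmix64-style generator, and all names are
illustrative assumptions only, standing in for rte_rand().

#include <stdint.h>
#include <stdio.h>

/* Toy stand-in for the underlying 64-bit PRNG (rte_rand() in DPDK);
 * splitmix64-style, for illustration only. */
static uint64_t prng_state = 0x853c49e6748fea9bULL;

static uint64_t
prng_next(void)
{
        uint64_t z = (prng_state += 0x9e3779b97f4a7c15ULL);

        z = (z ^ (z >> 30)) * 0xbf58476d1ce4e5b9ULL;
        z = (z ^ (z >> 27)) * 0x94d049bb133111ebULL;
        return z ^ (z >> 31);
}

/* Unbounded rejection sampling: mask the draw down to the smallest
 * power-of-two range covering upper_bound and redraw until the value
 * is in range, which keeps the result exactly uniform. upper_bound
 * must be > 0. Acceptance probability is always > 1/2, so the chance
 * of needing k redraws falls off geometrically. */
static uint64_t
rand_max_unbounded(uint64_t upper_bound)
{
        uint64_t mask = UINT64_MAX >> __builtin_clzll((upper_bound - 1) | 1);
        uint64_t res;

        do {
                res = prng_next() & mask;
        } while (res >= upper_bound);

        return res;
}

/* Bounded-retry variant along the lines discussed here: after
 * max_retries unsuccessful draws, fall back to a modulo reduction,
 * which strictly bounds the worst-case latency at the cost of a
 * (very small) bias on the rare fallback path. */
static uint64_t
rand_max_bounded(uint64_t upper_bound, unsigned int max_retries)
{
        uint64_t mask = UINT64_MAX >> __builtin_clzll((upper_bound - 1) | 1);
        unsigned int i;

        for (i = 0; i < max_retries; i++) {
                uint64_t res = prng_next() & mask;

                if (res < upper_bound)
                        return res;
        }

        return prng_next() % upper_bound;
}

int
main(void)
{
        uint64_t ub = 1000000000000000001ULL; /* near worst-case bound */
        unsigned int i;

        for (i = 0; i < 4; i++)
                printf("%llu %llu\n",
                       (unsigned long long)rand_max_unbounded(ub),
                       (unsigned long long)rand_max_bounded(ub, 8));
        return 0;
}

The trade-off under debate is visible in the fallback path: capping the
retries bounds the worst case, but the extra bookkeeping sits on the common
path and the fallback gives up strict uniformity.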
Adding a loop counter is not a performance problem (as you stated, only a few cycles are added), and it does not hurt the readability of the code. As for hurting uniformity, I do not see that being a problem: most loops have a visible termination condition instead of relying on the random number generator to produce a valid value within a few iterations. IMO it is only reasonable to limit the loop.

But I will let it go and let you add this code to DPDK if someone else wants to ACK it.

Regards,
Keith