Ashesh,

On Tue, Dec 19, 2017 at 12:30:10PM -0500, Ashesh Mishra wrote:
> It really depends on the use-case and the implementation. This measurement
> may be excessive if running at a 3.3ms or 10ms interval, but you don’t run
> these intervals on anything but the best and most deterministic of links. For
> links with higher or unpredictable latency, the typical intervals are at
> least 50msec (typically between 100msec and 500msec). At those rates, the
> overhead is not significant.
>
> At the same time, with more software implementations coming to the market,
> the overhead is smaller compared to the hardware implementations as there is
> no additional offload to a different engine.
Given that this may run against unknown software- or hardware-based
implementations, how do you foresee the procedures working to avoid swamping
a slow implementation?  For example, what's the impact of missing one or more
packets carrying this information within the Detection multiplier of packets
allowed to be dropped?

-- Jeff
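
To make the question concrete, here is a rough back-of-the-envelope sketch of
the timing involved.  This is purely illustrative: the interval and multiplier
values are hypothetical, and it assumes RFC 5880-style detection-time
semantics (Detect Mult times the negotiated receive interval).

def detection_time_ms(detect_mult: int, rx_interval_ms: float) -> float:
    """Detection time as Detect Mult times the negotiated receive
    interval (RFC 5880-style semantics)."""
    return detect_mult * rx_interval_ms

detect_mult = 3            # a common default Detect Mult (illustrative)
for rx_interval_ms in (3.3, 10.0, 100.0, 500.0):
    window = detection_time_ms(detect_mult, rx_interval_ms)
    pps = 1000.0 / rx_interval_ms
    print(f"interval {rx_interval_ms:>6} ms -> ~{pps:5.0f} pkt/s, "
          f"detection window {window:7.1f} ms")

# If the measurement rides in every control packet, losing up to
# detect_mult - 1 consecutive packets keeps the session up but silently
# discards those samples; losing detect_mult in a row takes the session
# down regardless of the measurement.
for lost in range(detect_mult + 1):
    state = "up" if lost < detect_mult else "down"
    print(f"{lost} consecutive loss(es): session {state}, "
          f"{lost} measurement sample(s) missing")

In other words, under those assumptions a slow receiver can shed up to
Detect Mult - 1 consecutive packets' worth of the measurement without the
session going down, but every miss is a lost sample.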