On Fri, Aug 30, 2013 at 11:20:54AM -0700, Alex Wang wrote:
> > After this patch, coverage_read() seems halfway between two designs.
> > My goal was to avoid holding coverage_mutex while constructing the log
> > message, to keep the hold time low, by copying out the totals and then
> > building the message with the copies.  Your version retains the
> > copy-out for totals but extends the mutex hold time across the log
> > messages anyway, since it needs access to new data.  I'd suggest doing
> > the summations holding the lock but then dropping it before
> > constructing the log message.
>
> Yes, I'd very much like to do this.  Poking around these last two days,
> I've come to understand lock contention much better.

OK, great.
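To make that concrete, here is a minimal sketch of the shape I have in
mind.  The counter layout and names below are invented for illustration;
this is not the actual coverage code:

    #include <pthread.h>
    #include <stdio.h>

    #define N_COUNTERS 4

    static pthread_mutex_t coverage_mutex = PTHREAD_MUTEX_INITIALIZER;
    /* Protected by coverage_mutex. */
    static unsigned long long counters[N_COUNTERS];
    static const char *names[N_COUNTERS] = {
        "counter_a", "counter_b", "counter_c", "counter_d"
    };

    static void
    coverage_log(void)
    {
        unsigned long long totals[N_COUNTERS];
        int i;

        /* Keep the critical section short: just copy out the totals. */
        pthread_mutex_lock(&coverage_mutex);
        for (i = 0; i < N_COUNTERS; i++) {
            totals[i] = counters[i];
        }
        pthread_mutex_unlock(&coverage_mutex);

        /* The slow part (formatting and emitting the log message) runs
         * with the mutex released. */
        for (i = 0; i < N_COUNTERS; i++) {
            printf("%s: %llu\n", names[i], totals[i]);
        }
    }

    int
    main(void)
    {
        pthread_mutex_lock(&coverage_mutex);
        counters[0] = 42;
        pthread_mutex_unlock(&coverage_mutex);
        coverage_log();
        return 0;
    }

Any summation work would stay inside the locked section, but everything
that allocates or formats strings happens after the unlock, so other
threads bumping counters are blocked only for the copy-out.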
> > I think that the units might be a little confusing.  All of the units
> > are really per second, right?  It's just that they are per second over
> > the last second, per second averaged over the last hour, per second
> > averaged over the last day.  (Right?)  We might want to clarify that.
> > I am not sure how to make it really clear in the coverage/show output,
> > but writing the results in some way other than "/min", "/hr" might
> > help, since people naturally read that as per-minute or per-hour.
>
> No.  We sample every 5 seconds and put each sample into the minute
> moving averager.  So when calculating the per-minute rate, we sum all
> the 5-second counts in the minute moving averager (12 5-second slots =
> 1 minute's count).  The per-hour rate uses the same idea.

Ah.  I understand now.

But I'm not sure that this is actually a good way to do it, because it
makes the output harder to interpret.  If I see that the rate over the
last 5 seconds is 123/s, over the last minute is 4920/min, and over the
last hour is 19680/h, then I have a hard time figuring out what that
means.  But if I see that the rate over the last 5 seconds is 123/s,
over the last minute is 82.0/s, and over the last hour is 5.5/s (the
same rates, just expressed per second rather than per minute or per
hour), then I can immediately see that the rates have recently spiked.
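The conversion is cheap, too, since every window is a whole number of
5-second slots.  A toy sketch (hypothetical names, not the actual
averager code) of normalizing a window to a per-second rate:

    #include <stdio.h>

    #define SAMPLE_INTERVAL 5                /* Seconds between samples. */
    #define MIN_SLOTS (60 / SAMPLE_INTERVAL) /* 12 slots cover a minute. */

    /* Returns the per-second rate over a window of 'n_slots' samples. */
    static double
    per_second_rate(const unsigned long long slots[], int n_slots)
    {
        unsigned long long total = 0;
        int i;

        for (i = 0; i < n_slots; i++) {
            total += slots[i];
        }
        return (double) total / (n_slots * SAMPLE_INTERVAL);
    }

    int
    main(void)
    {
        unsigned long long min_slots[MIN_SLOTS];
        int i;

        /* A recent burst: 615 events in each of the 8 newest 5-second
         * slots, nothing before that, i.e. 8 * 615 = 4920 events over
         * the minute. */
        for (i = 0; i < MIN_SLOTS; i++) {
            min_slots[i] = i < 8 ? 615 : 0;
        }
        printf("last minute: %.1f/s\n",
               per_second_rate(min_slots, MIN_SLOTS));
        /* Prints "last minute: 82.0/s", which is directly comparable
         * with the 123.0/s rate of the newest 5-second sample. */
        return 0;
    }

With every window reported in the same per-second unit, the spike jumps
out at a glance instead of requiring mental division by 60 or 3600.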