> With CFM and other tunnel monitoring protocols, having a fairly precise
> time is good.  My measurements don't show this change increasing CPU use.
> (In fact it appears to repeatably reduce CPU use slightly, from about
> 22% to about 20% with 1000 CFM instances, although it's not obvious why.)
>

That's surprising.  I wonder if it's reducing our likelihood of doing an
immediate poll loop wakeup due to inaccurate timers.  Anyway, looks good.
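
For what it's worth, the kind of staleness I have in mind looks roughly
like the toy model below.  This is not the real lib/timeval.c code, just a
sketch of my guess at the mechanism, and every name in it is invented; all
it shows is that a timer armed against a clock refreshed only every
TIME_UPDATE_INTERVAL ms can end up running roughly that much late, so
25 ms tightens the worst case quite a bit.

/* Toy model, not the real lib/timeval.c: a "cached" clock that is only
 * refreshed every 'interval' ms, and a poll loop that arms a 90 ms timer
 * against it.  All names here are invented for the example. */
#include <stdio.h>

static long long real_ms;    /* the "true" monotonic time, driven below */
static long long cached_ms;  /* coarse time, lags real_ms by < interval */

static void
refresh(long long interval)
{
    /* Pretend the cached clock only advances on interval boundaries,
     * the way a refresh bounded by TIME_UPDATE_INTERVAL would. */
    cached_ms = real_ms - real_ms % interval;
}

static void
run_once(long long interval)
{
    int polls = 0;

    real_ms = 7;                          /* arbitrary starting offset */
    refresh(interval);

    long long armed_real = real_ms;
    long long deadline = cached_ms + 90;  /* "fire in 90 ms", CFM-style */

    for (;;) {
        long long timeout = deadline - cached_ms;
        if (timeout <= 0) {
            break;                        /* timer finally looks expired */
        }
        real_ms += timeout;               /* poll() sleeps in real time */
        refresh(interval);
        polls++;
    }
    printf("interval %3lld ms: %d poll() call(s), timer ran %lld ms late\n",
           interval, polls, real_ms - (armed_real + 90));
}

int
main(void)
{
    run_once(100);   /* prints: timer ran 90 ms late */
    run_once(25);    /* prints: timer ran 15 ms late */
    return 0;
}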

Acked-by: Ethan Jackson <[email protected]>

> Signed-off-by: Ben Pfaff <[email protected]>
> ---
>  lib/timeval.h |    2 +-
>  1 files changed, 1 insertions(+), 1 deletions(-)
>
> diff --git a/lib/timeval.h b/lib/timeval.h
> index d5c12f0..72cf498 100644
> --- a/lib/timeval.h
> +++ b/lib/timeval.h
> @@ -43,7 +43,7 @@ BUILD_ASSERT_DECL(TYPE_IS_SIGNED(time_t));
>  /* Interval between updates to the reported time, in ms.  This should not be
>   * adjusted much below 10 ms or so with the current implementation, or too
>   * much time will be wasted in signal handlers and calls to clock_gettime(). */
> -#define TIME_UPDATE_INTERVAL 100
> +#define TIME_UPDATE_INTERVAL 25
>
>  /* True on systems that support a monotonic clock.  Compared to just getting
>   * the value of a variable, clock_gettime() is somewhat expensive, even on
> --
> 1.7.2.5
>