On Wed, 24 Nov 2010, Gleb Smirnoff wrote:
Log:
MFhead r214508:
Revert a small part of r198301 that is entirely unrelated to
r198301 itself. It also broke the logic of not sending more than one
ARP request per second, which consequently led to a potential problem
of flooding the network with broadcast packets.
Modified:
stable/8/sys/netinet/if_ether.c
Modified: stable/8/sys/netinet/if_ether.c
==============================================================================
--- stable/8/sys/netinet/if_ether.c	Wed Nov 24 05:24:36 2010	(r215790)
+++ stable/8/sys/netinet/if_ether.c	Wed Nov 24 05:37:12 2010	(r215791)
@@ -381,7 +381,7 @@ retry:
 		int canceled;
 
 		LLE_ADDREF(la);
-		la->la_expire = time_second + V_arpt_down;
+		la->la_expire = time_second;
 		canceled = callout_reset(&la->la_timer, hz * V_arpt_down,
 		    arptimer, la);
 		if (canceled)
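
For context, the throttle being restored works roughly like this (a
minimal self-contained sketch, not the actual arpresolve() code:
locking, refcounting and the callout interaction are omitted, and
time_second is modeled as a plain variable):

	#include <time.h>

	/* Stand-ins for kernel state, to keep the sketch compilable. */
	static time_t time_second;	/* wall-clock seconds, updated elsewhere */

	struct llentry_sketch {
		time_t	la_expire;	/* second in which we last asked */
		int	la_asked;	/* requests sent so far */
	};

	static int
	want_arp_request(struct llentry_sketch *la)
	{
		/*
		 * Recording the current second in la_expire makes this
		 * test fire for the rest of that second, so at most one
		 * broadcast request per second goes out for this entry.
		 */
		if (la->la_asked != 0 && la->la_expire == time_second)
			return (0);
		la->la_expire = time_second;	/* the line the revert restores */
		la->la_asked++;
		return (1);
	}

With la_expire set to time_second + V_arpt_down instead, the equality
test no longer matched, so every packet for an unresolved entry could
trigger a fresh broadcast request -- the flood the log message
describes.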
Isn't using non-monotonic time for timeouts always wrong?
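
Concretely (a self-contained userland sketch, with time_second and
time_uptime modeled as plain variables), a timeout armed against the
wall clock misfires when that clock is stepped, while one armed
against the monotonic clock does not:

	#include <stdio.h>
	#include <time.h>

	/* Stand-ins for the kernel clocks. */
	static time_t time_second;	/* wall clock, can be stepped */
	static time_t time_uptime;	/* monotonic seconds since boot */

	int
	main(void)
	{
		time_second = 1290000000;	/* late Nov 2010 */
		time_uptime = 100;

		/* Arm a 20-second expiry both ways. */
		time_t real_expire = time_second + 20;
		time_t mono_expire = time_uptime + 20;

		/* An admin steps the clock back an hour (ntp, date). */
		time_second -= 3600;
		time_uptime += 5;	/* only 5 real seconds elapsed */

		/* Wall-clock timeout now won't fire for over an hour. */
		printf("real: %ld s remaining\n",
		    (long)(real_expire - time_second));
		/* The monotonic one still fires on schedule. */
		printf("mono: %ld s remaining\n",
		    (long)(mono_expire - time_uptime));
		return (0);
	}

A forward step has the opposite effect and makes the wall-clock
timeout fire early.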
There are lots of other time_second's in networking code. These
still outnumber time_uptime's by about 68:41. rtsock.c uses the weird
expression time_second - time_uptime for metrics. Since time_uptime
is relative to boot time while time_second is relative to the Epoch,
their difference is approximately the boot time expressed as seconds
since the Epoch. That is a very strange value, but it might
nevertheless be useful for converting between monotonic expiry times
and real expiry times; I think it doesn't work even for that, though,
if the real time is stepped.
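
If that difference is used at all, it is presumably as a rebasing
term, something like the sketch below (an assumption on my part;
mono_to_real is a hypothetical helper, not the actual rtsock.c code):

	#include <time.h>

	static time_t time_second;	/* wall clock: seconds since the Epoch */
	static time_t time_uptime;	/* monotonic: seconds since boot */

	/*
	 * Rebase a monotonic expiry onto the real-time scale, e.g. for
	 * export to userland.  time_second - time_uptime is roughly the
	 * boot time as a Unix timestamp, so adding it shifts an
	 * uptime-relative value to an Epoch-relative one.
	 */
	static time_t
	mono_to_real(time_t mono_expire)
	{
		return (mono_expire + (time_second - time_uptime));
	}

If the wall clock is stepped between arming the timeout and doing the
conversion, the converted value is off by the size of the step, which
is the failure mode noted above.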
Bruce