On 20/10/2013 15:56, Samuel Thibault wrote:
> +#define rand_a_b(a, b)\
> +    (rand()%(int)(b-a)+a)
> +#define NDP_Interval rand_a_b(NDP_MinRtrAdvInterval, NDP_MaxRtrAdvInterval)
> +
> +static void ra_timer_handler(void *opaque)
> +{
> +    timer_mod(ra_timer, qemu_clock_get_s(QEMU_CLOCK_VIRTUAL) + NDP_Interval);
> +    ndp_send_ra((Slirp *)opaque);
> +}
> +
> +void icmp6_init(Slirp *slirp)
> +{
> +    srand(time(NULL));
> +    ra_timer = timer_new_s(QEMU_CLOCK_VIRTUAL, ra_timer_handler, slirp);
> +    timer_mod(ra_timer, qemu_clock_get_s(QEMU_CLOCK_VIRTUAL) + NDP_Interval);
> +}

Should the granularity of the timer really be seconds?  Or should you
use the existing millisecond/nanosecond interface and scale the interval,
so that you really get a uniformly distributed random value even for a
very small MaxRtrAdvInterval?  For example, with min=3 and max=4 a
second-granularity timer can only ever pick 3 or 4, which is not really
uniform over the whole [3, 4] interval.
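
Something along these lines (completely untested, just to illustrate the
idea; it reuses the names from your patch and assumes the NDP_*Interval
constants are expressed in seconds) would cover the whole range at
millisecond granularity:

/* Pick the delay in milliseconds, uniformly over [min, max] seconds,
 * so that small intervals still get a reasonable spread. */
#define NDP_Interval_ms \
    (NDP_MinRtrAdvInterval * 1000 + \
     rand() % ((NDP_MaxRtrAdvInterval - NDP_MinRtrAdvInterval) * 1000 + 1))

static void ra_timer_handler(void *opaque)
{
    timer_mod(ra_timer,
              qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + NDP_Interval_ms);
    ndp_send_ra((Slirp *)opaque);
}

void icmp6_init(Slirp *slirp)
{
    srand(time(NULL));
    ra_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, ra_timer_handler, slirp);
    timer_mod(ra_timer,
              qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL) + NDP_Interval_ms);
}

timer_new_ms() and qemu_clock_get_ms() are already there in
include/qemu/timer.h, so no second-granularity helpers should be needed.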

Paolo
