Hi Sameeh,

On Thu, 17 Mar 2016 09:37:57 +0200, sam...@daynix.com wrote:
> @@ -357,6 +357,14 @@ set_interrupt_cause(E1000State *s, int index, uint32_t val)
> }
> mit_update_delay(&mit_delay, s->mac_reg[ITR]);
>
> + /*
> + * According to e1000 SPEC, the Ethernet controller guarantees
> + * a maximum observable interrupt rate of 7813 interrupts/sec.
> + * Thus if mit_delay < 500 then the delay should be set to the
> + * minimum delay possible which is 500.
> + */
> + mit_delay = (mit_delay < 500) ? 500 : mit_delay;
> +
> if (mit_delay) {
> s->mit_timer_on = 1;
> timer_mod(s->mit_timer,
> qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) +
Sorry for the late response.

Formerly, 'mit_delay' could be 0 (when none of the mit_update_delay calls updated it), in which case 'mit_timer' was not armed. The new logic forces mit_delay to 500 even when it was 0 ("unset").

Which approach is correct:
- Either the 'if (mit_delay)' check is now superfluous,
- Or we need to keep the "unset" semantics (i.e. mit_delay == 0 means don't use the timer).

Regards,
Shmulik