On Wednesday, May 06, 2015 at 12:37:38 AM, Stephen Warren wrote: > On 05/05/2015 04:17 PM, Marek Vasut wrote: > > On Tuesday, May 05, 2015 at 11:46:56 PM, Stephen Warren wrote: > >> On 05/04/2015 02:54 PM, Marek Vasut wrote: > >>> Switch to generic timer implementation from lib/time.c . > >>> This also fixes a signed overflow which was in __udelay() > >>> implementation. > >> > >> Can you explain that a bit more? > >> > >>> -void __udelay(unsigned long usec) > >>> -{ > >>> - ulong endtime; > >>> - signed long diff; > >>> - > >>> - endtime = get_timer_us(0) + usec; > >>> - > >>> - do { > >>> - ulong now = get_timer_us(0); > >>> - diff = endtime - now; > >>> - } while (diff >= 0); > >>> -} > >> > >> I believe since endtime and now hold micro seconds, there shouldn't be > >> any overflow so long as the microsecond difference fits into 31 bits, > >> i.e. so long as usec is less than ~36 minutes. I doubt anything is > >> calling __udelay() with that large of a value. Perhaps the issue this > >> patch fixes is in get_timer_us(0) instead, or something else changed as > >> a side-effect? > > > > The generic implementation caters for full 32-bit range, that's all. > > Since the argument of this function is unsigned, it can overflow if > > you use argument which is bigger than 31 bits. OK like that ? > > Sorry, I still don't understand. Both the __udelay() here and in > lib/time.c take an unsigned long argument. I don't see how switching one > out for the other can affect anything if the argument type is the issue.
So, if now is close to 0x7fffffff (which it can), then if endtime is big-ish, diff will become negative and this udelay() will not perform the correct delay, right ? > Besides, what's passing a value >~36 minutes to udelay()? Nothing, but that doesn't mean we can have a possibly broken implementation, right ? Best regards, Marek Vasut _______________________________________________ U-Boot mailing list U-Boot@lists.denx.de http://lists.denx.de/mailman/listinfo/u-boot