This is explained in the .c file with a kernel doc. Basically, the
difference is that timer16 can silently crop the precision, while
utimer16 cannot, and thus explicitly accepts a u16 argument (the
maximum timer interval with usec precision fits in a u16).
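
For reference, a minimal sketch of the two prototypes as I read them
from this thread (the signatures are assumed; the kerneldoc in the .c
file is authoritative):

#include <linux/types.h>

struct gtm_timer;

/*
 * May silently crop precision: if usec does not fit the 16-bit
 * counter at usec resolution, the interval is prescaled and the
 * requested value is rounded to the coarser resolution.
 */
int gtm_set_timer16(struct gtm_timer *tmr, unsigned long usec);

/*
 * Never crops precision: the u16 argument itself guarantees that
 * the interval fits the 16-bit counter at usec resolution, so the
 * timer can be programmed exactly as asked.
 */
int gtm_set_utimer16(struct gtm_timer *tmr, u16 usec);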

Maybe I'm confused about what the utility of cropping the precision in
this way is. I'd also say that _timer16 is poorly named to convey the
behavior. I'm not sure what to call it because I still don't get
exactly why you'd want the precision cropped.

Precision matters for FHCI-like drivers, where the driver, for
example, schedules transactions via the GTM timers, and there the
timings matter a lot.
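
For illustration, an FHCI-like user might look roughly like this (the
helper and struct names here are hypothetical, and the utimer16
signature is assumed from the discussion above):

#include <linux/types.h>

struct gtm_timer;
int gtm_set_utimer16(struct gtm_timer *tmr, u16 usec);	/* assumed */

/*
 * Hypothetical ISR-context helper: the delay until the next USB
 * transaction must be programmed exactly, with no silent rounding,
 * so the u16 variant is the right tool here.
 */
static void fhci_schedule_next_transaction(struct gtm_timer *tmr,
					   u16 delay_usec)
{
	gtm_set_utimer16(tmr, delay_usec);
}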

Though, timer16 crops the precision _only_ if usecs > 65535, so FHCI
_can_ still use _timer16 (because FHCI does not request intervals
> 65535). But I implemented two functions because:

1. I think we don't need unnecessary stuff in the ISRs (this is a weak
  argument since I didn't measure the impact).
2. I wanted to make the API clear (I seem to have failed in this
  undertaking :-), i.e. which functions will behave exactly the way
  you asked (utimer16), and which functions will _silently_ crop the
  precision (timer16): if asked for 1001000 usecs, it will give you
  ~1001000, depending on the GTM frequency (see the arithmetic sketch
  below).
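
To make the crop concrete, here is a standalone arithmetic sketch (the
power-of-two prescaling and the exact numbers are assumptions for
illustration, not the actual GTM algorithm):

#include <stdio.h>

#define COUNTER_MAX	65535UL	/* 16-bit timer counter */

int main(void)
{
	unsigned long usec = 1001000;	/* requested interval */
	unsigned long prescale = 1;

	/* Double the prescaler until the interval fits in 16 bits. */
	while (usec / prescale > COUNTER_MAX)
		prescale *= 2;

	/* Integer division crops the remainder. */
	unsigned long ticks = usec / prescale;

	/* prescale = 16, ticks = 62562, actual = 1000992 usec:
	 * ~1001000, but not exactly what was asked for. */
	printf("asked for %lu usec, got %lu usec\n",
	       usec, ticks * prescale);
	return 0;
}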

I'm fine w/having both. I think they are poorly named, though. I'd
also call them _set_timer, but that's just me.

Maybe something w/the term _exact_ in the name. Is it the case that
w/the precise form we'd have no prescaling? (If so, maybe a comment in
the API about that would help clarity.)

- k