> Is 8000 ticks too slow?
>
> Is 3000 ticks acceptable?  And for what reason?  Are 3000 acceptable just
> because we have an algorithm that performs in 3000 ticks?
>
> My strong preference is still to have a one-fits-all algorithm that
> might very well be slower than an optimal one.  But hey, an ordinary
> division of a 64-bit value by 10 already costs 2300 cycles, so why
> should we hunt cycles just for printf...?
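For scale, the straightforward conversion loop is something like the
sketch below (mine, not anything in avr-libc).  Each digit costs one
64-bit divide-and-remainder, so at the ~2300 cycles per division you
quote, a full 20-digit value is on the order of 46,000 cycles before
any formatting at all:

#include <stdint.h>

/* Sketch only: naive uint64_t-to-decimal conversion.  Assuming the
 * compiler folds the / and % into one library divmod call, this still
 * costs roughly one ~2300-cycle 64-bit division per digit on AVR,
 * i.e. up to ~46,000 cycles for a 20-digit value. */
static char *u64_to_dec(uint64_t v, char buf[21])
{
    char *p = buf + 20;
    *p = '\0';
    do {
        *--p = (char)('0' + (unsigned char)(v % 10));  /* low digit */
        v /= 10;                                        /* drop it   */
    } while (v != 0);
    return p;                 /* points at the most significant digit */
}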
Well, I went and asked the customer.

As I mentioned, the motivating application is the TAPR time interval
counter (TICC).

Info:   http://tapr.org/kits_ticc.html
Source: https://github.com/TAPR/TICC  (Not up to date.)
Manual: http://www.tapr.org/~n8ur/TICC_Manual.pdf

Basically, it timestamps input events to sub-nanosecond resolution and
prints them with picosecond (12 decimal place) resolution.  E.g. fed a
1 Hz input signal, it might print:

104.897999794440
105.897999794492
106.897999794549
107.897999794551
108.897999794553
109.897999794552
110.897999794667

It would like to be able to run until 2^64 picoseconds wrap around,
which takes about 213 days.

Anyway, although it prints only on input transitions, the main
processing loop has a 1 ms schedule to meet (it *can* print at up to
1 kHz, synchronized with the USB polling interval).  Of the 16,000
clock cycles available in that millisecond, 8000 are currently spoken
for, leaving 8000 for formatting and the output device drivers.

So yeah, they'd definitely prefer 4000 cycles to 8000.  But they're
going to use custom code *anyway*, since they don't want to wait for an
avr-libc release, so that doesn't have to determine what avr-libc does.
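For concreteness, here is a rough sketch (mine, not the TICC firmware's
code) of the formatting job itself: printing a 64-bit picosecond count
as seconds with 12 fractional digits.  Done this way it needs only a
couple of 64-bit divide/modulo pairs instead of one per digit, with the
rest handled as 32-bit values:

#include <stdint.h>
#include <stdio.h>

#define PS_PER_SEC 1000000000000ULL   /* 10^12 */

/* Hypothetical helper, not from the TICC source: render a picosecond
 * count as "seconds.pppppppppppp".  2^64 ps is only ~1.8e7 s, so the
 * integer part fits comfortably in 32 bits. */
static void format_ps(uint64_t ps, char *out, size_t len)
{
    uint32_t sec  = (uint32_t)(ps / PS_PER_SEC);    /* whole seconds    */
    uint64_t frac = ps % PS_PER_SEC;                /* 0 .. 10^12 - 1   */
    uint32_t hi   = (uint32_t)(frac / 1000000UL);   /* upper 6 digits   */
    uint32_t lo   = (uint32_t)(frac % 1000000UL);   /* lower 6 digits   */

    snprintf(out, len, "%lu.%06lu%06lu",
             (unsigned long)sec, (unsigned long)hi, (unsigned long)lo);
}

/* format_ps(104897999794440ULL, buf, sizeof buf) -> "104.897999794440" */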