On 11/14/23 06:42, Waldek Hebisch wrote:
> On Sun, Nov 05, 2023 at 10:28:38AM +0800, Qian Yun wrote:
>> I introduced a new variable "roundingError". Its value was mistakenly
>> added to the 'other' class and then added to "total" again, causing
>> duplication. So I changed it to be added only to the 'other' named
>> stat.
>> The comparison "if n >= 0.01" is wrong: since we keep 2 digits of
>> accuracy, it should compare against 0.005; see "significantStat".
>> Also, "roundStat" is useless because it rounds to the third digit.
>> We can rely on "FORMAT" to do the rounding.
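A minimal Python sketch of the threshold argument (illustrative only, not the actual FriCAS/BOOT code): when two digits are printed and the formatter does the rounding, 0.005 is the smallest value that still displays as nonzero, so an "n >= 0.01" cutoff silently drops times that would have been visible.

```python
def prints_as_nonzero(t: float) -> bool:
    # Two-digit display, letting the formatter round
    # (analogous to relying on Lisp's FORMAT).
    return f"{t:.2f}" != "0.00"

# With an "n >= 0.01" test, times in [0.005, 0.01) would be excluded
# from the statistics even though they display as 0.01:
print(prints_as_nonzero(0.007))   # True  -> shown as 0.01
print(prints_as_nonzero(0.004))   # False -> shown as 0.00
```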
> I suspect that the intent of "if n >= 0.01" is different than you
> think. Namely, small statistics are likely to have significant error,
> so it makes sense to suppress printing of info for small classes.
> In other words, it makes sense to have a threshold which is bigger
> than the threshold for rounding to 0.
> How about we do no rounding at all, print 3 digits (or more, see
> below) after the decimal point, and let the user decide whether the
> last digit is to be trusted?
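A sketch of the two distinct thresholds being discussed (illustrative Python; the constant names and cutoff values are made up for the example): a suppression threshold for noise-dominated classes, deliberately larger than the value at which a 3-digit display rounds to zero.

```python
from typing import Optional

DISPLAY_DIGITS = 3      # print 3 digits, no separate rounding pass
SUPPRESS_BELOW = 0.01   # hypothetical cutoff: hide noise-dominated classes

def format_stat(name: str, t: float) -> Optional[str]:
    # The suppression threshold is deliberately larger than the
    # "rounds to 0.000" threshold (0.0005): very small times are
    # dominated by measurement error, so they are not shown at all.
    if t < SUPPRESS_BELOW:
        return None
    return f"{t:.{DISPLAY_DIGITS}f}({name})"

print(format_stat("evaluation", 0.0234))  # -> 0.023(evaluation)
print(format_stat("other", 0.002))        # -> None (suppressed)
```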
> AFAICS the original code first computes long statistics and, if
> non-long statistics are desired, throws out the computed result and
> starts again. It would make sense to have common code which just
> computes the desired statistics.
I can do this improvement.
> Also, before your recent changes the long format skipped the "other"
> class when the corresponding time would print as 0 (or maybe was
> small). Now it is:
>
>   Time: 0.02(evaluation) + 0.00(other) = 0.03 sec
This happened before as well (when the number is less than 0.005):

  Time: 0.00(O) = 0.00 sec
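The apparent inconsistency in such lines (0.02 + 0.00 = 0.03) comes from rounding each class and the total independently. A Python illustration with hypothetical raw times that reproduce the effect:

```python
# Hypothetical raw per-class times chosen to reproduce the effect:
times = {"evaluation": 0.024, "other": 0.004}
total = sum(times.values())

# Each class and the total are rounded independently for display,
# so the printed parts need not add up to the printed total.
parts = " + ".join(f"{t:.2f}({name})" for name, t in times.items())
print(f"Time: {parts} = {total:.2f} sec")
# -> Time: 0.02(evaluation) + 0.00(other) = 0.03 sec
```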
> One more thing: did you try to get more accurate timing? In a
> different context I have found microsecond timings useful: at least
> on Linux it is possible to measure the real time taken by a routine
> with microsecond accuracy. I am not sure how much accuracy one can
> get from Lisp routines, but IIRC Clozure CL reports time with
> microsecond resolution.
The timing accuracy is determined by $timerTicksPerSecond, which is
INTERNAL-TIME-UNITS-PER-SECOND in Lisp:

  GCL:     100
  CLISP:   1000000  (Windows: 10^7)
  ECL:     1000000
  SBCL:    1000000  (x86: 1000)
  CMUCL:   100
  CCL:     1000000  (x86: 1000)
  ABCL:    1000
So that's probably why we were printing 2 digits in the past. (And
some Lisps have lower accuracy due to a 32-bit limitation.) If that's
the case, then the whole rounding logic did nothing in the GCL era.
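A sketch of that resolution point (illustrative Python; the helper name is made up, and it only handles the power-of-ten resolutions listed above): with 100 ticks per second, two printed digits already exhaust what the clock measures, so any finer rounding logic was a no-op under GCL.

```python
def resolvable_digits(ticks_per_second: int) -> int:
    # For a power-of-ten tick rate, the number of decimal places
    # of a seconds value that the clock can actually distinguish.
    return len(str(ticks_per_second)) - 1

print(resolvable_digits(100))      # 2 -> GCL, CMUCL
print(resolvable_digits(1000))     # 3 -> ABCL, 32-bit SBCL/CCL
print(resolvable_digits(1000000))  # 6 -> CLISP, ECL, SBCL, CCL
```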
Shall we make it configurable how many digits are printed?
- Qian
--
You received this message because you are subscribed to the Google Groups "FriCAS -
computer algebra system" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/fricas-devel/fc76ef3a-174e-489d-9895-b0e01c373d46%40gmail.com.