Pat LeSmithe wrote:
As Jason said, the docstrings do indicate what model (a simple one, for now)
we're using to transform colors. We mention in several places in the
colors.py docstrings that we reduce the R, G, and B components modulo one.
But we could be more explicit.
Anyway, we could make it possible to choose between this and "capped"
behavior via a sage.plot.colors.MODULE_SCOPE_VARIABLE or some other mechanism.
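For concreteness, here is a minimal sketch of the two candidate behaviors
(the function names are hypothetical, not anything in sage.plot.colors):

def wrap_channel(c):
    # Reduce a channel modulo one, as colors.py currently does:
    # 1.2 -> ~0.2, -0.1 -> ~0.9.
    return c % 1.0

def cap_channel(c):
    # Clamp a channel to [0, 1], the proposed "capped" behavior:
    # 1.2 -> 1.0, -0.1 -> 0.0.
    return min(max(c, 0.0), 1.0)

for c in (1.2, -0.1, 0.5):
    print(c, wrap_channel(c), cap_channel(c))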
By the way, is it possible to do the equivalent of
plot(sin(x), (x, -10, 10), color=Color(x, 1-x, x))
(succinctly)? Wrap-around might be useful here.
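One rough approximation, in case it helps (a sketch only, untested; it
relies on Color reducing each channel modulo one and on Graphics objects
being summable):

g = sum(plot(sin(x), (x, a, a + 0.5),
             color=Color(a, 1 - a, a))
        for a in srange(-10, 10, 0.5))
g.show()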
On 03/05/2010 05:04 PM, Dr. David Kirkby wrote:
Expected:
RGB color (0.51829585732141792, 0.49333037605210095, 0.0)
Got:
RGB color (0.51829585732141814, 0.49333037605210117, 0.0)
Does this stem from the slightly different value of e on Solaris?
http://trac.sagemath.org/sage_trac/ticket/8374
http://trac.sagemath.org/sage_trac/ticket/8375
I expect the different value of 'e' would have caused some of the difference.
The whole floating-point processor is slightly different, so I'm not surprised
there are differences.
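Even a one-ulp difference in a stored constant shifts the last couple of
printed digits, which is the size of the discrepancy above. A rough
stand-in in plain Python (math.nextafter needs Python >= 3.9, and the
arithmetic is only illustrative, not what colors.py actually computes):

import math

e_lo = math.e                             # one platform's value of e
e_hi = math.nextafter(math.e, math.inf)   # one ulp larger, as on another
print(repr(e_lo % 1.0))
print(repr(e_hi % 1.0))   # differs from the previous line in the last digits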
An easy solution is to change the doctest to remove a few digits and add a
few ...'s, and it would pass. But I feel uneasy about doing that myself.
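For the record, the ellipsis mechanism looks like this in a plain Python
doctest (Sage's doctest framework allows ... in expected output by default;
the e^2 example here is made up for illustration):

import doctest

def example():
    """
    Trailing digits vary across FPUs, so match only a prefix.

    >>> import math
    >>> math.exp(1) ** 2  # doctest: +ELLIPSIS
    7.3890560989...
    """

doctest.testmod()   # passes on any IEEE-754 double platform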
It seems to me that in many cases the only justification for the "Expected"
result is what someone gets on their computer while testing their own code.
A similar, but more severe, case of this occurred the other day with 'lcalc',
where the following failure was observed on what I assume was an Intel or AMD
CPU - not SPARC for sure.
Expected:
0.305999824716...
Got:
0.305999723948232
http://trac.sagemath.org/sage_trac/ticket/5396#comment:27
It was then proposed that the doctest be changed from
0.305999824716...
to
0.305999..
I questioned this, and John Cremona then attempted to compute a
higher-precision result by a different means. At first he got 0.017188297766,
which is nothing like 0.305. Later, when John realised that lcalc and his
program used a different "reference" (I don't understand the maths), he
computed a high-precision result of
0.305999773834052301820483683321676474452637774590
Had that been done earlier, the doctest would never have been written to expect
0.305999824716...
Another recent test failure was reported at
http://groups.google.co.uk/group/sage-devel/browse_thread/thread/5c157bf580eb7717/86aed034db61e7bd?lnk=gst&q=not+write+the+exact+of+high+precision#86aed034db61e7bd
sage: h = integral(sin(x)/x^2, (x, 1, pi/2)); h
integrate(sin(x)/x^2, x, 1, 1/2*pi)
sage: h.n()
which failed on someone's computer with:
Expected:
0.33944794097891573
Got:
0.33944794097891567
When I computed this to high precision using arbitrary-precision arithmetic,
I got
0.33944794097891567969192717186521861799447698826918
so the failure was in fact closer to the true value than the "Expected" value.
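For anyone who wants to reproduce that check, something along these lines
with mpmath (which ships with Sage) should do it, working to the same 50
digits:

from mpmath import mp, sin, pi, quad

mp.dps = 50   # 50 significant digits
val = quad(lambda x: sin(x) / x**2, [1, pi / 2])
print(val)    # should agree with the 50-digit value above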
So, summed up in one sentence: I believe the "Expected" result should be
justified as thoroughly as possible, and not simply put there because it is
what someone got when they ran their own code on their own computer.
Dave