On Fri, Aug 8, 2014 at 6:57 PM, rjf <fate...@gmail.com> wrote:
>
> On Thursday, August 7, 2014 10:55:37 PM UTC-7, Robert Bradshaw wrote:
>>
>> On Thu, Aug 7, 2014 at 9:02 AM, rjf <fat...@gmail.com> wrote:
>> >
>> > On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw wrote:
>> >>
>> >> There are two representations of the same canonical object.
>> >
>> > The (computer algebra) use of the term, as in "simplified to a canonical form", means the representation is canonical. It doesn't make much sense to claim that all these are canonical: 1+1, 2, 2*x^0, sin(x)^2+cos(x)^2 + exp(0).
>>
>> The point was that there's a canonical domain in which to do the computation.
>
> I have not previously encountered the term "canonical domain". There is a CAS literature which includes the concept of simplification to a canonical form.
>
> There is also a useful concept of a zero-equivalence test, whereby E1-E2 can be shown to be zero, although there is not necessarily a simplification routine that will "canonically simplify" E1 to E3 and also E2 to E3.
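To make those two notions concrete in Sage terms (a rough sketch only; bool(...) on an equation and simplify_full() are the usual Sage entry points, but exact behaviour and output can vary between versions):

sage: E1 = sin(x)^2 + cos(x)^2 + exp(0)
sage: bool(E1 == 2)          # a zero-equivalence style test (may consult Maxima)
True
sage: E1.simplify_full()     # simplification towards a normal form
2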
You have to think beyond just the limited domain of a computer *algebra* system. If I want to do arithmetic between a \in Z and b \in Z/nZ, I could either lift b to Z or push a down to Z/nZ. Only one of these maps is canonical.

>> We also have an object called the ring of integers, but really it's the ring of integers that fits into the memory of your computer. Should we not call it a Ring?
>
> The domain of arbitrary-precision integers is an excellent model of the ring of integers. It is true that one can specify a computation that would fill up the memory of all the computers in existence, or even all the atoms in the (known?) universe. Presumably a well-constructed support system will give an error message on much smaller examples. I assume that your Real Field operation of division would give an error if the result is inexact.

Such a system would be pedantic to the point of being useless.

>> >> >> It is more conservative to convert operands to the domain with less precision.
>> >> >
>> >> > Why do you say that? You can always exactly convert a float number in radix b to an equal number of higher precision in radix b by appending zeros. So it is more conserving (of values) to do so, rather than clipping off bits from the other.
>> >>
>> >> Clipping bits (or digits) is exactly how one is taught to deal with significant figures in grade school, and follows the principle of least surprise (though floating point numbers like to play surprises on you no matter what). It's also what floating point arithmetic does when the exponents are different.
>> >
>> > It is of course also taught in physics and chemistry labs, and I used this myself in the days when slide-rules were used and you could read only 3 or so significant figures. That doesn't make it suitable for a computer system. There are many things you learn along the way that are simplified versions of the more fully elaborated systems of higher math. What did you know about the branch cuts in the complex logarithm, or log(-1), when you were first introduced to log?
>>
>> Only being able to store 53 significant bits is completely analogous to only being able to read 3 significant (decimal) figures.
>
> Actually this analogy is false. The 3 digits (sometimes 4) from a slide rule are the best that can be read out because of the inherent uncertainty in the rulings and construction of the slide rule, the human eye reading the lines, etc. So if I read my slide rule and say 0.25, it is because I think it is closer to 0.25 than 0.24 or 0.26. There is uncertainty there.
> If a floating point number is computed as 0.25, there is no uncertainty in the representation per se. It is 1/4, exactly a binary fraction, etc. Now you could use this representation in various ways, e.g. 0.25+-0.01, storing 2 numbers representing a center and a "radius", or an interval, or .... But the floating point number itself is simply a computer representation of a particular rational number aaa x 2^bbb. Nothing more, nothing less. And in particular it does NOT mean that bits 54, 55, 56, ... are uncertain. Those bits do not exist in the representation and are irrelevant for ascertaining the value of the number aaa x 2^bbb.
>
> So the analogy is false.
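No argument that a double is, as a bit pattern, exactly some dyadic rational; a quick check from plain Python (the same lines work in a Sage session) shows exactly which one:

sage: float(0.25).as_integer_ratio()
(1, 4)
sage: float(1.3).as_integer_ratio()
(5854679515581645, 4503599627370496)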
I would argue that most floating point numbers are either (1) real-world measurements or (2) intermediate results, both of which are (again, likely) approximations to the value they're representing. When they are measured/stored, they are truncated due to the "construction of the [machine], the [sensor] reading the [values], etc." Thus the analogy.

> On the other hand, the
>
>> I think the analogy is very suitable for a computer system. It can clearly be made much more rigorous and precise.
>
> What you are referring to is sometimes called significance arithmetic, and it has been thoroughly discredited.
> Sadly, Wolfram the physicist put it in Mathematica.

Nope, that's not what I'm referring to.

>> Or are you seriously proposing that when adding 3.14159 and 1e-100 it makes more sense, by default, to pad the left hand side with zeros (whether in binary or decimal) and return 3.1415900000...0001 as the result?
>
> If you did so, you would preserve the identity (a+b)-a = b.
>
> If you round to some number of bits, say 53, with a=3.14159 and b=1e-100, the left side is 0, and the right side is 1e-100. The relative error in the answer is, um, infinite.
>
> Now if the user specified the kind of arithmetic explicitly, or even implicitly by saying "use IEEE754 binary floating point arithmetic everywhere", then I could go along with that.

You would suggest that IEEE754 be requested by the user (perhaps globally) before using it? Is that how Maxima works? (I think not.)

>> > So it sounds like you actually read the input as 13/10, because only then can you approximate it to higher precision than 53 bits or whatever. Why not just admit this instead of talking about 1.3.
>>
>> In this case the user gives us a decimal literal. Yes, this literal is equal to 13/10. We defer interpreting this as a 53-bit binary floating point number long enough for the user to tell us to interpret it differently. This prevents surprises like
>>
>> sage: RealField(100)(float(1.3))
>> 1.3000000000000000444089209850
>>
>> or, more subtly,
>>
>> sage: sqrt(RealField(100)(float(1.3)))
>> 1.1401754250991379986106491649
>>
>> instead of
>>
>> sage: sqrt(RealField(100)(1.3))
>> 1.1401754250991379791360490256
>>
>> When you write 1.3, do you really think 5854679515581645 / 4503599627370496, or is your head really thinking "the closest thing to 13/10 that I can get given my choice of floating point representation"? I bet it's the latter, which is why we do what we do.
>
> I suspect it is not what python does.

It is, in the degenerate case that Python only has one native choice of floating point representation. It's also (to bring things full circle) what Julia does.

> It is what Macsyma does if you write 1.3b0 to indicate "bigfloat".

You're still skirting the question of what *you* mean when you write 1.3.

- Robert
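P.S. For anyone following along, here is a small sketch of the literal handling described above (QQ is Sage's rational field; the printed precision may differ between versions):

sage: QQ(1.3)                # the decimal literal is kept exact...
13/10
sage: RealField(100)(1.3)    # ...until you commit to a precision
1.3000000000000000000000000000
sage: RealField(100)(float(1.3))   # versus going through a 53-bit double first
1.3000000000000000444089209850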