On Friday, 22 August 2014 21:58:15 UTC+2, rjf wrote:
>
>
>
> On Wednesday, August 20, 2014 9:54:02 PM UTC-7, Robert Bradshaw wrote:
>>
>> On Fri, Aug 8, 2014 at 6:57 PM, rjf <fat...@gmail.com> wrote: 
>> > 
>> > 
>> > On Thursday, August 7, 2014 10:55:37 PM UTC-7, Robert Bradshaw wrote: 
>> >> 
>> >> On Thu, Aug 7, 2014 at 9:02 AM, rjf <fat...@gmail.com> wrote: 
>> >> > 
>> >> > 
>> >> > On Wednesday, August 6, 2014 8:11:21 PM UTC-7, Robert Bradshaw 
>> wrote: 
>> >> >> 
>> >> >> 
>> >> >> 
>> >> >> There are two representations of the same canonical object. 
>> >> > 
>> >> > 
>> >> > The (computer algebra) use of the term, as in "simplified to a 
>> >> > canonical form", means the representation is canonical.  It doesn't 
>> >> > make much sense to claim that all these are canonical: 
>> >> > 1+1, 2, 2*x^0, sin(x)^2+cos(x)^2 + exp(0). 
>> >> 
>> >> The point was that there's a canonical domain in which to do the 
>> >> computation. 
>> > 
>> > I have not previously encountered the term "canonical domain".  There is 
>> > a CAS literature which includes the concept of simplification to a 
>> > canonical form.  There is also a useful concept of a zero-equivalence 
>> > test, whereby E1-E2 can be shown to be zero, although there is not 
>> > necessarily a simplification routine that will "canonically simplify" 
>> > E1 to E3 and also E2 to E3. 
>>
>> You have to think beyond just the limited domain of a computer *algebra* 
>> system. 
>>
>
> Actually I am thinking in terms of computer representation, not just a CAS.
> You appear to be thinking in some extra-computational way that bits are 
> not bits.
>
> There is a quote from Lewis Carroll's Humpty Dumpty, to the effect that
> words mean whatever he says they mean,… who, after all, is the master.
>
> You and I apparently disagree about the term "canonical".
>  
>
>>
>> If I want to do arithmetic between a \in Z and b \in Z/nZ, I could 
>> either lift b to Z or push a down to Z/nZ. Only one of these maps is 
>> canonical. 
>>
>
> I don't know about canonical maps.  The term "canonical representation"
> makes sense to me.
>

He means this. In algebra Z/nZ is actually a ring modulo an ideal. Z is the 
ring, nZ is the ideal.

The elements of Z/nZ are usually written a + nZ. It means precisely this:

a + nZ = { x in Z such that x = a + nk for some k in Z }

So it is a *set* of numbers (actually, in algebra it is called a coset).

When you write Mod(a, n) you really mean a + nZ if you are an algebraist, 
i.e. the *set* of numbers congruent to a modulo n.

So Z/nZ consists of (co)sets of the form C = Mod(a, n) = a + nZ. If you 
like, Z/nZ is a set of sets (actually a group of cosets in algebra).

There is a canonical map from Z to Z/nZ: we map b in Z to the (co)set C in 
Z/nZ that contains b. 

If a = b mod n then C = Mod(a, n) is the *only* one of these sets which 
contains b. So the map is canonical. We only have one choice.

But there is no canonical map going the other way. For any such (co)set C, 
which element of Z are you going to pick? Any of the elements of C will do.

You can make an arbitrary choice, e.g. pick the unique element of C in the 
range [0, n). Or you could pick the unique element in the range (-n/2, 
n/2]. But that is not a canonical choice. You had to make an arbitrary 
choice. Someone else may have made a different choice. (Indeed some people 
use the latter choice because some algorithms, such as gcd and Jacobi 
symbols, run faster with this choice.)
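
In code the two conventions might look like this (again just an 
illustrative Python sketch with made-up names, not anyone's actual 
implementation):

    def rep_nonnegative(a, n):
        # the unique member of a + nZ in the range [0, n)
        return a % n

    def rep_balanced(a, n):
        # the unique member of a + nZ in the range (-n/2, n/2]
        r = a % n
        return r if r <= n // 2 else r - n

    # Both return a perfectly good member of the coset 7 + 10Z,
    # but they disagree, because the choice was arbitrary:
    print(rep_nonnegative(7, 10), rep_balanced(7, 10))   # prints 7 -3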

So a canonical map means there is only one way you could define the map. You 
don't need to tell anyone what your map is, because they *have* to make the 
same choice as you, as there are no alternatives. That's the meaning of 
canonical in mathematics.

It's probably not terminology they teach in Computer Algebra, but it is 
taught to undergraduates around the world in pure mathematics.

The whole argument here is that because only one direction gives a 
canonical map, coercion must only proceed in that direction. Otherwise your 
computer algebra system is making a choice that someone else's computer 
algebra system might not make.
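
As I understand it, that is exactly what Sage's coercion does when you mix 
an integer with an element of Z/nZ: the integer is pushed down into Z/nZ. 
Here is a rough sketch of that rule, using a toy ZMod class I am making up 
for this post (this is not Sage's coercion framework):

    class ZMod:
        """A toy element of Z/nZ, only to illustrate the direction of coercion."""
        def __init__(self, a, n):
            self.n = n
            self.a = a % n   # internal representative; any member of a + nZ names the same coset
        def __add__(self, other):
            if isinstance(other, int):
                # Mixing Z with Z/nZ: apply the canonical map Z -> Z/nZ to the
                # integer and add inside Z/nZ.  Lifting the coset to Z instead
                # would force an arbitrary choice of representative.
                other = ZMod(other, self.n)
            return ZMod(self.a + other.a, self.n)
        __radd__ = __add__
        def __repr__(self):
            return "Mod(%d, %d)" % (self.a, self.n)

    print(3 + ZMod(6, 7))   # prints Mod(2, 7): the integer was pushed into Z/7Z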
  

>  
>
>>
>> >> We also have an object called the ring of integers, but really it's 
>> >> the ring of integers that fits into the memory of your computer. 
>> >> Should we not call it a Ring? 
>> > 
>> > The domain of arbitrary-precision integers is an excellent model of the 
>> > ring of integers.  It is true that one can specify a computation that 
>> > would fill up the memory of all the computers in existence, or even all 
>> > the atoms in the (known?) universe.  Presumably a well-constructed 
>> > support system will give an error message on much smaller examples.  I 
>> > assume that your Real Field operation of division would give an error 
>> > if the result is inexact. 
>>
>> Such a system would be pedantic to the point of being unuseful. 
>>
>
> Quite the contrary. IEEE 754 specifies an "inexact" flag.
>  
>
>>
>> ….snip...
>
>  
>
>> > or  log(-1)  when you were first introduced to log? 
>> >> 
>> >> Only being able to store 53 significant bits is completely analogous 
>> >> to only being able to read 3 significant (decimal) figures. 
>> > 
>> > 
>> > Actually this analogy is false.  The 3 digits (sometimes 4) from a 
>> > slide rule are the best that can be read out because of the inherent 
>> > uncertainty in the rulings and construction of the slide rule, the human 
>> > eye reading the lines, etc.  So if I read my slide rule and say 0.25, it 
>> > is because I think it is closer to 0.25 than 0.24 or 0.26.  There is 
>> > uncertainty there. 
>> > If a floating point number is computed as 0.25, there is no uncertainty 
>> > in the representation per se.  It is 1/4, exactly a binary fraction, etc. 
>> > Now you could use this representation in various ways, e.g. 
>> > 0.25+-0.01, storing 2 numbers representing a center and a "radius", 
>> > or an interval, or ....  But the floating point number itself is simply 
>> > a computer representation of a particular rational number aaa x 2^bbb. 
>> > Nothing more, nothing less.  And in particular it does NOT mean 
>> > that bits 54,55,56... are uncertain.  Those bits do not exist in the 
>> > representation and are irrelevant for ascertaining the value of the 
>> > number aaa x 2^bbb. 
>> > 
>> > So the analogy is false. 
>>
>> I would argue that most floating point numbers are either (1) 
>> real-world measurements or (2) intermediate results, both of which are 
>> (again, likely) approximations to the value they're representing.
>
>
> You could assert this, but what is the point?  You might as well assert
> that the computer number system consists of the integers 1, 2, 3, infinity, 
> because (according to George Gamow) that's what some humans use for 
> counting.
>
>> When they are measured/stored, they are truncated due to the "construction 
>> of the [machine], the [sensor] reading the [values], etc." Thus the 
>> analogy. 
>>
>
> Since the computer has no inherent way of recording in a floating-point 
> number anything more than a single exact rational number, that is the 
> starting point for arithmetic.  If you want more information about the 
> possible error, you record TWO numbers.   
>
>>
>> > On the other hand, the 
>> > 
>> >> 
>> >> I think 
>> >> the analogy is very suitable for a computer system. It can clearly be 
>> >> made much more rigorous and precise. 
>> > 
>> > What you are referring to is sometimes called significance arithmetic, 
>> > and it has been thoroughly discredited. 
>> > Sadly, Wolfram the physicist put it in Mathematica. 
>>
>> Nope, that's not what I'm referring to. 
>>
> Can you provide a reference for what you ARE referring to?
>  
>
>>
>> >> Or are you seriously proposing when adding 3.14159 and 1e-100 it makes 
>> >> more sense, by default, to pad the left hand side with zeros (whether 
>> >> in binary or decimal) and return 3.1415900000...0001 as the result? 
>> > 
>> > 
>> > If you did so, you would preserve the  identity  (a+b)-a   =  b 
>> > 
>> > If you round to some number of bits, say 53, with a=3.14159 and 
>> > b=1e-100, the left side is 0 and the right side is 1e-100.  The relative 
>> > error in the answer is, um, infinite. 
>> > 
>> > Now if the user specified the kind of arithmetic explicitly, or even 
>> > implicitly by saying "use IEEE754 binary floating point arithmetic 
>> > everywhere", then I could go along with that. 
>>
>> You would suggest that this IEEE754 arithmetic be requested by the user 
>> (perhaps globally) before using it? Is that how Maxima works? (I think 
>> not.) 
>>
>
> Numbers that appear with a decimal point are read in as the default float 
> of the underlying lisp system, which is, so far as I know, IEEE754 double, 
> in one form or another in the systems in which Maxima generally runs.   
> There are options for higher precisions in some lisps.  If a number is 
> written as say 1.3b0   then Maxima's software big floats are used.
>  
>
>>
>> >> > So it sounds like you actually read the input as 13/10, because only 
>> >> > then can you approximate it to higher precision than 53 bits or 
>> >> > whatever.  Why not just admit this instead of talking about 1.3. 
>> >> 
>> >> In this case the user gives us a decimal literal. Yes, this literal is 
>> >> equal to 13/10. We defer interpreting this as a 53-bit binary floating 
>> >> point number long enough for the user to tell us to interpret it 
>> >> differently. This prevents surprises like 
>> >> 
>> >> sage: RealField(100)(float(1.3)) 
>> >> 1.3000000000000000444089209850 
>> >> 
>> >> or, more subtly 
>> >> 
>> >> sage: sqrt(RealField(100)(float(1.3))) 
>> >> 1.1401754250991379986106491649 
>> >> 
>> >> instead of 
>> >> 
>> >> sage: sqrt(RealField(100)(1.3)) 
>> >> 1.1401754250991379791360490256 
>> >> 
>> >> When you write 1.3, do you really think 5854679515581645 / 
>> >> 4503599627370496, or is your head really thinking "the closest thing 
>> >> to 13/10 that I can get given my choice of floating point 
>> >> representation?" I bet it's the latter, which is why we do what we do. 
>> > 
>> > I suspect it is not what python does. 
>>
>> It is, in the degenerate case that Python only has one native choice 
>> of floating point representation. It's also what (to bring things full 
>> circle) Julia does too. 
>>
>> > It is what Macsyma does if you write 1.3b0   to indicate "bigfloat". 
>>
>> You're still skirting the question of what *you* mean when you write 
>> 1.3. 
>>
>
>  It is irrelevant in general what I mean; the questions seem to be what is 
> mathematically appropriate and/or what does the user expect.
>
> Depending on what computer system I am using, I expect different semantics 
> for 1.3 --- FORTRAN/LISP/Mathematica/Maxima/MockMMA.
>
> For example, in MockMMA, a system I wrote, 1.3  is exactly 13/10.
> In FORTRAN  1.3d0  and 1.3e0 mean possibly different things.
> In Mathematica, 1.3 means different things depending on how many zeros
> follow it.
> 1.3000000000000000000000   vs
> 1.3000000000000000000
> etc
>
> RJF
>
>  
>
>>
>> - Robert 
>>
>
