On Wed, Jul 1, 2009 at 10:23 PM, Jason Grout<jason-s...@creativetrax.com> wrote:
>
> Ondrej Certik wrote:
>> On Wed, Jul 1, 2009 at 8:56 PM, Jason Grout<jason-s...@creativetrax.com> 
>> wrote:
>>> Ondrej Certik wrote:
>>>> On Wed, Jul 1, 2009 at 6:33 PM, William Stein<wst...@gmail.com> wrote:
>>>>> 2009/7/2 Stéfan van der Walt <ste...@sun.ac.za>:
>>>>>> 2009/7/1 William Stein <wst...@gmail.com>:
>>>>>>> Perhaps I'm missing the point, but I'm taking this as a message to
>>>>>>> focus in Sage more on the algebraic/symbolic side of mathematics
>>>>>>> (e.g., Magma, Maple, Mathematica) rather than the numerical side, at
>>>>>>> least for the time being.    I don't have a problem with that
>>>>>>> personally, since that is what I do best, and where most of my
>>>>>>> personal interests are.
>>>>>> I'm joining this conversation late, so I am glad to see the
>>>>>> conclusions reached so far (not to give up on numerics!).
>>>>>>
>>>>>> If I may highlight a distinction (maybe obvious to some) between SAGE
>>>>>> and NumPy-based experiments:
>>>>>>
>>>>>> Sage provides a "language" for eloquently expressing
>>>>>> algebraic/symbolic problems.  On the other hand, NumPy is mainly a
>>>>>> library (that provides a data structure with accompanying operations).
>>>>>>
>>>>>> This means that users of that library expect to run their code
>>>>>> unmodified on any Python platform where it is available (Sage
>>>>>> included).  Whether this expectation is reasonable or not is up for
>>>>>> debate, but I certainly found it surprising that I had to modify my
>>>>>> code in order to compute things in Sage.
>>>>> Either that, or you click on the "python" switch at the top of the
>>>>> notebook or type "sage -ipython", or from within Sage you type
>>>>> "preparser(False)".
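
(A quick illustration of what the preparser does and how to switch it off; this
is only a minimal sketch from a Sage session, and the exact preparsed string
may vary between versions:)

sage: preparse('x = 2/3')   # the string Sage actually hands to Python
'x = Integer(2)/Integer(3)'
sage: preparser(False)      # turn the preparser off for the rest of the session
sage: 2/3                   # plain Python division now (0 under the Python 2 of that era)
0
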
>>>>>
>>>>>> On a more practical level, it frightens me that Maxima spawns so
>>>>>> easily without my even knowing, simply by referring to a certain
>>>>>> variable or by using the wrong "exp".
>>>>> FYI, that is no longer the case.  In Sage-4.0, we replaced Maxima by
>>>>> the C++ library Ginac (http://www.ginac.de/) for all basic symbolic
>>>>> manipulation.
>>>>>
>>>>>>  That's the kind of thing that kills numerics performance!
>>>>> There is often a tension between numerics performance and correct
>>>>> answers.  The following is in MATLAB:
>>>>>
>>>>>>> format rat;
>>>>>>> a = [-101, 208, 105; 76, -187, 76]
>>>>>>> rref(a)
>>>>> ans =
>>>>>       1              0          -2567/223
>>>>>       0              1          -3839/755
>>>>>
>>>>> The same echelon form in Sage:
>>>>>
>>>>> a = matrix(QQ, 2, [-101, 208, 105,   76, -187, 76])
>>>>> a.echelon_form()
>>>>> [          1           0 -35443/3079]
>>>>> [          0           1 -15656/3079]
>>>>>
>>>>> Trying the same computation on larger matrices, one sees that
>>>>> matlab is way faster than Sage.  But of course the answers are
>>>>> nonsense... to anybody not doing numerics.  To a numerical person they
>>>>> mean something, because matlab is really just doing everything with
>>>>> floats, and "format rat" just makes them print as rational
>>>>> approximations to those floats.
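
One can reproduce that reprinting step in plain Python to see the mechanism
(just an illustration; MATLAB's "format rat" uses its own continued-fraction
tolerance, and the denominator bound below is an arbitrary choice):

from fractions import Fraction

exact = Fraction(-35443, 3079)        # Sage's exact entry from above
as_double = float(exact)              # what MATLAB really stores: roughly -11.5112
reprint = Fraction(as_double).limit_denominator(250)
print(reprint)                        # a small-denominator reprint of the float;
                                      # with this bound it comes out as -2567/223,
                                      # matching MATLAB's "format rat" display
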
>>>>>
>>>>> So indeed, mixing numerics with mathematics is a very difficult
>>>>> problem, and nobody really seems to have solved it to everybody's
>>>>> satisfaction.
>>>> I think people need both approaches, but why couldn't you just pass an
>>>> option to echelon_form() to use fast floating-point numbers (besides
>>>> the fact that nobody has implemented it yet)? Then we can have both.
>>>
>>> Because it is pretty easy to do:
>>>
>>> A.change_ring(RR).echelon_form()
>>>
>>> which also allows things like
>>>
>>> A.change_ring(RealField(200)).echelon_form()
>>>
>>> for extended precision.
>>>
>>> Is this not sufficient?
>>
>> If it's as fast as numpy, then I think it's sufficient.
>
> Numpy does not do rref because it has limited utility for approximate
> numeric matrices.  See this thread:
> http://www.mail-archive.com/numpy-discuss...@scipy.org/msg13880.html
>
> If you want to have Sage apply the generic algorithm (*not* using
> partial pivoting!) to a numpy matrix, you can do
> A.change_ring(RDF).echelon_form() (this actually uses numpy arrays
> behind the scenes).  As pointed out in the thread noted above, this may
> just end up being nonsense (as it is with Matlab in the above example!).
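
(For what it's worth, the idiomatic numpy route here is a solver rather than
rref; a minimal sketch using the matrix from William's example:)

import numpy as np

A = np.array([[-101.0,  208.0],
              [  76.0, -187.0]])
b = np.array([105.0, 76.0])

x = np.linalg.solve(A, b)   # LU with partial pivoting via LAPACK
print(x)                    # roughly [-11.5112, -5.0848], the float values of
                            # -35443/3079 and -15656/3079
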
>
> I think the point here is that Matlab obscures the truth, and it can't
> do any better.  In Sage, if it looks like your matrices contain
> fractions, then they really do have exact fractions.  If your matrix
> actually contains approximate floating point numbers, then Sage doesn't
> lie to you and pretend it has nice exact-looking fractions.  This makes
> for some interesting class discussions in linear algebra, especially
> when you have lots of engineering students :).
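
(The base ring is what records that distinction; for example, in a Sage session:)

sage: a = matrix(QQ, 2, [-101, 208, 105, 76, -187, 76])
sage: a.base_ring()                    # entries really are exact fractions
Rational Field
sage: a.change_ring(RDF).base_ring()   # entries are double-precision floats
Real Double Field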

OK, I didn't realize that this example is nonsense for floating-point
numbers, as explained in the numpy thread. I agree that matlab can't
do any better here.

Ondrej
