On Tue, Sep 27, 2016 at 8:34 PM, 'Bill Hart' via sage-devel <
sage-devel@googlegroups.com> wrote:

>
>
> On Tuesday, 27 September 2016 20:53:28 UTC+2, Jonathan Bober wrote:
>>
>> On Tue, Sep 27, 2016 at 7:18 PM, 'Bill Hart' via sage-devel <
>> sage-...@googlegroups.com> wrote:
>>
>>> I'm pretty sure the charpoly routine in Flint is much more recent than 2
>>> years. Are you referring to a Sage implementation on top of Flint
>>> arithmetic or something?
>>>
>>
>> It is just a problem with Sage.
>>
>
> Sure, I realised the problem was in Sage. I just wasn't sure if the
> algorithm itself is implemented in Flint or Sage.
>
>
>> Sorry, I thought I was clear about that. I assume that no one has been
>> using the algorithm='flint' option in Sage in the last two years, which
>> makes sense, because most people aren't going to bother changing the
>> default.
>>
>>
>>> The only timing that I can find right at the moment had us about 5x
>>> faster than Sage. It's not in a released version of Flint though, just in
>>> master.
>>>
>>
>> That sounds really nice. On my laptop with current Sage, it might be the
>> other way around. With Sage 7.3 and this particular matrix,
>> I get
>>
>
> Yes, Sage/Linbox was about 2.5x faster than the old charpoly routine
> in Flint, I believe. The new one is quite recent and much quicker.
>
>
>> sage: %time f = A.charpoly(algorithm='flint')
>> CPU times: user 1min 24s, sys: 24 ms, total: 1min 24s
>> Wall time: 1min 24s
>>
>> sage: %time f = A.charpoly(algorithm='linbox')
>> CPU times: user 13.3 s, sys: 4 ms, total: 13.3 s
>> Wall time: 13.3 s
>>
>> However, perhaps the average runtime with linbox is infinity. (Also, this
>> is an out-of-date Linbox.)
>>
>> I think that Linbox may be "cheating" in a way that Flint is not. I'm
>> pretty sure both implementations work mod p (or p^n?) for a bunch of p and
>> reconstruct. From my reading of the Flint source code (actually, I didn't
>> check the version in Sage) and comments from Clement Pernet, I think that
>> Flint uses an explicit Hadamard bound to determine how many primes to use,
>> while Linbox just waits for the CRT'd polynomial to stabilize for a few
>> primes.
>>
>
> Ouch!
>
> Yes, in the new code we use an explicit proven bound. I can't quite recall
> all the details now, but I do remember it is multimodular.
>
> I would give it a good amount of testing before trusting it. We've done
> quite a lot of serious testing of it and the test code is nontrivial, but
> some real-world tests are much more likely to shake out any bugs, including
> the possibility that I screwed up the implementation of the bound.
>

Ah, yes, I'm wrong again, as the multimodular code in Flint is pretty new. I
didn't look at what Sage has until now (flint 2.5.2, which looks like it
uses a fairly simple O(n^4) algorithm). I had previously looked at the
source code of the version of flint that I've actually been using myself,
which is from June. As I now recall (after reading an email I sent in June),
I'm using a "non-released" version precisely for the nmod_mat_charpoly()
function, which doesn't exist in the most recent release (which I guess
might be 2.5.2, but flintlib.org seems to be having problems at the moment).

I've done some fairly extensive real-world semi-testing of
nmod_mat_charpoly() in the last few months (for almost the same reasons
that have led me to investigate Sage/Linbox), but not fmpz_mat_charpoly().
"Semi" means that I haven't actually checked that the answers are correct.
I'm computing characteristic polynomials of integer matrices, but
writing down the integer matrices is too expensive, so I'm computing the
polynomials more efficiently mod p and then CRTing. Also, I'm doing exactly
what I think Linbox does, in that I am just waiting for the polynomials to
stabilize. Step 2, when it eventually happens, will separately compute the
roots of these polynomials numerically, which will (heuristically) verify
that they are correct. (Step 3 might involve actually proving somehow that
everything is correct, but I strongly fear that it might involve confessing
that everything is only "obviously" correct.) Once step 2 happens,
I'll either report some problems or let you know that everything went well.
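
To make that concrete, here is a rough Sage sketch of the heuristic
"CRT until it stabilizes" strategy I'm describing. This is just an
illustration, not the actual Linbox or Flint code; the function name,
parameters and choice of starting prime are made up. A proven variant
(which is what I understand the new Flint code to do) would instead fix
the number of primes up front from an explicit Hadamard-type coefficient
bound rather than stopping heuristically.

# A minimal sketch (in Sage) of the heuristic multimodular strategy:
# compute the charpoly of A mod several primes and CRT the coefficients
# together, stopping once the reconstruction is unchanged for a few
# consecutive primes.  This is NOT the Flint or Linbox implementation;
# the names and the starting prime are arbitrary choices for illustration.
# A proven variant would instead choose enough primes in advance so that
# their product exceeds an explicit bound on the coefficients.
def charpoly_crt_stabilize(A, stable_runs=3, start_prime=2**20):
    p = next_prime(start_prime)
    coeffs = [ZZ(c) for c in A.change_ring(GF(p)).charpoly().list()]
    modulus = p
    stable = 0
    while stable < stable_runs:
        p = next_prime(p)
        new = A.change_ring(GF(p)).charpoly().list()
        M = modulus * p
        lifted = []
        for c_old, c_new in zip(coeffs, new):
            # CRT this coefficient with its image mod the new prime, then
            # lift into the balanced range (-M/2, M/2] to allow negatives.
            c = crt(c_old, ZZ(c_new), modulus, p) % M
            if 2*c > M:
                c -= M
            lifted.append(c)
        # count how many consecutive primes have left the answer unchanged
        stable = stable + 1 if lifted == coeffs else 0
        coeffs, modulus = lifted, M
    return ZZ['x'](coeffs)

For instance, charpoly_crt_stabilize(B) should agree with B.charpoly()
with overwhelming probability once the coefficients have stabilized, and
raising stable_runs just trades speed for confidence; it is still only
heuristic, which is exactly the caveat above.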


>
>
>> I have no idea how much of a difference that makes in this case.
>>
>>
>>> Bill.
>>>
>>> On Tuesday, 27 September 2016 05:49:47 UTC+2, Jonathan Bober wrote:
>>>>
>>>> On Tue, Sep 27, 2016 at 4:18 AM, William Stein <wst...@gmail.com>
>>>> wrote:
>>>>
>>>>> On Mon, Sep 26, 2016 at 6:55 PM, Jonathan Bober <jwb...@gmail.com>
>>>>> wrote:
>>>>> > On Mon, Sep 26, 2016 at 11:52 PM, William Stein <wst...@gmail.com>
>>>>> > wrote:
>>>>>
>>>>> >>
>>>>> >> On Mon, Sep 26, 2016 at 3:27 PM, Jonathan Bober <jwb...@gmail.com>
>>>>> >> wrote:
>>>>> >> > In the matrix_integer_dense charpoly() function, there is a note
>>>>> >> > in the docstring which says "Linbox charpoly disabled on 64-bit
>>>>> >> > machines, since it hangs in many cases."
>>>>> >> >
>>>>> >> > As far as I can tell, that is not true, in the sense that (1) I
>>>>> >> > have 64-bit machines, and Linbox charpoly is not disabled, (2)
>>>>> >> > charpoly(algorithm='flint') is so horribly broken that if it were
>>>>> >> > ever used it should be quickly noticed that it is broken, and
>>>>> >> > (3) I can't see anywhere where it is actually disabled.
>>>>> >> >
>>>>> >> > So I actually just submitted a patch which removes this note while
>>>>> >> > fixing point (2). (Trac #21596).
>>>>> >> >
>>>>> >> > However...
>>>>> >> >
>>>>> >> > In some testing I'm noticing problems with charpoly(), so I'm
>>>>> >> > wondering where that message came from, and who knows something
>>>>> >> > about it.
>>>>> >>
>>>>> >> Do you know about "git blame", or the "blame" button when viewing
>>>>> >> any file here: https://github.com/sagemath/sage/tree/master/src
>>>>> >
>>>>> >
>>>>> > Ah, yes. Of course I know about that. And it was you!
>>>>> >
>>>>> > You added that message here:
>>>>>
>>>>> Dang... I had a bad feeling that would be the conclusion :-)
>>>>>
>>>>
>>>> Well, I'm sure you've done one or two things in the meantime that will
>>>> allow me to forgive this one oversight.
>>>>
>>>>
>>>>> In my defense, Linbox/FLINT have themselves changed a lot over the
>>>>> years...  We added Linbox in 2007, I think.
>>>>>
>>>>>
>>>> Yes. As I said, this comment, and the design change, is ancient. In
>>>> some limited testing, linbox tends to be faster than flint, but has very
>>>> high variance in the timings. (I haven't actually checked flint much.)
>>>> Right now I'm running the following code on 64 cores, which should test
>>>> linbox:
>>>>
>>>> import time
>>>>
>>>> # Each call recomputes B.charpoly() in a forked worker process and
>>>> # compares the result against the reference polynomial g computed below.
>>>> @parallel
>>>> def test(n):
>>>>     start = time.clock()
>>>>     f = B.charpoly()
>>>>     end = time.clock()
>>>>     runtime = end - start
>>>>     if f != g:
>>>>         print n, 'ohno'
>>>>         return runtime, 'ohno'
>>>>     else:
>>>>         return runtime, 'ok'
>>>>
>>>> A = load('hecke_matrix')
>>>> A._clear_cache()
>>>> B, denom = A._clear_denom()
>>>> g = B.charpoly()   # reference answer
>>>> B._clear_cache()   # so that the workers actually recompute it
>>>>
>>>> import sys
>>>>
>>>> # @parallel yields ((args, kwargs), value); result[0][0][0] is n and
>>>> # result[1] is the (runtime, 'ok'/'ohno') pair returned by test().
>>>> for result in test(range(100000)):
>>>>     print result[0][0][0], ' '.join([str(x) for x in result[1]])
>>>>     sys.stdout.flush()
>>>>
>>>> where the file hecke_matrix was produced by
>>>>
>>>> sage: M = ModularSymbols(3633, 2, -1)
>>>> sage: S = M.cuspidal_subspace().new_subspace()
>>>> sage: H = S.hecke_matrix(2)
>>>> sage: H.save('hecke_matrix')
>>>>
>>>> and the results are interesting:
>>>>
>>>> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | head
>>>> 30 27.98 ok
>>>> 64 28.0 ok
>>>> 2762 28.02 ok
>>>> 2790 28.02 ok
>>>> 3066 28.02 ok
>>>> 3495 28.03 ok
>>>> 3540 28.03 ok
>>>> 292 28.04 ok
>>>> 437 28.04 ok
>>>> 941 28.04 ok
>>>>
>>>> jb12407@lmfdb5:~/sage-bug$ sort -n -k 2 test_output3 | tail
>>>> 817 2426.04 ok
>>>> 1487 2466.3 ok
>>>> 1440 2686.43 ok
>>>> 459 2745.74 ok
>>>> 776 2994.01 ok
>>>> 912 3166.9 ok
>>>> 56 3189.98 ok
>>>> 546 3278.22 ok
>>>> 1008 3322.74 ok
>>>> 881 3392.73 ok
>>>>
>>>> jb12407@lmfdb5:~/sage-bug$ python analyze_output.py test_output3
>>>> average time: 53.9404572616
>>>> unfinished: [490, 523, 1009, 1132, 1274, 1319, 1589, 1726, 1955, 2019,
>>>> 2283, 2418, 2500, 2598, 2826, 2979, 2982, 3030, 3057, 3112, 3166, 3190,
>>>> 3199, 3210, 3273, 3310, 3358, 3401, 3407, 3434, 3481, 3487, 3534, 3546,
>>>> 3593, 3594, 3681, 3685, 3695, 3748, 3782, 3812, 3858, 3864, 3887]
>>>>
>>>> There hasn't yet been an 'ohno', but on a similar run of 5000 tests
>>>> computing A.charpoly() instead of B.charpoly(), I have 1 'ohno' and 4
>>>> still running after 5 hours. (So I'm expecting an error in the morning...)
>>>>
>>>> I think that maybe I was getting a higher error rate in Sage 7.3. The
>>>> current beta uses a newer linbox, so maybe that fixed something, but
>>>> maybe it isn't completely fixed.
>>>>
>>>> Maybe I should use a small matrix to run more tests more quickly, but
>>>> this came from a "real world" example.
>>>>
>>>>
>>>>> --
>>>>> William (http://wstein.org)
>>>>>