Hi,
Update: I think I finally found the bug that caused some rare computations to hang forever: givaro's
random iterator was seeded from the 6 digits of the current time's microseconds, so with probability
10^-6 it could be seeded with 0; the congruential generator would then always output 0, and the
search for a non-zero Krylov vector would hang forever!
This might also be a fix for https://trac.sagemath.org/ticket/15535
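For what it's worth, here is a minimal sketch (in Python, not givaro's actual code) of why a zero
seed is fatal for a purely multiplicative congruential generator: with no additive constant, zero is
a fixed point, so any loop that waits for a nonzero random value can never make progress. The
MINSTD-style parameters below are only an illustrative assumption, not givaro's.

    # Hypothetical multiplicative congruential generator; the parameters are
    # the classic MINSTD ones, chosen only for illustration.
    A_MULT = 16807
    MOD = (1 << 31) - 1

    def lcg_next(state):
        return (A_MULT * state) % MOD   # no additive term, so 0 maps to 0

    state = 0          # what a microsecond-based seed yields about 1 time in 10^6
    for _ in range(5):
        state = lcg_next(state)
        print(state)   # prints 0 every time; a loop like
                       # "while vector_is_zero(random_vector()): retry" spins forever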
Meanwhile I also found a corner case triggering a bug in the LUKrylov charpoly. I posted a patch for
both givaro and fflas-ffpack in https://trac.sagemath.org/ticket/21579
As for the choice of running an early termination for charpoly over ZZ, we already had this
discussion a very long time ago, and it used to be a deterministic call using Hadamard's bound.
It has unfortunately been switched back to the early-termination version since then, which I agree
is bad.
I'm cleaning up this code and will provide a boolean "proof" argument to enable/disable the early
termination on request.
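To make the two behaviours concrete, here is a minimal Python sketch of the idea, not Sage's,
LinBox's or fflas-ffpack's actual code or API: charpoly_mod_p (plain Faddeev-LeVerrier), the crude
Hadamard-style coefficient bound and the prime walk are all illustrative assumptions. With
proof=True the number of primes is fixed in advance by the bound; with proof=False the loop stops as
soon as the CRT image stops changing, which is essentially the mod-p/CRT/stabilization strategy
Jonathan describes below.

    from math import isqrt
    from sympy import nextprime   # only used to walk through word-sized primes

    def charpoly_mod_p(A, p):
        # characteristic polynomial of A mod p via Faddeev-LeVerrier
        # (needs p > n so that 1, ..., n are invertible mod p)
        n = len(A)
        c = [0] * (n + 1)
        c[n] = 1
        M = [[int(i == j) for j in range(n)] for i in range(n)]      # M_1 = I
        for k in range(1, n + 1):
            AM = [[sum(A[i][l] * M[l][j] for l in range(n)) % p
                   for j in range(n)] for i in range(n)]
            c[n - k] = (-pow(k, -1, p) * sum(AM[i][i] for i in range(n))) % p
            M = [[(AM[i][j] + (c[n - k] if i == j else 0)) % p
                  for j in range(n)] for i in range(n)]
        return c                   # c[k] is the coefficient of x^k, c[n] = 1

    def crt_pair(r1, m1, r2, m2):
        # x = r1 mod m1 and x = r2 mod m2, returned as a residue mod m1*m2
        return r1 + m1 * (((r2 - r1) * pow(m1, -1, m2)) % m2)

    def symmetric(r, m):
        # lift a residue mod m into (-m/2, m/2]
        return r - m if 2 * r > m else r

    def charpoly_ZZ(A, proof=True, confirmations=2):
        n = len(A)
        amax = max(abs(x) for row in A for x in row) or 1
        # crude coefficient bound via Hadamard's inequality on principal minors:
        # |c_{n-k}| <= binom(n,k)*(sqrt(k)*amax)^k <= 2^n*((isqrt(n)+1)*amax)^n
        bound = 2 * 2 ** n * ((isqrt(n) + 1) * amax) ** n
        residues, modulus = [0] * (n + 1), 1
        previous, stable, p = None, 0, 1 << 30
        while True:
            p = int(nextprime(p))
            cp = charpoly_mod_p(A, p)
            residues = [crt_pair(r, modulus, c, p) for r, c in zip(residues, cp)]
            modulus *= p
            lifted = [symmetric(r, modulus) for r in residues]
            if proof:
                if modulus > bound:            # enough primes: certified answer
                    return lifted
            else:
                stable = stable + 1 if lifted == previous else 0
                previous = lifted
                if stable >= confirmations:    # early termination: image stabilized
                    return lifted

For a small matrix such as A = [[2, -1, 0], [5, 3, 7], [1, 0, -4]], charpoly_ZZ(A, proof=True) and
charpoly_ZZ(A, proof=False) should return the same coefficient list (low degree first, leading 1);
the difference only shows in how many primes are used, and in the fact that the proof=False answer
is merely very probably correct.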
Clément
On 28/09/2016 at 08:22, parisse wrote:
On Wednesday, 28 September 2016 at 03:13:11 UTC+2, Jonathan Bober wrote:
Ah, yes, I'm wrong again, as the multimodular code in Flint is pretty new. I hadn't looked until now
at what Sage has (flint 2.5.2, which looks like it uses a fairly simple O(n^4) algorithm). I had
previously looked at the source code of the version of flint that I've actually been using myself,
which is from June. As I now recall (after reading an email I sent in June), I'm using a
"non-released" version precisely for the nmod_mat_charpoly() function, which doesn't exist in the
most recent release (which I guess might be 2.5.2, but flintlib.org seems to be having problems at
the moment).
I've actually done some fairly extensive real-world semi-testing of nmod_mat_charpoly() in the last
few months (for almost the same reasons that have led me to investigate Sage/Linbox), but not of
fmpz_mat_charpoly(). "Semi" means that I haven't actually checked that the answers are correct. I'm
actually computing characteristic polynomials of integer matrices, but writing down the integer
matrices is too expensive, so I'm computing the polynomials more efficiently mod p and then CRTing.
Also, I'm doing exactly what I think Linbox does, in that I am just waiting for the polynomials to
stabilize.
Step 2, when it eventually happens, will separately compute the roots of these polynomials
numerically, which will (heuristically) verify that they are correct. (Step 3 might involve actually
proving somehow that everything is correct, but I strongly fear that it might involve confessing
that everything is actually only "obviously" correct.) Once step 2 happens, I'll either report some
problems or let you know that everything went well.
I don't think computing the roots numerically is a good idea, because the charpoly is
ill-conditioned.
It would be interesting to compare outputs and timings with other packages; for example, giac has
its own multi-modular charpoly implementation (with a probabilistic answer if proba_epsilon>0, or a
certified answer if proba_epsilon=0).
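As a generic illustration of that conditioning problem (nothing specific to giac, LinBox or the
matrices discussed here), the classic Wilkinson experiment in Python/numpy shows how a perturbation
of about 10^-7 in a single coefficient of a degree-20 polynomial with roots 1, ..., 20 moves several
roots by an amount of order 1; forming the coefficients in double precision already loses exactness
in the low-order terms, since they exceed 2^53.

    import numpy as np

    exact_roots = np.arange(1, 21)        # want the roots 1, 2, ..., 20
    coeffs = np.poly(exact_roots)         # monic degree-20 (Wilkinson) polynomial
    perturbed = coeffs.copy()
    perturbed[1] -= 2.0 ** -23            # tiny change to the x^19 coefficient

    computed = np.sort_complex(np.roots(perturbed))
    print(np.max(np.abs(computed - exact_roots)))   # of order 1: several roots
                                                    # move a lot, some become complex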