William,

Thanks for all the helpful advice - that'll get me started.

> So you're saying that when somebody changed from  Sage just having
> "kernel" (the way I implemented it) to having left_kernel and
> right_kernel methods, they messed things up in such a way that
> left_kernel is 1000 times slower than right kernel for the above
> example?

Well, I think maybe the actual mess-up-factor is O(n^2) or O(n^3) or
something like that. :-)

I'll start at the top in matrix2.pyx and it should be straightforward
to get the left/right dichotomy straightened out.  That should then
make my grander plans easier.

Regarding a doctest: I know they can be tagged as "random" (I can't
recall the details at the moment).  Would it be appropriate to put
in a large random example that *should* finish quickly?  Is there a
mechanism to detect when it runs too long, or do we just assume
somebody notices the delay?
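For reference, a regression doctest along these lines might look like the
following.  This is only a sketch: the `# long time` tag and the `walltime`
helper are assumptions about the doctest framework, and the 60-second bound
is arbitrary.

```
sage: a = random_matrix(ZZ, 200, 200, x=100).change_ring(QQ)  # long time
sage: t = walltime()                                          # long time
sage: K = a.left_kernel()                                     # long time
sage: walltime(t) < 60                                        # long time
True
```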

> I spent some time two years ago with Martin Albrecht, and we designed
> an algorithm to compute echelon forms of matrices over QQ using IML's
> dense p-adic linear solver.   IML has a function to find the solution
> to A*X = B, and we just built on top of that function a way to compute
> echelon forms -- for very dense matrices of certain sizes it beats
> everything else out there.  That's basically the one thing all of IML
> does -- solve A*X = B.  What specifically are you looking for?

In the matrix_integer_dense class there is a method
_rational_kernel_iml  which appears to do most of its computation via
the line

dim = nullspaceMP (self._nrows, self._ncols, self._entries, &mp_N)

(which is surrounded by a _sig_on/_sig_off pair, if that helps)

I'd like to understand the exact form of the output of
_rational_kernel_iml, since my experiments tell me it is close to what
I want, but not quite.  Maybe the negatives of its output will give
me the basis I desire, but I'd prefer to have that confirmed by some
documentation.  IML has a "nullspaceLong" function (or similar, IIRC)
in addition to several routines for solving linear systems.
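One half of the "negatives of its output" guess is easy to confirm without
documentation: if the columns of N span the kernel of A, so do the columns
of -N, since A(-N) = -(AN) = 0.  The open question is only which of the two
normalizations IML actually returns.  A tiny plain-Python check, using the
sample matrix from the thread (`matvec` is an illustrative helper, not an
IML or Sage function):

```python
from fractions import Fraction

def matvec(A, v):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

A = [[1, 2, 3], [4, 5, 6]]                    # sample matrix from the thread
v = [Fraction(1), Fraction(-2), Fraction(1)]  # a right-kernel vector of A
neg = [-x for x in v]

# A*v == 0 implies A*(-v) == 0, so negating a kernel basis gives
# another perfectly good kernel basis.
assert matvec(A, v) == [0, 0]
assert matvec(A, neg) == [0, 0]
```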

Thanks again,
Rob



On Feb 8, 12:45 am, William Stein <wst...@gmail.com> wrote:
> On Sat, Feb 7, 2009 at 7:00 PM, Rob Beezer <goo...@beezer.cotse.net> wrote:
>
> > I've posted a couple of times about making some modifications to the
> > code for kernels of matrices.  After a lot of thought and a lot of
> > digging around, I've now got several questions that I would appreciate
> > advice on before I jump into making changes.
>
> > The itch I'm scratching is that I am adding to my linear algebra text
> > small explanations of how to use sage.  Right now, I'm stuck at null
> > spaces.  My criterion is that I'd like a student to be able to get
> > helpful information about a matrix with simple one-line calls.  I use
> > "right" kernels in the book, and sage is oriented towards "left"
> > kernels.  I also have theorems that describe bases in close relation
> > to the pivot/non-pivot columns of a matrix, rather than the "echelon
> > form" bases that sage produces by default for "canonicalization".  I
> > have no argument with sage's default behavior, I'd just like to
> > smoothly add some alternatives.
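As a concrete illustration of the "pivot/non-pivot columns" basis mentioned
above: over the rationals it can be read directly off the reduced row
echelon form, with one basis vector per free (non-pivot) column.  A
self-contained sketch in plain Python, using `fractions.Fraction` as a
stand-in for QQ (function names are illustrative, not Sage's):

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals.
    Returns (rref_rows, pivot_columns)."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivots = []
    r = 0
    for c in range(len(m[0])):
        # find a row at or below r with a nonzero entry in column c
        pr = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if pr is None:
            continue
        m[r], m[pr] = m[pr], m[r]
        m[r] = [x / m[r][c] for x in m[r]]       # scale pivot to 1
        for i in range(len(m)):                  # clear the rest of column c
            if i != r and m[i][c] != 0:
                m[i] = [a - m[i][c] * b for a, b in zip(m[i], m[r])]
        pivots.append(c)
        r += 1
        if r == len(m):
            break
    return m, pivots

def pivot_basis_right_kernel(rows):
    """Right-kernel basis in 'pivot' form: one vector per free column,
    with a 1 in the free position and RREF entries (negated) in the
    pivot positions."""
    m, pivots = rref(rows)
    n = len(rows[0])
    basis = []
    for f in (c for c in range(n) if c not in pivots):
        v = [Fraction(0)] * n
        v[f] = Fraction(1)
        for r, p in enumerate(pivots):
            v[p] = -m[r][f]
        basis.append(v)
    return basis

# the 2x3 example used later in the thread
print(pivot_basis_right_kernel([[1, 2, 3], [4, 5, 6]]))  # [[1, -2, 1]]
```

This is the basis style whose vectors line up with textbook theorems about
free variables, as opposed to the echelonized basis Sage returns by default.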
>
> > So I'd like to preserve existing behavior and have code like
>
> > sage:  a = matrix(QQ, [[1,2,3],[4,5,6]])
> > sage:  a.right_kernel(echelonize=False)
>
> > provide a right kernel with an alternate basis.  Questions follow,
> > advice on any one of the below would be greatly appreciated.  Sorry
> > for all the naive questions - I'm learning a lot through the process
> > and hopefully it will pay off in future contributions.  But see the
> > timing results in #1 which could be significant.
>
> > 1.  matrix2.pyx has the high-level code for a generic kernel
> > computation in left_kernel(), kernel is an alias for left_kernel,
> > while right_kernel takes the transpose and asks for its kernel().
>
> > So I edit left_kernel to capture the desired basis midstream based on
> > the new echelonize keyword, then at the sage command-line execute
> > a.transpose().left_kernel(echelonize=False)  and I get what I want via
> > the changes in  left_kernel.  But if I execute  a.right_kernel()  (on
> > a rational dense matrix)  it seems the resulting call to  kernel()  on
> > the transpose takes me down the class hierarchy  to the  kernel() call
> > in the matrix_integer_dense  code.  This strikes me as inconsistent,
> > and the test below indicates a difference in performance for
> > mathematically equivalent computations, depending on the form of the
> > statement.
>
> > Timings for 200x200 matrices of 2-digit integers over rationals.  Only
> > 1 run per timeit() since results are cached.  Timings were consistent
> > as I scaled up the size of the matrix, so I don't think the absence of
> > several runs is a factor.  My *guess* is that right_kernel() ends up
> > in the IML routines, while left_kernel() ends up in Pari routines.
>
> > sage: a=random_matrix(ZZ, 200,200, x=100)
> > sage: b=a.change_ring(QQ)
>
> > sage: timeit("b.right_kernel()", number=1, repeat=1)
> > 1 loops, best of 1: 63.5 ms per loop
>
> > sage: timeit("b.transpose().left_kernel()", number=1, repeat=1)
> > 1 loops, best of 1: 65.9 s per loop
>
> So you're saying that when somebody changed from  Sage just having
> "kernel" (the way I implemented it) to having left_kernel and
> right_kernel methods, they messed things up in such a way that
> left_kernel is 1000 times slower than right kernel for the above
> example?
>
> I just confirmed this for myself.
>
> sage: a=random_matrix(ZZ, 100, x=100).change_ring(QQ)
> sage: time b = a.right_kernel()
> Time: CPU 0.06 s, Wall: 0.07 s
> sage: time b = a.left_kernel()
> Time: CPU 6.80 s, Wall: 7.50 s
>
> > Many computations of kernels seem to take place in the specialized
> > classes with a call to kernel().  Should the matrix2 code reflect
> > this?  In other words, should matrix2 calls to right_kernel and to
> > left_kernel  be both set to short calls to kernel(), which would then
> > ultimately seek out the correct specialized class in the hierarchy, as
> > appropriate?
>
> In short, yes, that sounds sensible to me.  And it would be great if
> there were some doctest that would take 1000 seconds if somebody
> screws up like above again, since then we would know about it.
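The delegation being agreed on here — both directional methods reduced to
short calls to kernel(), so dispatch always lands in the specialized
class — can be sketched with mock classes (illustrative only, not Sage's
actual hierarchy; transpose() is a placeholder):

```python
class Matrix:
    """Generic base class, analogous to matrix2."""

    def kernel(self, **kwds):
        return "generic kernel"

    # Both directional methods funnel through kernel(), so a subclass
    # that overrides only kernel() automatically speeds up both.
    def left_kernel(self, **kwds):
        return self.kernel(**kwds)

    def right_kernel(self, **kwds):
        return self.transpose().kernel(**kwds)

    def transpose(self):
        return self  # placeholder; a real transpose returns the same class

class MatrixIntegerDense(Matrix):
    """Specialized class with the fast implementation."""

    def kernel(self, **kwds):
        return "fast IML kernel"
```

With this shape, MatrixIntegerDense().left_kernel() and .right_kernel()
both reach the fast kernel(), which is exactly the inconsistency the
timings above exposed.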
>
> > 2.  For a rational, dense matrix it seems that the IML library is
> > used.  I've been trying to find documentation on the form of the
> > output, and it appears that a call to  nullspaceMP  is used in the
> > integer, dense class.  There is a header definition of sorts for this
> > function in the matrix_integer_dense.pyx  file, but I can't seem to
> > locate the code or documentation anywhere in the IML package, or
> > elsewhere.  Any pointers to the right place?
>
> I spent some time two years ago with Martin Albrecht, and we designed
> an algorithm to compute echelon forms of matrices over QQ using IML's
> dense p-adic linear solver.   IML has a function to find the solution
> to A*X = B, and we just built on top of that function a way to compute
> echelon forms -- for very dense matrices of certain sizes it beats
> everything else out there.  That's basically the one thing all of IML
> does -- solve A*X = B.  What specifically are you looking for?
>
> > 3.  William suggested the  echelonize=True/False  keyword with True as
> > the default, which looks consistent with the use of this verb
> > elsewhere.  Would a  basis=echelon/pivot/<future-use>  style be
> > acceptable or preferable in the event there is interest in other
> > alternative bases (I don't have any idea what those would be!).
>
> I wouldn't be opposed.   What would your students find easier to
> *read*?  I think
>
>          A.kernel(echelonize=False)
>
> is not nearly as clear as
>
>          A.kernel(basis='pivot')
>
> The latter actually tells you a lot about what basis you should
> expect.  The former just says "it might not be echelonized", but
> doesn't tell you anything about what you'll actually get.  In
> fact, I think I like your alternative suggestion better than my
> original one.
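A minimal sketch of what the basis= keyword might look like (names and the
set of allowed values are illustrative, not a settled API):

```python
def kernel(basis='echelon'):
    """Sketch of the proposed basis= keyword: a named value documents
    which basis you get, and future styles can be added to the allowed
    set without stacking up boolean flags like echelonize=."""
    allowed = ('echelon', 'pivot')
    if basis not in allowed:
        raise ValueError(f"basis must be one of {allowed}, got {basis!r}")
    return f"{basis} basis of the kernel"

print(kernel(basis='pivot'))  # pivot basis of the kernel
```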
>
> > 4.  If I add the new keyword to calls in the matrix2 class, will it
> > just "flow" down into the more specialized versions of kernel()
> > further down the hierarchy through the **kwords mechanism?
>
> Yes.
>
> >  Do I need
> > to be more careful?  I've learned enough to know that **kwords is a
> > dictionary, but I don't have much experience in its use.
>
> Yes, but of course still every function down the hierarchy will need
> to know how to handle the new option.
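The **kwords flow-through, and the need for the bottom of the chain to
handle (or reject) each option, can be shown in a few lines (hypothetical
function names):

```python
def generic_kernel(**kwds):
    """Top-level generic method: forwards **kwds unchanged downward."""
    return specialized_kernel(**kwds)

def specialized_kernel(basis='echelon', **kwds):
    """Specialized implementation: consumes the options it knows and
    rejects anything that reaches the bottom unrecognized."""
    if kwds:
        raise TypeError(f"unknown options: {sorted(kwds)}")
    return f"kernel with {basis} basis"

print(generic_kernel(basis='pivot'))  # kernel with pivot basis
```

So the keyword does just "flow" down, but silently: a typo in an option
name is only caught if some method in the chain checks for leftovers.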
>
> > 5.  Similar to the previous question, the results of these
> > computations are cached.  I'm thinking that the names of these items
> > might be something like  "right_kernel_pivot_basis" (as opposed to say
> > "right_kernel_unechelonized"), to allow for future alternatives.
>
> Makes sense.
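The caching scheme with descriptive keys might be sketched like this (a
mock, not Sage's cache mechanism; the key format follows the
"right_kernel_pivot_basis" naming suggested above):

```python
class CachedMatrix:
    """Mock showing one cache entry per (direction, basis-style) pair."""

    def __init__(self):
        self._cache = {}

    def right_kernel(self, basis='echelon'):
        key = f'right_kernel_{basis}_basis'   # e.g. 'right_kernel_pivot_basis'
        if key not in self._cache:
            self._cache[key] = self._compute(basis)
        return self._cache[key]

    def _compute(self, basis):
        return f'computed {basis} basis'      # stand-in for the real work
```

Keying on the basis style means asking for the pivot basis never clobbers
a cached echelon basis, and new styles get their own slots for free.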
>
> > 6.  Finally, I know patches should be in bite-size chunks.  Would it
> > be wiser to start at the bottom of the class hierarchy
> > (matrix_integer_dense) or at the top (matrix2)? This presumes it is
> > possible to get meaningful (partial) changes in either case.
>
> Probably the top.  I'm not sure.
>
> William
--~--~---------~--~----~------------~-------~--~----~
To post to this group, send email to sage-devel@googlegroups.com
To unsubscribe from this group, send email to 
sage-devel-unsubscr...@googlegroups.com
For more options, visit this group at http://groups.google.com/group/sage-devel
URLs: http://www.sagemath.org
-~----------~----~----~----~------~----~------~--~---
