Since Maxima is free, open-source, and GPL-licensed, why not just read the algorithm implemented there and rewrite it in Python?
RJF

On Monday, November 11, 2019 at 1:29:56 AM UTC-8, Emmanuel Charpentier wrote:
>
> Dear Vincent,
>
> a very quick answer (limbic-system level :-). Thank you for this hint. It
> seems really interesting, but I'll need time to explore it and find my way.
>
> Of course, I will have to work on PolynomialRing(SR) in order to be able
> to work on one variable while ignoring the rest...
>
> I'll keep you posted.
>
> On Monday, November 11, 2019 at 09:28:58 UTC+1, Vincent Neiger wrote:
>>
>> Dear Emmanuel,
>>
>> You may be interested in taking a look at the following function:
>> Matrix_polynomial_dense.minimal_approximant_basis
>>
>> This only supports the univariate case. It solves a problem which
>> generalizes Padé approximation (the documentation gives a precise
>> description of what it computes; taking notation from there, instead of
>> thinking in terms of polynomials you may view the matrix F as having power
>> series entries, with column j truncated at order d_j).
>>
>> Coming back to our problem: if you have a power series S(x) at some order
>> d and you want to find a Padé approximation f(x) / g(x) of it, you can find
>> it by calling the above function on the 2 x 1 polynomial matrix
>> [[S(x).polynomial()], [-1]] with the input order being d. Specifically, on
>> this input the above function will return a 2 x 2 matrix of univariate
>> polynomials, and its first row will be [[g(x), f(x)]], your sought
>> approximant.
>>
>> By default you will get the approximant with f and g of degree about d/2
>> and deg(g) > deg(f) (well, at least on typical instances), but by
>> manipulating the optional argument "shift" you can get any other type that
>> you want. This shift is basically equivalent to the notion of "defects"
>> often found in the literature on approximants.
>>
>> This is based on an algorithm described by Van Barel and Bultheel and by
>> Beckermann and Labahn in the early 1990s. A much faster algorithm exists,
>> using a divide-and-conquer approach and fast polynomial multiplication, but
>> for it to be interesting we would need Sage to have faster polynomial
>> matrix multiplication (for the moment this faster algorithm is not really
>> interesting except for small matrix dimensions... so it could actually
>> bring substantial speedups for this 2 x 1 case).
>>
>> As you can guess from the documentation of the above function, it has
>> been mainly designed with an "exact arithmetic" context in mind, typically
>> working over a finite field or the rationals. So it is my turn to write
>> that I would be very interested in reading your comments on how this
>> existing method behaves in your context.
>>
>> On Sunday, November 10, 2019 at 14:32:52 UTC+1, Emmanuel Charpentier wrote:
>>>
>>> Dear list,
>>>
>>> IMHO, Sage could use an implementation of Padé approximants (similar to
>>> its implementation of Taylor series), if only for numerical purposes.
>>> Currently, this can be done:
>>>
>>> - by wrapping the Maxima functions taylor and pade (Maxima's pade
>>>   needs a Maxima Taylor expansion object, which does not cleanly
>>>   translate to Sage);
>>> - by using the PowerSeriesRing.pade method.
>>>
>>> Some trials have shown that the latter method, as advertised in its
>>> documentation, is indeed unsuitable even for moderately large degrees of
>>> the numerator and denominator: the expressions thus obtained are extremely
>>> large and unwieldy, and are slow to evaluate numerically.
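[For concreteness, the recipe Vincent describes above (build the 2 x 1 matrix [[S], [-1]] and ask for a minimal approximant basis at order d) might look roughly as follows. This is only a sketch, assuming a series truncated over QQ; the names pR, S, F, d, f, g are chosen for illustration, and the first row being [g, f] is as described in Vincent's message (typical instances, default shift).]

sage: pR.<x> = QQ[]
sage: d = 8
sage: S = pR([1/factorial(k) for k in range(d)])  # exp(x) truncated at order d
sage: F = matrix(pR, 2, 1, [S, -1])
sage: P = F.minimal_approximant_basis(d)          # 2 x 2 matrix of polynomials
sage: g, f = P[0, 0], P[0, 1]                     # first row expected to be [g, f]
sage: (g * S - f) % x^d                           # g*S = f mod x^d, i.e. S ~ f/g
0

[By default the degrees of f and g should come out around d/2, with deg(g) > deg(f) on typical instances as noted above; the shift argument mentioned in Vincent's message can be used to request a different degree balance.]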
>>> In contrast, the algorithm used by Maxima gives easily usable results
>>> (even if they can be enhanced by expansion and possibly factorization
>>> and/or simplification). But using it worsens our dependence on Maxima.
>>> Hence, a couple of questions:
>>>
>>> *Algorithms:*
>>>
>>> Do you have pointers to better algorithms for computing Padé
>>> approximants (especially for the multivariate case)? These might also be
>>> helpful in the implementation of PowerSeriesRing.pade ...
>>>
>>> *User interface:*
>>>
>>> We can follow our current taylor() convention, which is a bit of a
>>> straitjacket in the multivariate case, imposing the same degree for all
>>> the expansions wrt the different variables.
>>> We can also allow specifying different degrees for the expansions wrt
>>> the different variables (this can make sense for very asymmetric
>>> functions).
>>>
>>> Suggestions?
>>>
>>> *Multivariate case:*
>>>
>>> An elementary implementation (see (Huard and Guillaume, 2000)
>>> <https://www.sciencedirect.com/science/article/pii/S037704270000337X>
>>> for example) is to iteratively create a Padé approximant for the
>>> successive variables, i.e. if p(f, x) denotes the Padé approximant wrt
>>> the variable x, you end up computing p(p(p(f, x), y), z) (the
>>> implementation is trivial). The paper I quoted hints that there are
>>> better solutions, but it is a bit above my pay grade (my original
>>> training is in dentistry and surgery...).
>>>
>>> Do you have suggestions on this point?
>>>
>>> More generally, any comments are welcome!
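[As for the iterated univariate construction p(p(p(f, x), y), z) mentioned under *Multivariate case:* above, a minimal sketch of how it composes; here pade1 is a hypothetical univariate Padé routine (not an existing Sage function), standing in for whichever univariate implementation is chosen.]

# Sketch of the iterative multivariate construction described above.
# 'pade1' is a hypothetical callable: pade1(expr, v, m, n) should return a
# degree-(m, n) Padé approximant of 'expr' with respect to the variable 'v',
# treating the remaining variables as parameters.
def nested_pade(expr, variables, pade1, m=4, n=4):
    for v in variables:
        expr = pade1(expr, v, m, n)
    return expr

# Hypothetical usage: nested_pade(f, [x, y, z], pade1) computes p(p(p(f,x),y),z).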