Hi Golam,

I'm replying to this e-mail so I can answer each of your points below easily. I was too busy to write a proper reply when you sent this message.
On Sun, 19 Jul 2009 13:08:28 -0300
Golam Mortuza Hossain <gmhoss...@gmail.com> wrote:
>
> Hi,
>
> I have spent a considerable amount of time in the last month
> working with the new symbolics. Overall, I am impressed with it.
>
> However, my experience with the new derivative makes me
> wonder whether the pynac "fderivative" construct is really
> worth the effort! Please see my arguments below.
>
> While implementing functional derivative and integration
> algorithms for generalized functions using the new symbolics, I
> have been brought near a dead end because of the new
> derivative.

Many thanks for all your efforts and persistence. I agree that there are
lots of rough edges in the current (new) symbolics subsystem, and working
with it is rather frustrating. We definitely need more help to complete
the transition to pynac and move forward.

> It....
>
> (1) Breaks substitution:
>
> Arguments of a derivative can't be substituted
>
> http://trac.sagemath.org/sage_trac/ticket/6480

Thanks to Mike Hansen's SubstituteFunction converter, at the moment you
can do this:

sage: f = function('f')
sage: t1 = f(x).derivative(x)
sage: t1.substitute_function(f, g)
D[1](g)(x)

(I have the typesetting changes at #6344 applied.)

Unfortunately, the cases

sage: t1 = f(x+y).derivative(x)

and

sage: t1.substitute_function(f, f+g)

fail for different reasons. IIRC, the first one is caused by a check put
in to limit which expressions we convert to Maxima before calling its
differential equation solver. This check should be moved closer to the
interface, since there is no reason to limit the arguments of the
D[...](...) construct.

The second one fails because we don't allow arithmetic with symbolic
functions yet, so the last argument f+g cannot be evaluated. I think we
should just implement arithmetic, though an easier fix is to come up with
a way to give arguments to .substitute_function() without requiring the
arithmetic.

One way to solve the problem of arithmetic with symbolic functions is to
integrate them with the current callable symbolic functions, e.g., the
things you get by doing:

sage: f(x,y) = 2*x + y
sage: f
(x, y) |--> 2*x + y

We could either use the existing CallableSymbolicExpressionRing
implementation and force the user to give names to the arguments, to get
something like:

sage: f
(x,y) -> f(x,y)
sage: g
(x,y) -> g(x,y)
sage: f+g
(x,y) -> f(x,y) + g(x,y)

Or we could define a new parent for these and let them have a variable
number of unnamed arguments, keeping the current behavior. This would
also let us do this (from Maple):

> (f+g)(x);
                                 f(x) + g(x)

> (f+g)(x,y);
                               f(x, y) + g(x, y)

Note that MMA doesn't seem to support this (or I don't know the syntax
for it):

In[10]:= (f+g)[x]

Out[10]= (f + g)[x]

> (2) Nightmare for writing integration algorithms:
>
> If h = f(g(x)).diff(x) then integrate(h, x) is trivial. However, in
> the new symbolics, to do so one needs to compute
>
>     integrate( D[0](f)(g(x, y))*D[0](g)(x, y), x)
>
> Let me claim: Integrating an expression involving the new symbolic
> derivative is at best EQUAL and often MORE computationally EXPENSIVE
> than its "diff" counterpart.

Are you saying that if we stop evaluating (partial) derivatives,
integrating them would be easier? :)

We just need to implement a simple heuristic to handle the example above.
If you follow the link to the Algorithms in Computer Algebra book by
Geddes, Czapor and Labahn on the symbolics wiki page

http://wiki.sagemath.org/symbolics

(second item under the heading Integration), you'll see this text at the
end of page 473:

[...]
When the above methods [table lookup] fail, MAPLE uses a form of
substitution called the "derivative-divides" method. This method examines
the integrand to see if it has a composite function structure. If this is
the case, it then attempts to substitute for any composite functions,
f(x), by dividing its derivative into the integrand and checking if the
result is independent of x after the substitution u = f(x) occurs.
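Just to make the "simple heuristic" concrete, here is a rough sketch of
derivative-divides written as plain Python against the Sage expression API
(meant to be pasted into a Sage session, so SR and integrate are
available). It is illustrative only: the helper names are made up, and the
last step still assumes that the base case integrate(D[0](f)(u), u) can be
resolved, e.g. by a table lookup, which we do not do yet.

def composite_candidates(expr, x):
    """Collect subexpressions that have operands and depend on x."""
    found, stack = set(), [expr]
    while stack:
        e = stack.pop()
        ops = e.operands()
        if ops and x in e.variables():
            found.add(e)
        stack.extend(ops)
    return found

def derivative_divides(integrand, x):
    u = SR.var('u0')                       # fresh substitution variable
    for inner in composite_candidates(integrand, x):
        du = inner.diff(x)
        if du.is_zero():
            continue
        quotient = (integrand / du).subs({inner: u})
        if x not in quotient.variables():  # independent of x once u = inner?
            return integrate(quotient, u).subs({u: inner})
    return None                            # heuristic does not apply

For t = f(g(x)).diff(x) the candidate inner = g(x) gives du = D[0](g)(x),
the quotient D[0](f)(u) is free of x, and once the base case above is in
place the heuristic returns f(g(x)), matching the answers the other
systems give below.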
Some examples of what other systems do:

Mathematica 7.0 for Linux x86 (32-bit)
Copyright 1988-2008 Wolfram Research, Inc.

In[1]:= t = D[f[g[x]],x]

Out[1]= f'[g[x]] g'[x]

In[2]:= Integrate[t,x]

Out[2]= f[g[x]]

Note that MMA doesn't store any more information than we do about the
derivative:

In[3]:= FullForm[t]

Out[3]//FullForm= Times[Derivative[1][f][g[x]], Derivative[1][g][x]]

    |\^/|     Maple 12 (IBM INTEL LINUX)
._|\|   |/|_. Copyright (c) Maplesoft, a division of Waterloo Maple Inc. 2008
 \  MAPLE  /  All rights reserved. Maple is a trademark of
 <____ ____>  Waterloo Maple Inc.
      |       Type ? for help.

> t:=diff(f(g(x)),x);
                                       /d      \
                     t := D(f)(g(x)) |-- g(x)|
                                       \dx     /

> integrate(t,x);
                                   f(g(x))

I don't know how to ask Maple for the internal representation of t, but I
doubt that it is different from what MMA, or we, do.

> (4) Causes the Maxima interface to break:
>
> http://trac.sagemath.org/sage_trac/ticket/6376

This is a serious bug in the Maxima interface. It has nothing to do with
how we denote derivatives, whether we use partial derivatives or
unevaluated ones. Patches are welcome.

> (4) Gives mathematically nonsensical results:
>
> http://trac.sagemath.org/sage_trac/ticket/6465

This is also an independent problem. With the recent changes in pynac, you
stated that you fixed this:

http://groups.google.com/group/sage-devel/msg/765378d6b303cb85

> (5) Loses information irrecoverably:
>
> From "D[0](f)(x-a)" it's not possible to decide whether the original
> variable of differentiation was "x" as in f(x-a).diff(x) or "a"
> as in -f(x-a).diff(a). This again affects the integration algorithm.

What is the lost information in this case? D[0](f)(x-a) is both
f(x-a).derivative(x) and -f(x-a).derivative(a).

sage: f(x-a).derivative(x)
D[1](f)(-a + x)
sage: -f(x-a).derivative(a)
D[1](f)(-a + x)

Where do you need the variable? If we had a proper inverse for
differentiation, you could recover both of these easily.

In[7]:= u = Derivative[1][f][x-a]

Out[7]= f'[-a + x]

In[8]:= Integrate[u, a]

Out[8]= -f[-a + x]

In[9]:= Integrate[u, x]

Out[9]= f[-a + x]

Unfortunately, the integrate command in Sage fails miserably on the above
examples.

> (6) Compact?
>
> It is true that this format is sometimes compact, but consider
> the counterexample:
> ------
> sage: f( g(x) + h(x) ).diff(x)
> (D[0](g)(x) + D[0](h)(x))*D[0](f)(g(x) + h(x))
> ------
>
> In the old symbolics it takes less space to print:
> -----
> sage: f( g(x) + h(x) ).diff(x)
> diff(f(h(x) + g(x)), x, 1)
> -----
>
> (7) Printing issues:
>
> We are still debating this in a separate thread.

The debate seems to be over. There is a patch at #6344. Can someone
please review it?

> My question now is: is it really worth solving all of the
> above issues to keep working with the fderivative of pynac?

As you can see from the output of the other systems above, they use a
similar notation and data structure to represent partial derivatives. I
think it's worth considering why they chose to do things that way.

> Or should we just restore the old "diff" by simply sub-classing it
> from SFunction, like what is being done for "integration"
> and others?
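For what it's worth, an inert derivative can already be prototyped on top
of an ordinary symbolic function. A rough, illustrative sketch (the name
Diff and the manual "activation" step are only for demonstration, not an
existing or proposed API):

sage: Diff = function('Diff')            # inert: held unevaluated
sage: f = function('f'); g = function('g')
sage: h = Diff(f(g(x)), x)               # stays as Diff(f(g(x)), x)
sage: expr, v = h.operands()
sage: expr.diff(v)                       # "activating" recovers the usual D[...] form

A real implementation would of course need its own printing,
differentiation and integration hooks, which is presumably what your
SFunction-based diff would provide.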
IIRC, you wrote that your implementation can coexist with the current one
in Sage. Why don't you submit your changes so people can try out both
approaches? Maple also has an inert "Diff" operator; your implementation
can be the Sage equivalent.

Thanks.

Burcin
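P.S. On the arithmetic-with-symbolic-functions question further up: to
make the "new parent" idea concrete, here is a minimal, illustrative
sketch (plain Python; the class and its behaviour are only an assumption,
not an agreed design) of an object that distributes application over a
formal sum of functions:

class FunctionSum:
    """Formal sum of symbolic functions; calling it distributes over the summands."""
    def __init__(self, *funcs):
        self.funcs = funcs
    def __add__(self, other):
        # adding another function (or FunctionSum) just extends the formal sum
        more = other.funcs if isinstance(other, FunctionSum) else (other,)
        return FunctionSum(*(self.funcs + more))
    def __call__(self, *args):
        # (f+g)(x, y) -> f(x, y) + g(x, y)
        return sum(f(*args) for f in self.funcs)

With f and g symbolic functions as above, FunctionSum(f, g)(x, y)
evaluates to f(x, y) + g(x, y), i.e. the Maple behaviour quoted earlier.
A real version would live in a proper parent with coercion and printing.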