Hi Dan,

Thanks for reviewing the code. I have incorporated your suggestions and run a spell check on the file, and I will submit the updated version to trac once I get an account. (Michael?)
As a side remark concerning the prec argument: it is passed straight into the sqrt() functions used in the algorithms and is not used in any other way. I am not enough of a Sage expert to know whether this is sensible or whether better options are available by now.

Concerning other algorithms for evaluating these symbols with finite precision arithmetic, it has been my experience that most of them lose precision pretty quickly. In addition to the algorithms used in the paper I have also evaluated a number of routines given to me privately by other physicists, and the conclusion is that they all suffer from the cancellation effects of alternating sums of large numbers, even for moderately low j values from around 15 (a small illustration is in the P.S. below). The only exception is the Schulten and Gordon paper, where the authors went to extraordinary lengths to work out which recursion relations are stable for which parameter values, and even then the results had to be renormalised. I have therefore compared speed only against the routines in that paper and found that the improvement is up to a factor of 40 for Gaunt coefficients, but only about 3-5 for 6j symbols. The latter is probably because the triangle relations make the recursion runs much shorter and therefore faster to evaluate. However, I have not redone these tests recently, nor on 64-bit machines, so I don't know what modern hardware would give.

More recently people have suggested evaluating the symbols using prime factor decomposition, essentially doing exact rational number arithmetic. The problem there is that the factorial representation is always limited, and for very large factorials approximations are used, which completely ruins the result. A nice example is the calculator at Cambridge University: http://www-stone.ch.cam.ac.uk/wigner.shtml (try a 3j with (18,18,18,0,0,0)). I furthermore get the impression that these algorithms are not very fast, but I have not done a detailed speed evaluation.

For finite precision symbol calculations the only code that I have found to be reliable is the Schulten and Gordon one. For specialised applications it might indeed be advisable to store only subsets of the symbols, or to use precise formulas for special parameter values; but these are optimisations and do not provide the generic random access to values that I have tried to address in my paper. I will try to get the storage codes published soon.

regards
Jens
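P.S. A small sketch of the cancellation problem, in plain Python (it runs unchanged inside Sage). This is not the code from the paper or the patch; it is just one textbook form of the Racah single-sum formula for a 3j symbol, evaluated once with exact rationals and once in doubles for the (18,18,18,0,0,0) example mentioned above. The square-root prefactor is kept exact and only converted to a double at the very end, so any discrepancy between the two printed values comes from the alternating sum, and the loss grows quickly with larger j.

from fractions import Fraction
from math import factorial, sqrt

def racah_sum(j1, j2, j3, m1, m2, m3, one):
    """Alternating t-sum of the Racah formula; `one` selects the arithmetic
    (Fraction(1) for exact rationals, 1.0 for double precision)."""
    tmin = max(0, j2 - j3 - m1, j1 - j3 + m2)
    tmax = min(j1 + j2 - j3, j1 - m1, j2 + m2)
    total = 0 * one
    for t in range(tmin, tmax + 1):
        denom = (factorial(t) * factorial(j3 - j2 + t + m1)
                 * factorial(j3 - j1 + t - m2) * factorial(j1 + j2 - j3 - t)
                 * factorial(j1 - t - m1) * factorial(j2 - t + m2))
        total += (-1) ** t * one / denom
    return total

def prefactor_sq(j1, j2, j3, m1, m2, m3):
    """Exact square of the square-root prefactor (a positive rational):
    the triangle coefficient Delta(j1 j2 j3) times the (j +/- m) factorials."""
    delta = Fraction(factorial(j1 + j2 - j3) * factorial(j1 - j2 + j3)
                     * factorial(-j1 + j2 + j3), factorial(j1 + j2 + j3 + 1))
    return delta * (factorial(j1 + m1) * factorial(j1 - m1)
                    * factorial(j2 + m2) * factorial(j2 - m2)
                    * factorial(j3 + m3) * factorial(j3 - m3))

args = (18, 18, 18, 0, 0, 0)
phase = (-1) ** (args[0] - args[1] - args[5])    # overall sign, +1 here
pre = sqrt(float(prefactor_sq(*args)))           # benign, no cancellation here
s_exact = racah_sum(*args, one=Fraction(1))      # exact alternating sum
s_float = racah_sum(*args, one=1.0)              # the same sum in doubles
print("3j via exact sum  :", phase * pre * float(s_exact))
print("3j via double sum :", phase * pre * s_float)
rel = abs((s_float - float(s_exact)) / float(s_exact))
print("relative error introduced by the double-precision sum: %.1e" % rel)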