On May 2, 8:33 am, Xah Lee <[EMAIL PROTECTED]> wrote: <snip>
> As i have indicated in my post, it is non-trivial to implement a
> function that returns the positive angle of a vector. For example, it
> can be done by checking the signs of the coordinate components (4
> cases in total, written as 2 levels of nested ifs or simply 4 ifs),
> or by evaluating Min[Abs[ArcCos[x]], Abs[ArcSin[x]]], or with clever
> use of the dot product, or with ArcTan. It is not trivial to know
> which algorithm is more efficient in general. (This matters, since
> finding the angle of a vector is a basic function that may need to be
> called millions of times, directly or indirectly.) Further, consider
> the inverse trig functions: it is likely that 99.99% of people with a
> PhD in math don't know how they are actually implemented. So, whether
> calling one inverse trig function is more robust or efficient than
> another is an open question. And beyond the algorithmic level, the
> question also depends on how the language actually implements the
> inverse trig functions.

<snip>

"We should forget about small efficiencies, say about 97% of the time:
premature optimization is the root of all evil."

The question you are asking depends a great deal on factors outside the
coding environment, such as the compiler and the hardware. If you are
coding for a specific language/compiler/hardware combination, all you
need do is profile different versions of your code until you're happy
with the results.
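
For what it's worth, here is a minimal Python sketch of two of the
approaches mentioned above: an ArcTan-style version built on
math.atan2, and an ArcCos version with a sign check on y. The function
names are mine, chosen for illustration, and neither is claimed to be
the fastest way.

import math

def positive_angle(x, y):
    """Angle of (x, y) from the positive x-axis, in [0, 2*pi)."""
    # atan2 already handles all four quadrants and the axes, so no
    # explicit sign checking is needed; its result lies in (-pi, pi].
    return math.atan2(y, x) % (2 * math.pi)

def positive_angle_acos(x, y):
    """Same result via acos plus a sign check on y."""
    r = math.hypot(x, y)
    if r == 0:
        raise ValueError("the zero vector has no direction")
    a = math.acos(x / r)            # in [0, pi]
    return a if y >= 0 else 2 * math.pi - a

print(positive_angle(1, 1))    # ~0.785 (pi/4)
print(positive_angle(-1, -1))  # ~3.927 (5*pi/4)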
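
And for the profiling itself, the standard timeit module is enough to
compare candidates on your own interpreter and hardware. The test data
and repetition count below are arbitrary placeholders; adjust them to
match your real workload.

import math
import random
import timeit

# Candidate implementations (same as the sketch above).
def angle_atan2(x, y):
    return math.atan2(y, x) % (2 * math.pi)

def angle_acos(x, y):
    a = math.acos(x / math.hypot(x, y))
    return a if y >= 0 else 2 * math.pi - a

# Arbitrary test set; replace with vectors typical of your application.
data = [(random.uniform(-1, 1), random.uniform(-1, 1))
        for _ in range(10000)]

for fn in (angle_atan2, angle_acos):
    t = timeit.timeit(lambda: [fn(x, y) for x, y in data], number=100)
    print(fn.__name__, round(t, 3), "seconds")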