[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:

> I've been told that Both Fortran and Python are easy to read, and are
> quite useful in creating scientific apps for the number crunching, but
Incidentally, and a bit outside what you asked: if your "number crunching" involves anything beyond linear systems, run, don't walk, to get Forman Acton's "Real Computing Made Real", <http://www.amazon.com/Real-Computing-Made-Engineering-Calculations/dp/0486442217/ref=ed_oe_p/002-1610918-5308009> -- the original hardback was a great buy at $50, and now you can have the Dover paperback at $12.44 -- a _steal_, I tell you!

You don't need any programming background (and there's no code in the book): you need good college-level maths, which is all the book uses (in abundance). It doesn't teach you the stuff you can easily find on the web, such as, say, Newton's method for the numerical solution of general equations: it teaches you to _understand_ the problems you're solving, the foibles of numerical computation, how to massage and better condition your equations so as not to run into those foibles, when to apply the classic methods (those you can easily find on the web) and to what versions of your problems, _and why_, etc, etc. It may be best followed with a cheap calculator (and lots of graph paper, a pencil, and a good eraser:-), though Python+matplotlib will be OK, and Fortran plus some equivalent plotting library should be fine too.

Let me just give you my favorite, _simplest_ example... say we want to find the two values of x that solve the quadratic:

    a x**2 + b x + c = 0

Looks like a primary/middle-school problem, right? We all know:

    x = ( -b +/- sqrt(b**2 - 4 a c) ) / (2 a)

so we verify that b**2 > 4 a c (to have two real solutions), take the square root, then do the sum and the difference of the square root with -b, and divide by 2 a. Simple and flawless, right?!

Whoa, says Acton! What if 4 a c is much smaller than b**2? Then that sqrt will be very close to |b| -- and inevitably, depending on the sign of b, either the sum or the difference will be a difference between two numbers that are VERY close to each other.
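You can watch the damage happen with a few lines of Python (the function name and the specific coefficients here are just my own illustration, not anything from Acton):

```python
import math

def quadratic_naive(a, b, c):
    """Textbook formula: both roots via (-b +/- sqrt(disc)) / (2 a)."""
    d = math.sqrt(b * b - 4 * a * c)
    return ((-b + d) / (2 * a), (-b - d) / (2 * a))

# An ill-conditioned case: b**2 vastly dominates 4*a*c, so
# sqrt(disc) is almost exactly b and (-b + d) cancels badly.
a, b, c = 1.0, 1e8, 1.0
x_small, x_big = quadratic_naive(a, b, c)

# The large-magnitude root comes out fine; the small one (which
# should be about -1e-8) has lost most of its significant digits.
print(x_big)
print(x_small)
```

Run it and compare x_small against -1e-8: it's off by a whopping relative error of order 10**-1, not the 10**-16 you'd hope for from double precision.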
Such operations are the foremost poison of numeric computation! When you take the difference between numbers that are very close, you're dropping significant digits all over the place: one of your roots will be well computed, but the other one may well be a disaster.

Solution: take EITHER the sum OR the difference -- specifically, the one that is actually a sum, not a difference, depending on the sign of b. That gives you one root with good precision. Then, exploit the fact that the product of the roots is c/a: compute the other root by dividing this constant by the root you've just computed, and the precision will be just as good for the other root, too.

Sure, this is a trick you expect to be already coded in any mathematical library, in the function that solves quadratics for you -- exactly because it's so simple and so general. But the key point is, far too many people doing "number crunching" ignore even such elementary issues -- and too many such issues are far too idiosyncratic to specific equations, quadratures, etc, to be built into any mathematical library you may happen to be using.

Acton's book should be mandatory reading for anybody who'll ever need to "crunch numbers"...!-) (I've taught "Numerical Analysis" to undergrad-level engineers, and while I know I've NOT done anywhere near as good a job as Acton does, even forearming my students with a quarter of the techniques so well covered in the book did, in my opinion, make them better "computors", to borrow Acton's term, than any of their peers... _I_ was never taught any of this precious, indispensable stuff in college; I had to pick it all up later, in the school of hard knocks!-)

Alex
--
http://mail.python.org/mailman/listinfo/python-list
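For the curious, here's one way that trick comes out in Python -- a minimal sketch of my own (the function name is made up), assuming two real roots and a != 0:

```python
import math

def quadratic_stable(a, b, c):
    """Solve a*x**2 + b*x + c = 0 (two real roots), avoiding cancellation.

    Form only the "safe" combination of b and sqrt(disc) -- the one
    that is a true addition in magnitude -- then recover the other
    root from the identity x1 * x2 == c / a.
    """
    d = math.sqrt(b * b - 4 * a * c)
    # copysign gives d the same sign as b, so (b + copysign(d, b))
    # never subtracts two nearly-equal numbers.
    q = -0.5 * (b + math.copysign(d, b))
    return (q / a, c / q)

a, b, c = 1.0, 1e8, 1.0
x1, x2 = quadratic_stable(a, b, c)
print(x1)  # the large-magnitude root, about -1e8
print(x2)  # the small root, about -1e-8, now to full precision
```

Plug either root back into a*x**2 + b*x + c and the residual is tiny -- unlike the naive small root, which misses by orders of magnitude.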