Le 14/10/2011 20:08, Greg Sterijevski a écrit :
I looked more closely at the package and I am impressed with the breadth of
material covered. Moreover, this package will do to finance what Mahout is
doing to companies like SAS and SPSS. Having spent a good part of my career
in finance, I expect this package (and others like it) will put a lot of
small 'analytics' companies out of business (as well as people like me).
There is excellent coverage of pricing models (options and otherwise) and of
fitting techniques (as they pertain to term structures, volatility surfaces
and so forth), but the most impressive thing is that all of the functionality
is presented in a single framework. You can integrate what is called front
office functionality (trading, pricing, hedging) with middle and back office
(risk and clearing) operations. Most closed source packages are not this good.

So, a bit amateurish on the 'optimization' front, but the big picture is
very strong. Comparable closed source packages would run upward of 1M USD.

-Greg

On Fri, Oct 14, 2011 at 11:24 AM, Phil Steitz <phil.ste...@gmail.com> wrote:

On 10/14/11 7:47 AM, Emmanuel Bourg wrote:
Hi,

I just saw this article, which might be of interest to some of the
[math] devs. They claim to have found an optimization that is 1.6
times faster than Commons Math:

http://www.opengamma.com/blog/2011/10/14/maths-library-development

Thanks for sharing this. We can certainly look at the specific case
and evaluate pros and cons, but it looks like there may be some
interesting areas for collaboration. The license and CLA-type
agreement look compatible, and it looks like there are [math]
dependencies and maybe some [math] source already used in there, so
it would be great to get some collaboration going.

Yes, this would be a very good thing. Our linear algebra is not fast; we
already know that. I think we need to address several different needs: a
simple API, numerical accuracy, robustness, and speed. We have concentrated
on robustness up to now, mainly because our SVD was really, really bad on
this point. Now that this part seems to be OK, I would very much like to see
some speed improvements (and sparse matrices too, as we also suck at this).
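
For reference, here is a minimal sketch of the SVD code path under
discussion, assuming an API along the lines of
org.apache.commons.math3.linear (class names differ slightly in the 2.x
line); the matrix values are made up purely for illustration.

  import org.apache.commons.math3.linear.Array2DRowRealMatrix;
  import org.apache.commons.math3.linear.RealMatrix;
  import org.apache.commons.math3.linear.SingularValueDecomposition;

  public class SvdSketch {
      public static void main(String[] args) {
          // Small symmetric matrix, values chosen only for illustration.
          RealMatrix m = new Array2DRowRealMatrix(new double[][] {
              { 2.0, 0.0, 1.0 },
              { 0.0, 3.0, 0.0 },
              { 1.0, 0.0, 2.0 }
          });

          // The robustness work mentioned above went into this decomposition;
          // its speed is the open question raised in the thread.
          SingularValueDecomposition svd = new SingularValueDecomposition(m);

          System.out.println("singular values : "
                  + java.util.Arrays.toString(svd.getSingularValues()));
          System.out.println("condition number: " + svd.getConditionNumber());
      }
  }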

Our block matrix implementation seems to improve the computation time of
some operations, but it has a very awkward design and is not backed by any
theoretical study, so it is probably also amateurish with respect to cache
behavior. I designed and wrote it, but I would really be happy to get rid
of it and have it replaced by something more efficient. I would also like
to know how it behaves in the article's benchmark.
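
A rough way to see what the block layout buys (or does not buy) over the
straightforward dense layout would be something along these lines, assuming
the BlockRealMatrix and Array2DRowRealMatrix classes from
org.apache.commons.math3.linear; the matrix size and the crude
System.nanoTime() timing are only illustrative, not a proper benchmark.

  import java.util.Random;
  import org.apache.commons.math3.linear.Array2DRowRealMatrix;
  import org.apache.commons.math3.linear.BlockRealMatrix;
  import org.apache.commons.math3.linear.RealMatrix;

  public class BlockTimingSketch {
      public static void main(String[] args) {
          // Fill a 1000x1000 matrix with random data.
          int n = 1000;
          Random rng = new Random(42);
          double[][] data = new double[n][n];
          for (int i = 0; i < n; i++) {
              for (int j = 0; j < n; j++) {
                  data[i][j] = rng.nextDouble();
              }
          }

          // Same data, two storage layouts: cache-friendly blocks vs. rows.
          RealMatrix block = new BlockRealMatrix(data);
          RealMatrix dense = new Array2DRowRealMatrix(data, true);

          long t0 = System.nanoTime();
          block.multiply(block);
          long t1 = System.nanoTime();
          dense.multiply(dense);
          long t2 = System.nanoTime();

          System.out.println("block multiply: " + (t1 - t0) / 1.0e6 + " ms");
          System.out.println("dense multiply: " + (t2 - t1) / 1.0e6 + " ms");
      }
  }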

So a big +1 for collaboration if the author is ready for this.

Luc


Phil

Emmanuel Bourg


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@commons.apache.org
For additional commands, e-mail: dev-h...@commons.apache.org
