Luc Maisonobe wrote:
>
> Ted Dunning wrote:
>> As a start, I'd like to discourage the use of a solid implementation for
>> SparseReal{Vector, Matrix}... please prefer an interface approach,
>> allowing implementations based on the Templates project:-
>
> Maybe SparseRealMatrix was a bad name and should have been
> SimpleSparseRealMatrix to avoid confusion with other sparse storage and
> dedicated algorithms.
>
I give a +1 for renaming SparseReal{Matrix, Vector}! These names should be
reserved for interfaces (which might be method-less) indicating that the
implementation storage needs to be sparse.
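To sketch what I mean (hypothetical names only, not the real commons-math classes — the `RealMatrix` interface here is a cut-down stand-in): `SparseRealMatrix` would become a method-less marker interface, and the current map-backed class would be just one implementation of it:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch only: an illustration of reserving the name
// "SparseRealMatrix" for an interface rather than a concrete class.

// Minimal matrix contract, standing in for the real RealMatrix interface.
interface RealMatrix {
    double getEntry(int row, int col);
    void setEntry(int row, int col, double value);
}

// Method-less marker interface: implementing it promises nothing beyond
// "the storage is sparse", leaving room for other sparse schemes (CSR,
// CSC, ...) and dedicated algorithms later.
interface SparseRealMatrix extends RealMatrix {
}

// The current map-backed class would then be just one implementation.
class SimpleSparseRealMatrix implements SparseRealMatrix {
    private final Map<Long, Double> entries = new HashMap<Long, Double>();
    private final int columns;

    SimpleSparseRealMatrix(int rows, int columns) {
        this.columns = columns;
    }

    private long key(int row, int col) {
        return (long) row * columns + col;
    }

    public double getEntry(int row, int col) {
        Double v = entries.get(key(row, col));
        return v == null ? 0.0 : v; // absent entries are implicitly zero
    }

    public void setEntry(int row, int col, double value) {
        if (value == 0.0) {
            entries.remove(key(row, col)); // keep only non-zero entries
        } else {
            entries.put(key(row, col), value);
        }
    }
}

public final class SparseSketch {
    public static void main(String[] args) {
        SparseRealMatrix m = new SimpleSparseRealMatrix(1000, 1000);
        m.setEntry(3, 7, 2.5);
        System.out.println(m.getEntry(3, 7)); // prints 2.5
        System.out.println(m.getEntry(0, 0)); // prints 0.0
    }
}
```

Code written against the `SparseRealMatrix` type then keeps working whatever sparse scheme sits behind it.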
Luc Maisonobe wrote:
>
>> Can you say something about the licensing issues if we were to explore,
>> for
>> discussion sake, MTJ being folded into commons-math? MTJ is LGPL while
>> commons has to stay Apache licensed. This licensing issue has been the
>> biggest sticking point in the past.
>
> This is really an issue. Apache projects cannot use LGPL (or GPL) code.
> See http://www.apache.org/legal/resolved.html for the policy.
>
Solved! See other message. Both myself and (more importantly, because he
wrote MTJ) Bjorn are willing to use Apache license.
Luc Maisonobe wrote:
>
> Adding new dependencies, and especially dependencies that involve native
> libraries is a difficult decision that needs lots of discussion.
>
MTJ depends only on netlib-java, *which does not depend on any native libs*.
The option is there to add native optimised libs if the end user wants to.
Luc Maisonobe wrote:
>
> Some benchmarks I did a few weeks ago showed the new [math] linear
> package implementation was quite fast and compared very well with native
> fortran libraries
>
I'm going to call "foul" here :-)
The Java implementation of netlib-java is just as fast as machine-optimised
BLAS/LAPACK... but only for matrices smaller than roughly 1000 x 1000
elements, AND ONLY ON NORMAL DESKTOP MACHINES! The important distinction
here is that hardware exists with crazy optimisations for the BLAS/LAPACK
API and having the option to use that architecture from within Java is a
great bonus. Consider, for example, a dedicated GPU (or FPGA) card which
comes with a BLAS/LAPACK binary.
Additionally, the BLAS/LAPACK API is universally accepted. It would be a
mistake to attempt to reproduce all the brain power and agreement that has
gone into it.
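To make the point concrete, here is the shape of the standard dgemm contract (C := alpha*A*B + beta*C, column-major storage, Fortran-style leading dimensions). The body is a deliberately naive pure-Java sketch that omits the transpose flags of the real BLAS routine; the point is that any optimised backend (native, GPU, FPGA) exposes this same calling convention:

```java
// Naive pure-Java sketch of the standard BLAS dgemm contract:
//   C := alpha * A * B + beta * C
// Matrices are column-major with Fortran-style leading dimensions
// (lda, ldb, ldc). The real dgemm also takes transpose flags, which
// this sketch deliberately omits; it illustrates the calling
// convention, not a complete implementation.
public final class NaiveDgemm {

    static void dgemm(int m, int n, int k, double alpha,
                      double[] a, int lda,
                      double[] b, int ldb,
                      double beta, double[] c, int ldc) {
        for (int j = 0; j < n; j++) {
            for (int i = 0; i < m; i++) {
                double sum = 0.0;
                for (int p = 0; p < k; p++) {
                    // A(i,p) * B(p,j) in column-major layout
                    sum += a[i + p * lda] * b[p + j * ldb];
                }
                c[i + j * ldc] = alpha * sum + beta * c[i + j * ldc];
            }
        }
    }

    public static void main(String[] args) {
        // A = [[1, 2], [3, 4]] stored column-major; B = identity.
        double[] a = {1, 3, 2, 4};
        double[] b = {1, 0, 0, 1};
        double[] c = new double[4];
        dgemm(2, 2, 2, 1.0, a, 2, b, 2, 0.0, c, 2);
        // A * I = A, so c holds {1, 3, 2, 4}
        System.out.println(java.util.Arrays.toString(c));
    }
}
```

Any code written against this signature can be pointed at an optimised library later without changing a line of the caller.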
Luc Maisonobe wrote:
>
> I am aware that we still lack lots of very efficient linear algebra
> algorithms. Joining efforts with you would be a real gain if we can
> solve the licensing issues and avoid new dependencies if possible.
>
I am very keen to consolidate efforts! I think the next step is perhaps for
you to have a look through the MTJ API and create a wish-list of everything
you think would make sense to appear in commons-math. Even if adopted
"wholesale", I would still strongly recommend a review of the API. e.g. some
interfaces extend Serializable (a mistake); I'm not entirely sure how
relevant the distributed package is nowadays; the Matrix Market IO is
difficult to understand/use; there should perhaps be a "factory pattern" for
instantiating matrices/vectors.
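On the last point, a factory could look something like this (a hypothetical sketch, not proposed API; every class name here is made up for illustration). Callers state the size and expected density, and the factory hides the choice of concrete storage:

```java
import java.util.HashMap;
import java.util.Map;

// All names below are hypothetical illustrations of a factory pattern,
// not existing commons-math or MTJ API.
interface RealVector {
    double getEntry(int i);
    void setEntry(int i, double v);
}

// Dense storage: a plain double[].
class DenseRealVector implements RealVector {
    private final double[] data;
    DenseRealVector(int size) { data = new double[size]; }
    public double getEntry(int i) { return data[i]; }
    public void setEntry(int i, double v) { data[i] = v; }
}

// Sparse storage: only non-zero entries are kept.
// (Bounds are unchecked in this sketch.)
class SparseRealVector implements RealVector {
    private final Map<Integer, Double> entries = new HashMap<Integer, Double>();
    SparseRealVector(int size) { }
    public double getEntry(int i) {
        Double v = entries.get(i);
        return v == null ? 0.0 : v;
    }
    public void setEntry(int i, double v) {
        if (v == 0.0) entries.remove(i); else entries.put(i, v);
    }
}

// The factory is the single place that decides which storage to use,
// so user code never names a concrete class.
public final class VectorFactory {
    private VectorFactory() { }

    public static RealVector create(int size, boolean sparse) {
        return sparse ? new SparseRealVector(size) : new DenseRealVector(size);
    }

    public static void main(String[] args) {
        RealVector v = VectorFactory.create(1000000, true);
        v.setEntry(42, 3.14);
        System.out.println(v.getEntry(42)); // prints 3.14
    }
}
```

The same idea would apply to matrices; the gain is that swapping in a better implementation later becomes invisible to callers.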
In the meantime, I recommend holding any new linear classes out of the 2.0
API release. That way we can stabilise the "new" merged API and release it
as part of 2.1.
--
View this message in context:
http://www.nabble.com/commons-math%2C-matrix-toolkits-java-and-consolidation-tp23537813p23574363.html
Sent from the Commons - Dev mailing list archive at Nabble.com.