Hi.
On Wed, Dec 28, 2022 at 16:53, Eric Bresie wrote:
>
> Would forking/cloning from the following and raising a PR work to provide
> the code?
>
> https://github.com/apache/commons-math
Yes.
>
> Although there do appear to be a few PRs waiting to be merged there
> presently.
Well, if there is […]
Would forking/cloning from the following and raising a PR work to provide the
code?
https://github.com/apache/commons-math
Although there do appear to be a few PRs waiting to be merged there
presently.
Eric
On Tue, Dec 27, 2022 at 5:02 PM Gilles Sadowski wrote:
> Hi.
>
> On Tue, Dec 27, 2022 at 23:30, François Laferrière wrote: […]
Hi.
On Tue, Dec 27, 2022 at 23:30, François Laferrière wrote:
>
> Hello,
> I am finally done with a draft version of multivariate optimizers (gradient
> descent, Newton-Raphson, BFGS) that fits the legacy API design and class
> hierarchy.
Thanks!
> The new code is compliant with the checkstyle rules. […]
Hello,
I am finally done with a draft version of multivariate optimizers (gradient
descent, Newton-Raphson, BFGS) that fits the legacy API design and class
hierarchy.
The new code is compliant with the checkstyle rules.
My question is: "what's next?"
I suppose that I should have access to a specifi[…]
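For reference, the standard textbook updates behind the three methods named
above are as follows (generic notation; the draft code may of course differ in
details such as step-size control and convergence tests):

    x_{k+1} = x_k - \gamma \nabla f(x_k)                     (gradient descent)
    x_{k+1} = x_k - H_f(x_k)^{-1} \nabla f(x_k)              (Newton-Raphson)
    B_{k+1} = B_k + (y_k y_k^T)/(y_k^T s_k)
                  - (B_k s_k s_k^T B_k)/(s_k^T B_k s_k)      (BFGS Hessian update)

    with s_k = x_{k+1} - x_k and y_k = \nabla f(x_{k+1}) - \nabla f(x_k).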
Hello.
I haven't looked at the code, but thanks for your interest in contributing
to Commons Math.
The more straightforward path is indeed to adapt your code to what is
currently in the "...legacy" module. However, you could also consider
the other avenue, i.e. a design specific to this family of […]
Hello Alex,
I checked the architecture of the MultivariateOptimizer family to compare it
to what I have done so far. I think that what I have done can be refactored to
fit into the general framework as an extension of GradientMultivariateOptimizer
(a sketch follows below), even though the main differences are:
- There is no ne[…]
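To make that concrete, here is a minimal sketch of such a subclass. It assumes
the class and method names inherited from the math3-era hierarchy carried over
into the legacy module; the fixed step size and the class name
GradientDescentOptimizer are illustrative, not part of the actual draft:

    import org.apache.commons.math4.legacy.optim.ConvergenceChecker;
    import org.apache.commons.math4.legacy.optim.PointValuePair;
    import org.apache.commons.math4.legacy.optim.nonlinear.scalar.GoalType;
    import org.apache.commons.math4.legacy.optim.nonlinear.scalar.GradientMultivariateOptimizer;

    // Hypothetical fixed-step gradient descent on top of the legacy hierarchy.
    public class GradientDescentOptimizer extends GradientMultivariateOptimizer {
        private final double stepSize; // illustrative fixed learning rate

        public GradientDescentOptimizer(ConvergenceChecker<PointValuePair> checker,
                                        double stepSize) {
            super(checker);
            this.stepSize = stepSize;
        }

        @Override
        protected PointValuePair doOptimize() {
            // Step against the gradient when minimizing, along it when maximizing.
            final double sign = getGoalType() == GoalType.MINIMIZE ? -1 : 1;
            final double[] x = getStartPoint().clone();
            PointValuePair current = new PointValuePair(x.clone(), computeObjectiveValue(x));
            while (true) {
                incrementIterationCount();
                final double[] g = computeObjectiveGradient(x);
                for (int i = 0; i < x.length; i++) {
                    x[i] += sign * stepSize * g[i];
                }
                final PointValuePair next = new PointValuePair(x.clone(), computeObjectiveValue(x));
                if (getConvergenceChecker().converged(getIterations(), current, next)) {
                    return next;
                }
                current = next;
            }
        }
    }

Like the existing optimizers, it would be driven through
optimize(OptimizationData...) with an ObjectiveFunction,
ObjectiveFunctionGradient, InitialGuess, GoalType and MaxEval.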
Hi,
Thanks for the interest in Commons Math.
Currently all the optimisation code is in commons-math-legacy. I think
the gradient-based methods would fit in:
org.apache.commons.math4.legacy.optim.nonlinear.scalar.gradient
Can your implementations be adapted to work with the existing
interfaces?
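For reference, driving one of the optimizers already in that package through
the common optimize(OptimizationData...) entry point looks roughly like this
(package names assume the math4-legacy layout and constructor signatures as in
the math3-era API; the objective function is just a toy paraboloid):

    import org.apache.commons.math4.legacy.analysis.MultivariateFunction;
    import org.apache.commons.math4.legacy.analysis.MultivariateVectorFunction;
    import org.apache.commons.math4.legacy.optim.InitialGuess;
    import org.apache.commons.math4.legacy.optim.MaxEval;
    import org.apache.commons.math4.legacy.optim.PointValuePair;
    import org.apache.commons.math4.legacy.optim.SimpleValueChecker;
    import org.apache.commons.math4.legacy.optim.nonlinear.scalar.GoalType;
    import org.apache.commons.math4.legacy.optim.nonlinear.scalar.ObjectiveFunction;
    import org.apache.commons.math4.legacy.optim.nonlinear.scalar.ObjectiveFunctionGradient;
    import org.apache.commons.math4.legacy.optim.nonlinear.scalar.gradient.NonLinearConjugateGradientOptimizer;

    public class OptimExample {
        public static void main(String[] args) {
            // Objective: f(x, y) = (x - 1)^2 + (y - 2)^2, minimum at (1, 2).
            MultivariateFunction f =
                p -> Math.pow(p[0] - 1, 2) + Math.pow(p[1] - 2, 2);
            // Analytic gradient of f.
            MultivariateVectorFunction grad =
                p -> new double[] {2 * (p[0] - 1), 2 * (p[1] - 2)};

            NonLinearConjugateGradientOptimizer optimizer =
                new NonLinearConjugateGradientOptimizer(
                    NonLinearConjugateGradientOptimizer.Formula.FLETCHER_REEVES,
                    new SimpleValueChecker(1e-10, 1e-10));

            PointValuePair result = optimizer.optimize(
                new MaxEval(1000),
                new ObjectiveFunction(f),
                new ObjectiveFunctionGradient(grad),
                GoalType.MINIMIZE,
                new InitialGuess(new double[] {0, 0}));

            System.out.println("min at (" + result.getPoint()[0] + ", "
                + result.getPoint()[1] + ")");
        }
    }

A new optimizer that plugs into the same OptimizationData pattern would be
usable as a drop-in alongside the existing ones.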
Hello,
Sorry, the previous message was a mess.
Based on Apache Commons Math, I have implemented some commonplace optimization
algorithms that could be integrated into ACM. These include:
- Gradient Descent (en.wikipedia.org/wiki/Gradient_descent)
- Newton-Raphson
(https://en.wikipedia.or[…]
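As a self-contained illustration of the first two methods, here is generic
textbook code (plain Java, one-dimensional for brevity; this is not the
contributed implementation):

    import java.util.function.DoubleUnaryOperator;

    public class DescentDemo {
        /** Fixed-step gradient descent: x <- x - gamma * f'(x). */
        static double gradientDescent(DoubleUnaryOperator df, double x,
                                      double gamma, int iters) {
            for (int i = 0; i < iters; i++) {
                x -= gamma * df.applyAsDouble(x);
            }
            return x;
        }

        /** Newton-Raphson for optimization: x <- x - f'(x) / f''(x). */
        static double newtonRaphson(DoubleUnaryOperator df, DoubleUnaryOperator d2f,
                                    double x, int iters) {
            for (int i = 0; i < iters; i++) {
                x -= df.applyAsDouble(x) / d2f.applyAsDouble(x);
            }
            return x;
        }

        public static void main(String[] args) {
            // f(x) = (x - 3)^2: f'(x) = 2(x - 3), f''(x) = 2; minimum at x = 3.
            System.out.println(gradientDescent(x -> 2 * (x - 3), 0.0, 0.1, 100));
            System.out.println(newtonRaphson(x -> 2 * (x - 3), x -> 2.0, 0.0, 5));
        }
    }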