I would make the convergence criterion an optional parameter rather than one
that is mandatory for all Predictors. If you implement an iterative
Predictor, then you can define a setConvergenceCriterion method or pass the
convergence criterion to the Predictor via the ParameterMap.
You can also open a JIRA issue for this.
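A minimal sketch of that idea, assuming a key/value parameter container. This is hypothetical and in Python purely for illustration (FlinkML itself is Scala, and `ParameterMap`, `set_convergence_criterion`, and the dummy update step here are stand-ins, not the real API): the criterion is an optional parameter, and the Predictor falls back to a fixed iteration count when none is supplied.

```python
# Hypothetical sketch, NOT FlinkML's actual API: a Predictor that treats the
# convergence criterion as an optional parameter with a fixed-iteration fallback.

class ParameterMap:
    """Minimal stand-in for a key/value parameter container."""
    def __init__(self):
        self._params = {}

    def add(self, key, value):
        self._params[key] = value
        return self

    def get(self, key, default=None):
        return self._params.get(key, default)


class IterativePredictor:
    CONVERGENCE_CRITERION = "convergenceCriterion"

    def __init__(self):
        self.parameters = ParameterMap()

    def set_convergence_criterion(self, criterion):
        # Optional setter; non-iterative Predictors simply never expose it.
        self.parameters.add(self.CONVERGENCE_CRITERION, criterion)
        return self

    def fit(self, data, max_iterations=10):
        criterion = self.parameters.get(self.CONVERGENCE_CRITERION)
        model = 0.0
        for i in range(max_iterations):
            new_model = model + 1.0 / (i + 1)  # dummy update step
            # Stop early only if a criterion was actually supplied.
            if criterion is not None and criterion(model, new_model):
                return new_model
            model = new_model
        return model


predictor = IterativePredictor()
predictor.set_convergence_criterion(lambda old, new: abs(new - old) < 0.3)
result = predictor.fit(data=None)
```

Predictors that never set the parameter keep their current behavior, so the criterion stays non-mandatory.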
>
> I think Sachin wants to provide something similar to the LossFunction but
> for the convergence criterion. This would mean that the user can specify a
> convergence calculator, for example to the optimization framework, which is
> used from within an iterateWithTermination call
>
@Till, yes.
>
> Am I correct to assume that by "user" you mean library developers here?
> Regular users who just use the API are unlikely to write their own
> convergence
> criterion function, yes? They would just set a value, for example the
> relative
> error change in gradient descent, perhaps after choosing
I think Sachin wants to provide something similar to the LossFunction but
for the convergence criterion. This would mean that the user can specify a
convergence calculator, for example to the optimization framework, which is
used from within an iterateWithTermination call.
I think this is a good idea.
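To make the proposal concrete, here is a hedged sketch of the semantics being described (names hypothetical, Python rather than Scala, and not Flink's real API): the iteration keeps running as long as the user-supplied convergence calculator produces a non-empty "termination set", mirroring how Flink's iterateWithTermination stops once the termination criterion data set becomes empty.

```python
# Loose sketch of the iterateWithTermination idea, NOT Flink's actual API:
# `step_and_termination` returns the next state plus a "termination set";
# iteration stops once that set is empty.

def iterate_with_termination(state, step_and_termination, max_iterations=100):
    for _ in range(max_iterations):
        state, termination = step_and_termination(state)
        if not termination:
            break
    return state


def convergence_calculator(old, new, tol=1e-3):
    # Non-empty list => keep iterating; empty list => converged.
    return [] if abs(old - new) < tol else [abs(old - new)]


def step(w):
    # Dummy update: halve the "weight" each iteration.
    new_w = w / 2.0
    return new_w, convergence_calculator(w, new_w)


final = iterate_with_termination(16.0, step)
```

The convergence calculator is the pluggable piece here, analogous to how a LossFunction is plugged into the optimization framework.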
>
> The point is to provide the user with the solution before an iteration and
>
Am I correct to assume that by "user" you mean library developers here?
Regular users who just use the API are unlikely to write their own
convergence criterion function, yes? They would just set a value, for
example the relative error change in gradient descent.
Sure.
Usually, the convergence criterion can be user defined. For example, for a
linear regression problem, the user might want to run the training until the
relative change in squared error falls below a specific threshold, or until
the weights shift by less than some relative or absolute amount.
Similarly,
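The two criteria mentioned above can be written down in a few lines. This is only an illustration with hypothetical helper names, not part of any existing framework:

```python
# Two user-defined convergence checks for a linear regression setting
# (illustrative only; helper names are hypothetical).

def relative_sse_change(sse_old, sse_new):
    """Relative change in the sum of squared errors between iterations."""
    return abs(sse_old - sse_new) / sse_old


def max_relative_weight_shift(w_old, w_new):
    """Largest relative change of any single weight between iterations."""
    return max(abs(a - b) / max(abs(a), 1e-12) for a, b in zip(w_old, w_new))


# SSE moved from 100.0 to 99.9 -> a 0.1% change, below a 1% threshold.
error_settled = relative_sse_change(100.0, 99.9) < 1e-2

# Weights barely moved -> below a 1% per-weight shift threshold.
weights_settled = max_relative_weight_shift([1.0, 2.0], [1.001, 2.0]) < 1e-2
```

Either check (or a combination) could serve as the pluggable criterion; which one is appropriate depends on the algorithm.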
Hello Sachin,
could you share the motivation behind this? The iterateWithTermination
function provides us with a means of checking for convergence during
iterations, and checking for convergence depends highly on the algorithm
being implemented. It could be the relative change in error, it could
d
Hi all,
I'm trying to work out a general convergence framework for machine learning
algorithms that rely on iterations for optimization. For now, I can think of
three kinds of convergence functions which might be useful.
1. converge(data, modelBeforeIteration, modelAfterIteration)
2. converge(data,