If you can point me to any previous benchmarks that have been done, I would
like to try smoothing and see whether LBFGS convergence improves without
impacting the linear SVC loss.
Thanks.
Deb
On Dec 16, 2017 7:48 PM, "Debasish Das" wrote:
Hi Weichen,
Traditionally SVMs are solved with quadratic programming solvers, which is
most likely why this idea is not so popular. But since in MLlib we are using
smooth optimization methods for linear SVM, the idea of smoothing the SVM
loss becomes relevant.
The paper also mentions kernel svm using the s
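To make the smoothing idea concrete, here is a minimal sketch of what I have
in mind, in Scala since that is what MLlib uses. The softplus/log-sum-exp form
and the temperature parameter t are my assumptions for illustration, not
necessarily what the paper proposes:

    import scala.math.{abs, exp, log, max}

    object SmoothedHinge {
      // Plain hinge on the margin term z = 1 - (2y - 1) * f_w(x); non-smooth at z = 0.
      def hinge(z: Double): Double = max(0.0, z)

      // Softplus / log-sum-exp smoothing of max(0, z): t * log(1 + exp(z / t)).
      // Smooth for any t > 0, and converges pointwise to max(0, z) as t -> 0.
      def smoothedHinge(z: Double, t: Double = 0.1): Double = {
        val u = z / t
        // numerically stable log(1 + exp(u)) = max(u, 0) + log(1 + exp(-|u|))
        t * (max(u, 0.0) + log(1.0 + exp(-abs(u))))
      }

      // The derivative w.r.t. z is the logistic sigmoid of z / t, a smooth
      // stand-in for the {0, 1} jump in the hinge subgradient.
      def smoothedHingeGrad(z: Double, t: Double = 0.1): Double =
        1.0 / (1.0 + exp(-z / t))
    }

The trade-off is the usual one: a smaller t tracks the hinge more closely but
makes the gradient change more sharply near the kink, which can hurt
conditioning.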
Hi Deb,
Which library or paper did you find that uses this loss function for SVM?
But I would prefer the implementation in LIBLINEAR, which uses a coordinate
descent optimizer.
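For reference, the dual coordinate descent update LIBLINEAR applies to
L1-loss linear SVM looks roughly like the sketch below (after Hsieh et al.,
ICML 2008). The names are mine, and shrinking, random permutation, and the
bias term are omitted, so this is an illustration rather than LIBLINEAR's
actual code:

    // Dual coordinate descent for L1-loss linear SVM on dense data.
    // Labels must be in {-1, +1}; returns the primal weight vector w.
    def dualCoordinateDescent(
        xs: Array[Array[Double]],  // feature rows
        ys: Array[Double],         // labels in {-1, +1}
        c: Double,                 // regularization parameter C
        iters: Int): Array[Double] = {
      val n = xs.length
      val d = xs(0).length
      val w = new Array[Double](d)
      val alpha = new Array[Double](n)
      val qii = xs.map(x => x.map(v => v * v).sum)  // diagonal of Q: ||x_i||^2

      for (_ <- 0 until iters; i <- 0 until n if qii(i) > 0) {
        // Gradient of the dual objective in coordinate i: y_i * (w . x_i) - 1
        var dot = 0.0
        var j = 0
        while (j < d) { dot += w(j) * xs(i)(j); j += 1 }
        val g = ys(i) * dot - 1.0
        // Single-variable Newton step, clipped to the box constraint [0, C]
        val newAlpha = math.min(math.max(alpha(i) - g / qii(i), 0.0), c)
        val delta = newAlpha - alpha(i)
        if (delta != 0.0) {
          alpha(i) = newAlpha
          j = 0
          while (j < d) { w(j) += delta * ys(i) * xs(i)(j); j += 1 }
        }
      }
      w
    }

Because each update touches only one dual variable and maintains w
incrementally, there is no non-smoothness to work around, which is part of why
this formulation is attractive.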
Thanks.
On Sun, Dec 17, 2017 at 6:52 AM, Yanbo Liang wrote:
Hello Deb,
Optimizing a non-smooth function with LBFGS really should be considered carefully.
Is there any literature showing that changing max to soft-max behaves well?
I'm more than happy to see some benchmarks if you have any.
+ Yuhao, who did a similar effort in this PR:
https://github.com/apac
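One concrete way to run such a benchmark would be a harness like the sketch
below against breeze's LBFGS, the optimizer MLlib wraps internally. The
smoothed loss, the temperature t, and all the names here are assumptions for
illustration, not part of LinearSVC:

    import breeze.linalg.DenseVector
    import breeze.optimize.{DiffFunction, LBFGS}

    // Hypothetical harness: minimize a softplus-smoothed hinge loss with the
    // breeze LBFGS that MLlib wraps. data holds (label in {0, 1}, features).
    def fitSmoothedSvm(
        data: Seq[(Double, DenseVector[Double])],
        t: Double = 0.1,
        maxIter: Int = 100): DenseVector[Double] = {
      val dim = data.head._2.length
      val loss = new DiffFunction[DenseVector[Double]] {
        def calculate(w: DenseVector[Double]): (Double, DenseVector[Double]) = {
          var f = 0.0
          val grad = DenseVector.zeros[Double](dim)
          for ((y, x) <- data) {
            val yy = 2.0 * y - 1.0        // map {0, 1} labels to {-1, +1}
            val z = 1.0 - yy * (w dot x)  // hinge margin term
            val u = z / t
            // t * log(1 + exp(z / t)), computed stably
            f += t * (math.max(u, 0.0) + math.log1p(math.exp(-math.abs(u))))
            val s = 1.0 / (1.0 + math.exp(-u))  // sigmoid(z / t) = d(loss)/dz
            grad -= x * (s * yy)                // chain rule: dz/dw = -yy * x
          }
          (f / data.size, grad / data.size.toDouble)
        }
      }
      new LBFGS[DenseVector[Double]](maxIter, 10, 1e-6)
        .minimize(loss, DenseVector.zeros[Double](dim))
    }

Comparing iteration counts and the final (unsmoothed) hinge objective of this
against the current LinearSVC path would be one way to produce the benchmark
numbers discussed above.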
Hi,
I looked into the LinearSVC flow and found the gradient for hinge as
follows:
Our loss function with {0, 1} labels is max(0, 1 - (2y - 1) * f_w(x)).
Therefore the gradient is -(2y - 1)*x when the margin term
1 - (2y - 1) * f_w(x) is positive, and 0 otherwise.
max is a non-smooth function.
Did we try using a ReLU/softmax function to smooth the hinge loss?
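Spelling that out as code (a small sketch with my own naming, using the same
{0, 1} label convention):

    import breeze.linalg.DenseVector

    // Hinge loss and subgradient for loss = max(0, 1 - (2y - 1) * f_w(x)),
    // with f_w(x) = w . x and y in {0, 1}. The kink at margin == 0 is exactly
    // the non-smooth point a quasi-Newton method like LBFGS has to cope with.
    def hingeLossAndGrad(
        w: DenseVector[Double],
        x: DenseVector[Double],
        y: Double): (Double, DenseVector[Double]) = {
      val yy = 2.0 * y - 1.0               // {0, 1} -> {-1, +1}
      val margin = 1.0 - yy * (w dot x)
      if (margin > 0.0) (margin, x * (-yy))            // active: -(2y - 1) * x
      else (0.0, DenseVector.zeros[Double](x.length))  // inactive: zero gradient
    }

Replacing the hard max with a softmax/softplus surrogate would smooth out
exactly this branch.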