[ https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15315212#comment-15315212 ]

ASF GitHub Bot commented on FLINK-1979:
---------------------------------------

Github user skavulya commented on the issue:

    https://github.com/apache/flink/pull/1985
  
    @chiwanpark Decoupling the gradient descent step is complicated for L1
    regularization because we use the proximal gradient method, which applies
    soft thresholding after the gradient descent step. I left the
    regularization penalty as-is. I am thinking of adding an additional method
    that adds the regularization penalty to the gradient without performing
    the gradient descent step, but I will do that in the L-BFGS PR instead.
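
For illustration, here is a minimal Scala sketch of the soft-thresholding
(proximal) step that follows the plain gradient descent update under L1
regularization; the object and method names are placeholders, not the actual
FlinkML API:

    // Proximal operator of stepSize * regParam * ||w||_1: shrinks each weight
    // toward zero and sets small weights exactly to zero, which is what makes
    // L1-regularized solutions sparse.
    object ProximalStepSketch {

      // Scalar soft-thresholding operator.
      def softThreshold(w: Double, threshold: Double): Double =
        if (w > threshold) w - threshold
        else if (w < -threshold) w + threshold
        else 0.0

      // Apply the proximal step elementwise after an unregularized gradient update.
      def proxL1(weights: Array[Double], regParam: Double, stepSize: Double): Array[Double] =
        weights.map(w => softThreshold(w, regParam * stepSize))

      def main(args: Array[String]): Unit = {
        // Weights right after a plain (unregularized) gradient descent step.
        val updated = Array(0.75, -0.1, 0.5, -1.0)
        val sparse = proxL1(updated, regParam = 0.5, stepSize = 0.5)
        println(sparse.mkString(", ")) // prints: 0.5, 0.0, 0.25, -0.75
      }
    }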


> Implement Loss Functions
> ------------------------
>
>                 Key: FLINK-1979
>                 URL: https://issues.apache.org/jira/browse/FLINK-1979
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Johannes Günther
>            Assignee: Johannes Günther
>            Priority: Minor
>              Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a 
> pluggable implementation of a loss function and its first derivative.
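
For reference, a minimal Scala sketch of what such a pluggable loss function
could look like; the trait and object names below are illustrative, not the
actual FlinkML interfaces:

    // A loss function exposes its value and its first derivative so that
    // gradient-based optimizers like SGD can be written against one interface.
    trait LossFunctionSketch {
      // Loss for a single example, given the prediction and the true label.
      def loss(prediction: Double, label: Double): Double

      // Derivative of the loss with respect to the prediction; an optimizer
      // multiplies this by the feature vector to obtain the gradient.
      def derivative(prediction: Double, label: Double): Double
    }

    // Squared loss: L(p, y) = 0.5 * (p - y)^2, with dL/dp = p - y.
    object SquaredLossSketch extends LossFunctionSketch {
      def loss(prediction: Double, label: Double): Double = {
        val diff = prediction - label
        0.5 * diff * diff
      }

      def derivative(prediction: Double, label: Double): Double =
        prediction - label
    }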



