[ 
https://issues.apache.org/jira/browse/FLINK-1979?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14537855#comment-14537855
 ] 

ASF GitHub Bot commented on FLINK-1979:
---------------------------------------

Github user thvasilo commented on the pull request:

    https://github.com/apache/flink/pull/656#issuecomment-100884364
  
    Thank you Johannes. The optimization code has been merged to master now, 
so could you rebase your branch onto the latest master so we can look at the 
changes in isolation?
    
    You can take a look at the [How to 
contribute](http://flink.apache.org/how-to-contribute.html#contributing-code-&-documentation)
 guide on how to do this. The merge commits currently in your branch make the 
code hard to review.
    
    Also, please make sure all your classes have docstrings; you can use the 
docstring for SquaredLoss as an example (one sentence is usually enough).
    
    Documentation is always welcome, of course, so if you want to add more 
details to the loss functions section of the ML documentation 
(docs/libs/ml/optimization.md), feel free to do so in this PR.
    
    Let me know if you run into any problems.


> Implement Loss Functions
> ------------------------
>
>                 Key: FLINK-1979
>                 URL: https://issues.apache.org/jira/browse/FLINK-1979
>             Project: Flink
>          Issue Type: Improvement
>          Components: Machine Learning Library
>            Reporter: Johannes Günther
>            Assignee: Johannes Günther
>            Priority: Minor
>              Labels: ML
>
> For convex optimization problems, optimizer methods like SGD rely on a 
> pluggable implementation of a loss function and its first derivative.
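The pluggable design described in the issue can be sketched as an interface exposing the loss value and its first derivative with respect to the prediction, which is all an SGD step needs. This is an illustrative sketch only (the names `LossFunction`, `loss`, and `derivative` are assumptions, not Flink's actual API), shown here with squared loss as in the SquaredLoss example mentioned above:

```java
// Hypothetical sketch of a pluggable loss function for SGD.
// Interface and method names are illustrative, not Flink's real API.
interface LossFunction {
    // Value of the loss L(p, y) for prediction p and label y.
    double loss(double prediction, double label);

    // First derivative dL/dp, used by SGD to form the gradient step.
    double derivative(double prediction, double label);
}

// Squared loss: L(p, y) = (p - y)^2 / 2, so dL/dp = p - y.
class SquaredLoss implements LossFunction {
    public double loss(double prediction, double label) {
        double diff = prediction - label;
        return 0.5 * diff * diff;
    }

    public double derivative(double prediction, double label) {
        return prediction - label;
    }
}

public class Main {
    public static void main(String[] args) {
        LossFunction sq = new SquaredLoss();
        // One SGD update for a single example would then be:
        // weight -= learningRate * sq.derivative(prediction, label) * feature
        System.out.println(sq.loss(3.0, 1.0));       // 2.0
        System.out.println(sq.derivative(3.0, 1.0)); // 2.0
    }
}
```

Because the optimizer only calls `loss` and `derivative`, swapping in a different convex loss (e.g. logistic or hinge) requires no change to the SGD loop itself.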



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
