[ https://issues.apache.org/jira/browse/FLINK-2157?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14618323#comment-14618323 ]

ASF GitHub Bot commented on FLINK-2157:
---------------------------------------

Github user tillrohrmann commented on a diff in the pull request:

    https://github.com/apache/flink/pull/871#discussion_r34132488
  
    --- Diff: flink-staging/flink-ml/src/main/scala/org/apache/flink/ml/recommendation/ALS.scala ---
    @@ -425,6 +434,34 @@ object ALS {
         }
       }
     
    +  implicit val evaluateRatings = new EvaluateDataSetOperation[ALS, (Int, Int, Double), Double] {
    +    override def evaluateDataSet(
    +        instance: ALS,
    +        evaluateParameters: ParameterMap,
    +        testing: DataSet[(Int, Int, Double)]): DataSet[(Double, Double)] = {
    --- End diff ---
    
    The return type should go on the next line.
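
For reference, this is presumably the formatting being asked for, following the FlinkML convention of placing the return type of a multi-line signature on its own line (a sketch, not the committed code):

    override def evaluateDataSet(
        instance: ALS,
        evaluateParameters: ParameterMap,
        testing: DataSet[(Int, Int, Double)])
      : DataSet[(Double, Double)] = {
      // body unchanged
    }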


> Create evaluation framework for ML library
> ------------------------------------------
>
>                 Key: FLINK-2157
>                 URL: https://issues.apache.org/jira/browse/FLINK-2157
>             Project: Flink
>          Issue Type: New Feature
>          Components: Machine Learning Library
>            Reporter: Till Rohrmann
>            Assignee: Theodore Vasiloudis
>              Labels: ML
>             Fix For: 0.10
>
>
> Currently, FlinkML lacks the means to evaluate the performance of trained models. 
> It would be great to add some {{Evaluators}} which calculate a score from the 
> true and predicted labels. This could also be used during cross validation to 
> choose the right hyperparameters.
> Possible scores include the F1 score [1], the zero-one loss, etc.
> Resources
> [1] [http://en.wikipedia.org/wiki/F1_score]
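
As a rough illustration of the kind of {{Evaluator}} described above (a hedged sketch, not code from the issue or the pull request; the zeroOneLoss helper is hypothetical), a zero-one loss can be computed from a DataSet of (true label, prediction) pairs like the one returned by evaluateDataSet:

    import org.apache.flink.api.scala._
    
    // Hypothetical sketch: fraction of predictions that differ from the truth.
    def zeroOneLoss(pairs: DataSet[(Double, Double)]): DataSet[Double] = {
      pairs
        .map { case (truth, prediction) =>
          (if (truth == prediction) 0.0 else 1.0, 1L)
        }
        .reduce((a, b) => (a._1 + b._1, a._2 + b._2))   // sum errors and count
        .map { case (errors, count) => errors / count } // mean error rate
    }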


